SystemSettingsView #include <MenuItem.h> Detailed Description Provides a specific item in the list of modules or categories. This provides convenient access to the list of modules, providing information about them such as name, module information and its service object. This is created automatically by System Settings, is shared among all plugins, and so should not be modified under any circumstances. System Settings creates it in a tree-like manner, with categories containing subcategories and modules, and subcategories repeating this. The service object must be set, unless it is the top-level item; otherwise using applications will crash when attempting to sort the children by weight. Definition at line 50 of file MenuItem.h. Constructor & Destructor Documentation - Note - Will not provide keywords, name, or a module item until a service has been set. - Parameters - Definition at line 48 of file MenuItem.cpp. Destroys a MenuItem, including all children, the service object and the module information. Definition at line 59 of file MenuItem.cpp. Member Function Documentation Convenience function which provides the System Settings category of the current item. - Returns - The category of the item, if the service object has been set. Definition at line 111 of file MenuItem.cpp. Provides the MenuItem for the child at the specified index. - Parameters - Definition at line 70 of file MenuItem.cpp. Provides a list of all the children of this item. - Returns - The list of children this item has. Definition at line 91 of file MenuItem.cpp. Provides the KDE control module information item, which can be used to load control modules by the ModuleView. - Returns - The control module information object of the item, if the service object has been set. Definition at line 101 of file MenuItem.cpp. Returns the list of keywords, which is used for searching the list of categories and modules. - Note - The parent items share all the keywords of their children.
- Returns - The list of keywords the item has. Definition at line 75 of file MenuItem.cpp. Provides information on which type the current item is. - Returns - true if it is a category. - false if it is not a category. Definition at line 121 of file MenuItem.cpp. Convenience function which provides the name of the current item. - Returns - The name of the item, if the service object has been set. Definition at line 106 of file MenuItem.cpp. Returns the parent of this item. Definition at line 86 of file MenuItem.cpp. Returns the service object of this item, which contains useful information about it. - Returns - The service object of this item, if it has been set. Definition at line 96 of file MenuItem.cpp. Sets the service object, which is used to provide the module information, name and keywords. Applications will crash if it is not set, unless it is the top-level item. - Parameters - Definition at line 126 of file MenuItem.cpp. Sorts the children depending on the value of "X-KDE-Weight" in the desktop files of the category or module. Definition at line 65 of file MenuItem.cpp. Provides the weight of the current item, as determined by its service. If the service does not specify a weight, it is 100. - Returns - The weight of the service. Definition at line 116 of file MenuItem.
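The weight-based sort described above (children ordered by "X-KDE-Weight", with 100 as the default when the service specifies none) can be illustrated with a self-contained sketch. Note this is a hypothetical stand-in type, not the real KDE MenuItem API; only the tree-plus-weight idea is taken from the docs.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for a MenuItem node; the real class wraps a
// service object that supplies the weight ("X-KDE-Weight").
struct MenuEntry {
    std::string name;
    int weight = 100;  // default weight when the service specifies none
    std::vector<MenuEntry> children;

    // Sort direct children by weight, then recurse, since
    // "subcategories repeat this" down the tree.
    void sortChildrenByWeight() {
        std::stable_sort(children.begin(), children.end(),
                         [](const MenuEntry &a, const MenuEntry &b) {
                             return a.weight < b.weight;
                         });
        for (auto &child : children)
            child.sortChildrenByWeight();
    }
};
```

The crash warning in the docs corresponds to the weight lookup here: if no service is attached, there is nothing to read a weight from, which is why only the top-level item may omit it.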
https://api.kde.org/4.x-api/kde-workspace-apidocs/systemsettings/core/html/classMenuItem.html
MODGPS MODGPS is an easy to use library that allows you to get basic time/date and location information from an attached GPS module.

Connecting up the GPS module¶

The MODGPS library only requires you to connect the TX serial out from your GPS module to one of the three available RX serial inputs on the Mbed. Optionally, if your GPS module has a One Pulse Per Second (1PPS) output you can connect this to any of the Mbed pins that is supported as an InterruptIn. Without this, the library will return the time to within +/-0.5 of a second. However, connecting the 1PPS signal if available will increase the accuracy to +/-0.001 of a second. Note, the 1PPS output is sometimes incompatible with the Mbed input. In this case you may need to buffer the signal using a transistor or FET. Buffering the signal will often turn a default 1PPS active leading edge into an active trailing edge. More on this later.

Using the MODGPS library¶

First, import the library into the online compiler. Import library MODGPS Allows for a GPS module to be connected to a serial port and exposes an easy to use API to get the GPS data. New feature, added Mbed/LPC17xx RTC synchronisation. Once imported, change your project's main.cpp to:-

#define COMPILE_EXAMPLE_CODE_MODGPS
#define PCBAUD 115200
#define GPSRX p25
#define GPSBAUD 9600
//#define PPSPIN p29
#include "/MODGPS/example1.cpp"

Note, please set PCBAUD to whatever baud rate you normally use to connect your Mbed to your PC/Mac/Linux box. GPSBAUD is set to 9600, which is common among many GPS modules. If yours is different, set that too. The PC serial isn't required, but you will get more information with it. Optionally define PPSPIN if you have connected it. If it's not connected leave the #define commented out, otherwise remove the leading comment. Now switch everything on. LED1 should begin slowly flashing. If you connected the optional 1PPS signal then LED2 will flash once per second.
When the library receives a GPS NMEA RMC data packet LED3 flashes, and when a NMEA GGA packet is received LED4 flashes. If you have a PC connected to the USB serial port on the Mbed, additional data is displayed.

Using the MODGPS library¶

First, create an instance of the GPS object and pass in the RX pin it's connected to:-

#include "mbed.h"
#include "GPS.h"
GPS gps(NC, p25);

Getting data from the MODGPS library is very simple. There are several ways, the simplest of them shown below.

- To get location data:-

double latitude = gps.latitude();
double longitude = gps.longitude();
double altitude = gps.altitude();

- Or, to get all data together:-

GPS_Geodetic g;
gps.geodetic(&g);
double latitude = g.lat;
double longitude = g.lon;
double altitude = g.alt;

- To get time data:-

GPS_Time t;
gps.timeNow(&t);
pc.printf("%02d:%02d:%02d %02d/%02d/%04d\r\n", t.hour, t.minute, t.second, t.day, t.month, t.year);

The API contains many more functions such as Julian Date, Sidereal time and more. Follow the link to find the full list and documentation.
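The RMC and GGA packets mentioned above are NMEA 0183 sentences: `$`, a comma-separated payload, `*`, then two hex digits holding the XOR of every payload byte. A minimal, hardware-free sketch of that framing check (this is not MODGPS code, just the general NMEA rule):

```cpp
#include <cstdio>
#include <string>

// XOR all characters between '$' and '*' and compare against the two
// hex digits that follow '*'. Returns true when the sentence is intact.
bool nmeaChecksumOk(const std::string &sentence) {
    if (sentence.size() < 4 || sentence[0] != '$') return false;
    const std::size_t star = sentence.find('*');
    if (star == std::string::npos || star + 3 > sentence.size()) return false;

    unsigned char sum = 0;
    for (std::size_t i = 1; i < star; ++i)
        sum ^= static_cast<unsigned char>(sentence[i]);

    unsigned int expected = 0;
    std::sscanf(sentence.c_str() + star + 1, "%2x", &expected);
    return sum == expected;
}
```

A library like MODGPS performs this check before it flashes LED3/LED4, so a corrupted serial byte simply causes the sentence to be dropped rather than bad coordinates being reported.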
https://os.mbed.com/cookbook/MODGPS
The NMSSMWWHVertex class is the implementation of the coupling of two electroweak gauge bosons to the Higgs bosons of the NMSSM. More... #include <NMSSMWWHVertex.h> The NMSSMWWHVertex class is the implementation of the coupling of two electroweak gauge bosons to the Higgs bosons of the NMSSM. It inherits from VVSVertex and implements the setCoupling member. Definition at line 26 of file NMSSMWWHVertex.h. Make a simple clone of this object. Implements ThePEG::InterfacedBase. Definition at line 78 of file NMSSMWWHVertex.h. Make a clone of this object, possibly modifying the cloned object to make it sane. Reimplemented from ThePEG::InterfacedBase. Definition at line 84 of file NMSSMWWHVertex.h. Storage of the couplings. The last value of the electroweak coupling calculated. Definition at line 121 of file NMSSMWWHVertex.h. The static object used to initialize the description of this class. Indicates that this is a concrete class with persistent data. Definition at line 104 of file NMSSMWWHVertex.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1NMSSMWWHVertex.html
Creating vCard action result

I added support for vCards to one of my ASP.NET MVC applications. I worked vCard support out as a very simple and intelligent solution that fits perfectly into ASP.NET MVC applications. In this posting I will show you how to send vCards out as a response to an ASP.NET MVC request. We need three things:

- some vCard class,
- vCard action result,
- controller method to test vCard action result.

Everything is very simple, let's get hands on.

vCard class

As the first thing we need a vCard class. Last year I introduced a vCard class that also supports images. Let's take this class because it is easy to use and some dirty work is already done for us. NB! Take a look at the ASP.NET example in the blog posting referred above. We need it later when we close the topic. Now think about how useful blogging and information sharing with others can be. With this class available in public I saved pretty much time now. :)

vCardResult

As we have a vCard it is now time to write the action result that we can use in our controllers. Here's the code.

public class vCardResult : ActionResult
{
    private vCard _card;

    protected vCardResult() { }

    public vCardResult(vCard card)
    {
        _card = card;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "text/vcard";
        response.AddHeader("Content-Disposition",
            "attachment; fileName=" + _card.FirstName + " " + _card.LastName + ".vcf");

        var cardString = _card.ToString();
        response.Write(cardString);
    }
}

And we are done. Some notes:

- vCard is sent to the browser as a downloadable file (the user can save or open it with Outlook or any other e-mail client that supports vCards),
- the file name is made of the first and last name of the contact,
- encoding is important because Outlook may not understand vCards otherwise (I don't know if this problem is solved in Outlook 2010).

Using vCardResult in controller

Now let's take a look at a simple controller method that accepts a person ID and returns a vCardResult.
public class ContactsController : Controller
{
    // ... other controller methods ...

    public vCardResult vCard(int id)
    {
        var person = _partyRepository.GetPersonById(id);

        var card = new vCard
        {
            FirstName = person.FirstName,
            LastName = person.LastName,
            StreetAddress = person.StreetAddress,
            City = person.City,
            CountryName = person.Country.Name,
            Mobile = person.Mobile,
            Phone = person.Phone,
        };

        return new vCardResult(card);
    }
}

Now you can run Visual Studio and check out how your vCard moves from your web application to your e-mail client.

Conclusion

We took old code that worked well with ASP.NET Forms and divided it into an action result and a controller method that uses vCard as a bridge between our controller and action result. All functionality is located where it should be and we did nothing complex. We wrote only a couple of lines of very easy code to achieve our goal. Do you understand now why I love ASP.NET MVC? :)
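As an aside, the .vcf payload the action result streams out is just plain text with fixed markers. A minimal sketch of building one (written in C++ here purely to illustrate the format; the property names come from the vCard format itself, not from the post's vCard class):

```cpp
#include <string>

// Build a minimal vCard 2.1-style payload. N is "last;first",
// FN is the display name, TEL;CELL a mobile number.
std::string makeVCard(const std::string &first, const std::string &last,
                      const std::string &mobile) {
    std::string card;
    card += "BEGIN:VCARD\r\n";
    card += "VERSION:2.1\r\n";
    card += "N:" + last + ";" + first + "\r\n";
    card += "FN:" + first + " " + last + "\r\n";
    card += "TEL;CELL:" + mobile + "\r\n";
    card += "END:VCARD\r\n";
    return card;
}
```

Because the payload is line-oriented text like this, writing it with the wrong encoding can leave a client such as Outlook unable to parse it, which is why the encoding note above matters.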
http://weblogs.asp.net/gunnarpeipman/creating-vcard-action-result
wcsdup man page wcsdup — duplicate a wide-character string

Synopsis

#include <wchar.h>

wchar_t *wcsdup(const wchar_t *s);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)): wcsdup(): - Since glibc 2.10: _POSIX_C_SOURCE >= 200809L - Before glibc 2.10: _GNU_SOURCE

Description The wcsdup() function is the wide-character equivalent of the strdup(3) function. It allocates and returns a new wide-character string whose initial contents is a duplicate of the wide-character string pointed to by s. Memory for the new wide-character string is obtained with malloc(3), and should be freed with free(3).

Return Value On success, wcsdup() returns a pointer to the new wide-character string. On error, it returns NULL, with errno set to indicate the cause of the error.

Errors - ENOMEM Insufficient memory available to allocate duplicate string.

Attributes For an explanation of the terms used in this section, see attributes(7).

Conforming to POSIX.1-2008. This function is not specified in POSIX.1-2001, and is not widely available on other systems.

See Also strdup(3), wcscpy(3)

Colophon This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Referenced By pmemobj_alloc(3), strdup(3), vmem_malloc(3), wcscpy(3).
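A minimal usage sketch (glibc assumed, hence the feature-test macro noted above): duplicate a wide string, verify it is an independent copy, then release it with free(3) since the memory comes from malloc(3).

```cpp
#define _POSIX_C_SOURCE 200809L  // expose wcsdup(), per the man page
#include <cstdlib>
#include <cwchar>

// Returns true if wcsdup() produced an independent, equal copy.
bool wcsdupDemo() {
    const wchar_t *src = L"wide";
    wchar_t *copy = wcsdup(src);
    if (copy == nullptr) return false;                    // ENOMEM case
    const bool ok = (wcscmp(copy, src) == 0) && (copy != src);
    free(copy);                                           // malloc'd memory
    return ok;
}
```

The NULL check matters: on allocation failure wcsdup() returns NULL with errno set to ENOMEM, so dereferencing the result unchecked is a latent crash.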
https://www.mankier.com/3/wcsdup
I am wanting to show the "last comment" written in a JIRA to provide an overall status. I was just wondering, when you create a dashboard, if there is a way to add the last comment added in a JIRA to a gadget in the "columns to display"? I added "comments" in the columns to display option but nothing pulled up. I am using the JIRA3 instance. Thanks! Are you using Jira Cloud or Jira Server/Data Center? There's no way to do this by default in either case, so you'll need some add-on to help you with this. If Cloud, this link might be helpful to you: If Server/Data Center, you can probably achieve this using the same steps as in the last link above with Automation for Jira Lite, but if you have ScriptRunner installed, you can create a scripted field using this script:

/**
 * Field name: Last Comment
 * Template: Text Field (multi-line)
 *
 * Description:
 * Returns the last comment on an issue.
 */
import com.atlassian.jira.component.ComponentAccessor

// get the latest comment
def commentManager = ComponentAccessor.getCommentManager()
def comment = commentManager.getLastComment(issue)

// check to see if comments exist - return comment body if so
if (comment != null) {
    return comment.body
}
return null

This was exactly what I was looking for after trying the JQL Function and realizing I needed to display the data in a field. Thanks!!! Hello Alex_Christensen, Thanks for the script. It works. I created a Custom Script Field, Field Name = Comment (customfield_10845). I added this custom field in Advanced Settings to jira.table.cols.subtasks in order to show the latest Comment custom field under the parent task. When I add a comment to a sub-task and I put a restriction on it (Restricted to jira-administrators), only jira-administrators can see the comments in the sub-task, but under the parent task every project user can see this comment field.
Can we add a kind of restriction to this script? Thanks. Hi! @Alex Christensen Could you please add some disclaimer around performance degradation, limitations, etc.? 1. For example, users from one of my instances want to implement it, but they have 1-3 mln cases with a 1-5k daily increase, plus the 120k-symbol text limit is used; as a result they want to have boards with the last comment. 2. Technically it's easy to implement, but first they should understand that writing bots, doing a lot of updates and checking via dashboard is not a good way. 3. Once you want to migrate to another DB, or move the data, you will hit limits. 4. Also, the script should have some truncate function to enforce a strict rule. Cheers, Gonchik Tsymzhitov Could this be used to pick out the last Customer comment or the last internal comment? What about the date of the last comment and the user who made the comment? Hi Stephanie, I can confirm that if you are using Jira Cloud it is not possible to create a Scripted Field like you can with ScriptRunner for Jira Server. The reason we are unable to provide this functionality is due to the restricted functionality and APIs which Atlassian provides inside Jira Cloud, due to the fact that Atlassian only provides a REST API in Jira Cloud and not the same Java API that the server version contains. You can see more detailed information on the differences between ScriptRunner for Jira Cloud and ScriptRunner for Jira Server inside the documentation page located here. I can, however, confirm that the closest functionality to a Script Field that can be achieved in Jira Cloud is a Script Listener which calculates a value on an issue, as described inside the documentation page here. The Script Listener would need to be configured to fire on the Issue Updated event and would need to call the Get Comments API documented here to get all the comments from an issue and then to add the text from the last comment to a custom Text field.
If this response has answered your question, can you please mark it as accepted so that other users can see it is correct when searching for similar answers. Regards, Kristian @Alex Christensen I need to add a comment restriction in the last comment as well, so can you tell me how to do it? If you want to restrict comments on the issue, restrict it from the Permission Scheme. Or if you want to customize and restrict the Last Comment field to other projects, just add the context. How do I actually do that? I keep getting all kinds of errors. Can we see example code to retrieve the last comment from Jira Cloud using a script listener? Hi Mark, Thank you for your response. I have created some simple example code located here which, when configured to run on the Issue Updated event in a Script Listener in ScriptRunner for Jira Cloud, will get the text from the last comment on the issue and save it to a variable. You will be able to use this as a reference to help you create the script that you require. I hope this information helps. Regards, Kristian Thanks so much Kristian. And to put this into a custom field called "Last Comment" ID 10093, would you be able to help me with the put statement that follows on from this? I really appreciate this. Hi Mark, Can you please advise what type of field the Last Comment field is? I can confirm that in our documentation site located here we have lots of examples of how to set different fields, and the example located here shows how to set a text field; you can change the summary field to your custom field if you wish to set a text field by specifying the ID for the custom field. I hope this information helps. Regards, Kristian Hi Kristian, The Last Comment field I made is multi-line text. I'll give the advice a try and see what the result is. Again thanks. Hiya, So I tried it and it works for any plain text and multi-line text.
As soon as I put bullet points into my comments though, it breaks:

// Get the key of the current issue
def issueKey = issue.key

// Make a call to the Get Comments API to get all the comments on the issue
def getComments = get("/rest/api/3/issue/${issueKey}/comment")
        .header('Content-Type', 'application/json')
        .asObject(Map)

// If the issue has comments
if (getComments.body.total != 0) {
    // Save the last comment to a variable called lastComment
    def lastComment = getComments.body.comments.last().body.content.content.text.toString()
            .replaceAll("\\[", "")
            .replaceAll("\\], ", "\n")
            .replaceAll("\\]", "")
            .replaceAll(", null, ", "\n")

    // This is my custom field ID for Last Comment
    def customFieldName = 'customfield_10093'

    def result = put("/rest/api/2/issue/${issueKey}")
            .queryString("overrideScreenSecurity", Boolean.TRUE)
            .header('Content-Type', 'application/json')
            .body([
                fields: [
                    (customFieldName): lastComment
                ]
            ])
            .asString()

    // log out the last comment
    logger.info("The Last Comment = " + lastComment)

    if (result.status == 204) {
        return 'Success'
    } else {
        return "${result.status}: ${result.body}"
    }
} else {
    // If the issue has no comments log out a message to say this.
    logger.info("The issue has no comments")
}

Is there anything that can be done to show bullet points? Also, if I wanted it to only apply to projects of a project category called "IT", can that be done as a check? Thanks again for the assistance. Hi Mark, Thank you for confirming you got the script to work for a text field. I can confirm that the example provided is setting the field as a string, and I do not believe that Atlassian allows you to retrieve the formatting of the comment, as the APIs do not allow this. You may wish to look at using a Groovy Multi Line String if you wanted to save the value to a multi-line text field and to try to format the displayed text yourself, but I do not have any examples of how to do this that I can share with yourself.
As for limiting the listener to all projects in the IT category, I can confirm that this is not possible, but you can specify just the projects that the listener should run for in the Projects box when editing the Script Listener. I hope this information helps. Regards, Kristian Hi Mark, I have just tested your code and can confirm that the get comments API does not appear to support getting the formatting for comments which have complex formatting in them such as bullet points etc. However, I noticed as a workaround that if you place the formatting in a noformat block, using the syntax which you can see by navigating to the URL of <JiraBaseURL>/secure/WikiRendererHelpAction.jspa?section=advanced in your instance, then you will be able to show formatting such as a bulleted list in its raw wiki format. I hope this helps. Regards, Kristian Hi Kristian, Do you mean that if I place the output from my comment into a noformat block and then put this into my custom field it will actually preserve the bullet points? I'm a little confused where the noformat block should be utilised. Hi Mark, I meant that if you place the noformat block in your comment and then put the bullets inside this in the comment, it will return the markup for the comment. This means the custom field would then show the bulleted list similar to below, where it uses a * symbol to indicate the bullet point.

* a
* bulleted
* list

This means the formatting won't be the same as in the comment but, as mentioned previously, it is not possible to extract the same formatting as the Atlassian APIs do not allow this, so this is the closest to getting it that you will be able to achieve. I hope this information helps.
https://community.atlassian.com/t5/Jira-questions/Adding-quot-last-comment-quot-in-a-Filter-Results-Dashboard/qaq-p/1066548
Seems that the zgeev routine could write past the end of the rwork array of the recommended size (2n). Here is my code calling zgeev:

using namespace std;
#define ComplexD std::complex<double>

bool zeigValVecA(ComplexD *Val, ComplexD *Vec, ComplexD *A, int *n, bool *brk)
{
    char left = 'N', right = 'V';
    int ok(0), LW(-1);
    vector<double> RWORK(size_t(*n)*2); // +1 fixes the access beyond the end of the array inside MKL
    ComplexD work;
    mkl_brk = brk;
    ZGEEV(&left, &right, n, A, n, Val, nullptr, n, Vec, n, &work, &LW, RWORK.data(), &ok);
    if (*brk) return *brk;
    LW = int(work.real());
    vector<ComplexD> WORK(LW);
    ZGEEV(&left, &right, n, A, n, Val, nullptr, n, Vec, n, WORK.data(), &LW, RWORK.data(), &ok);
    return ok != 0 || *brk;
}

This code resides in a DLL statically linked with MKL and called from a main program written in Delphi. The DLL is compiled with the latest Visual Studio 2019. Debugging the main program I get the following debug output (presumably from the VS C++ RTL):

Debug Output: HEAP[ModalCol.exe]: Process ModalCol.exe (13956)
Debug Output: Heap block at 000000001525E490 modified at 000000001525F130 past requested size of c90

The offset is beyond the end of the vector storage (n=201). Adding +1 to the RWORK length fixes the problem. The smallest n where I got this behavior is 17. Tests were made on Windows 10 1909 x64, Intel Core i9-9900K processor, INTEL_MKL_VERSION 20200000; the code is 64-bit. 2*n+32 will be better. We will add some detailed instructions about this issue into the coming MKL 2020.1. Boris, this is a known issue and we are planning to fix it in one of the next updates of MKL v.2020. We will keep you informed when this update is released. Thanks, I suppose that for now my +1 fix is enough? The problem has been fixed in MKL v.2020.1. Can confirm that the problem has been fixed in 2020.1, thanks Boris
https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/zgeev-problem/td-p/1137427
About this blog My brain falling out my mouth. Entries in this blog

There's monsters in here

My factory implementations aren't as concrete as I thought they were. Unknown design issues with third party types threw a big wrench into things. I hacked together a quick chunk to get things loading and displaying correctly, but I'm going to need a refactor. Which I think I'll use my next chunk of time to do. Gotta clean things up every once in a while and make sure I'm staying on track with my original goals. I use a sorta factory design to load in templates of everything. Such as a creature factory that loads all the base templates of each creature type. The rest of the program is supposed to then be able to create new creatures from the templates. Unfortunately some of the objects within my objects are sensitive :( . Several implement copy constructors and so forth, but I failed to read about the various variables that don't get copied and must be set through the new object's interface. This made me get sickly dirty to hack together the copying outside of the factory for the video. Till next time happy coding, I'll have a walking player and traversable floors by the next update. *cross my fingers*

More Rouge

DUNGEON: I've been putting most of my effort into getting the dungeon generator up and running before getting things on the screen. The dungeon is a container that holds a series of floors. Each floor contains a terrain, monsters and items on the floor, and other various objects that exist on that floor. The idea is that when the player enters a floor, the game will pull the floor from the dungeon and use it as the active floor. When the player leaves the floor, it gets saved back into the dungeon at its current state. To generate each floor's layout I'm using a rough binary space partitioning method, starting with a quad that covers the entire breadth of the floor.
I then randomly split the quad in half, along with the resulting children. This splits the floor into a series of neighboring rooms. I then randomly delete some of the rooms, while making sure they are all still connected. For now the generator then randomly places monsters and items throughout the floor.

ACTIONS AND EFFECTS: I settled on a list of every action I want to support. I split the actions into different categories based on the data required to perform the action. Combat actions are rather small, but refer to any action that uses the creature's equipped weapon. Equip actions, once again small, refer of course to equipping items; unequipping, dropping and picking up items also fit neatly into this category. Terrain actions are for things like moving, opening doors or using stairs. Lastly, consumption actions are for using items in the inventory, things like eating food and drinking a potion. I grouped using wands and throwing into this category as well. It might seem like it makes sense to put those into the combat category, but the data required makes them fit better in the consumable list. Consumables are pretty boring by themselves though. Most if not all will have a related effect. Effects can be simple things like "IncreaseHealth", which will increase the target's health by the given amount. Effects are further described with what are valid targets for the effect. Items like a potion of healing might actually use an "IncreaseUsersHealth" effect that only allows targeting the user of the action. A wand of healing on the other hand would allow targeting the user and other creatures. This does limit the combinations of item types and the effects that can be associated with them, such that equipment won't be able to have effects like that of a healing potion. I do want to keep things simple though, so no biggy if I can't make crazy items like an amulet that casts fireballs.
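The effect idea above — an effect name, a magnitude, and a whitelist of valid targets — might be sketched like this (all names here are illustrative stand-ins, not the actual project's types):

```cpp
#include <string>
#include <vector>

enum class TargetKind { User, OtherCreature };

// An effect such as "IncreaseHealth", restricted to certain target kinds.
struct Effect {
    std::string name;
    int amount;
    std::vector<TargetKind> validTargets;  // who this effect may be applied to

    bool canTarget(TargetKind kind) const {
        for (TargetKind t : validTargets)
            if (t == kind) return true;
        return false;
    }
};

// A potion of healing only targets its drinker; a wand of healing
// may target the user or another creature.
const Effect potionHeal{"IncreaseUsersHealth", 20, {TargetKind::User}};
const Effect wandHeal{"IncreaseHealth", 10,
                      {TargetKind::User, TargetKind::OtherCreature}};
```

Keeping the target whitelist on the effect rather than on the item is what lets one "IncreaseHealth" implementation serve both potions and wands while still rejecting invalid item/effect pairings.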
Next I WILL have the basic view put together with the logic to move the camera around the floors and traverse up and down them. 'Til then happy coding.

Crawling For Fun

I finally crawled back into the world of RougeLikes; I could never bring myself to play any ASCII version consistently. I've been hooked on 'Dungeon Crawl Stone Soup', which has a graphical version (tiles) as well and is updated quite a lot. The mechanics are similar to D&D rules. You start off as a single guy trying to get to the bottom of a dungeon, steal an orb and get the hell out. Simple as that.... until you play it. My favourite and the most frustrating part of the game is the immense unknown. The first time you die on the first floor cause you drank a potion of poison will make you scream and laugh all the same. You'll quickly learn how fragile you are and that everything in the game can be a deadly threat. The moment you feel strong you're almost guaranteed to slip up and send your hero to the grave. As long as you can transition to the "take your time" attitude, you'll find the game easy enough, but it could take years to actually beat it.

Rough RougeLike

A post or two ago I mentioned a random dungeon generator I was messing around with. I figure it's a great jumping point for trying a small RougeLike project of my own. So I put together a simple prototype, just melee combat. I've spent a bit of time working out the base structure of the game, the objects and divisions of the program. Hopefully it's just a matter of coding it all now. The major roadblock I'm trying to decide on first, though: turn order and queueing actions. In the prototype there were only a couple of actions a creature could take, and items can't take actions. They either walked, attacked or did nothing. It was pretty easy to queue a simple struct like so:

struct ACTION {
    user;
    target;
    type;
};

An actionProcessor object would store all the actions in a vector; then, when it came time to process the turn,
it would sort them all by the users' speed and process the actions one by one. I plan to use something similar, but I'm also storing creatures, items and all other objects differently than before. Before, the handles to the actors were very loose. The actor could be nonexistent and the handle could still be used; it would just cause the action processor to disregard the action entirely once one of the actors wasn't found. This wasn't a bottleneck whatsoever in the prototype, but then I wouldn't expect it to be with only 15 actors present. I plan on floors with 200+ objects present all the time, so looking through an entire list of objects every time I want to find one isn't going to work. I decided to go with a simple quadTree container that will be used to store the actual objects. Then the controllers and actions will use iterators into the sub containers that hold the objects. The actions take pointers to these iterators as arguments for the user and targets of an action. This means iterators can't be removed until all actions are processed, or some voodoo needs to happen to remove actions whose iterators were invalidated while it's processing, and that just seems wasteful. Simply checking if either is invalid because of death or out of range etc. seems more logical. Now that I'll be using more fragile pointers, quite a lot more actions, and items and other objects possibly being part of a given action but not all of them, it makes it a bit more complicated. Hopefully it'll all come clear after I finish my list of every action I want to be possible. I already am thinking I'll have separate processors for each item type, with specific functions for the different actions / effects that can occur because of those items. Equipping, as simple as it seems, is a perfectly good example of an action.
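The queue-and-sort step described earlier (store actions in a vector, sort by the user's speed, then process in order) can be sketched as follows; the types are illustrative, not the project's real ones:

```cpp
#include <algorithm>
#include <vector>

struct Creature { int id; int speed; };

// Mirrors the prototype's ACTION struct: who acts, on whom, and how.
struct Action { Creature *user; int target; int type; };

// Sort queued actions so faster creatures act first, then process in
// order; returns the creature ids in the order they acted.
std::vector<int> processTurn(std::vector<Action> &queue) {
    std::stable_sort(queue.begin(), queue.end(),
                     [](const Action &a, const Action &b) {
                         return a.user->speed > b.user->speed;
                     });
    std::vector<int> order;
    for (const Action &act : queue)
        order.push_back(act.user->id);  // a real processor would dispatch here
    return order;
}
```

A stable sort is a deliberate choice here: creatures with equal speed keep their submission order, so ties resolve deterministically instead of depending on the sort implementation.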
// basic action info
type = action_equip
user = self
target = self
// equip specific
equipment = equipment

The action processor can read that it's an equip action, which would then pass all the data to equipmentProcessor.actionEquip(....); these could simply be utility functions. I'm thinking of using a base class action that has the basics — user, target, type — but will also have a processor id. This id will simply tell the main action processor which sub processor to pass the action off to. Then that processor will know how to recast it and process the action. My brain is brimming with ideas now... Happy coding, next time I should have a worthy video ;)

Prototyping, designing and videos

Design and learn by prototyping. Prototyping I've found to be a great exercise in teaching myself to pump out better code, faster. If only I could find some time to be less tired and motivated to do more than a 1-2 hour burst of coding. Only one project have I ever actually developed a fleshed-out design or layout for. My framework has been the only big enough and intricate enough project to make that worth it. Everything else, I just get an idea and, once I have time, I sit down and just code until I can see something on the screen. Then I hack in all the other ideas I had with it. If it's looking good, I'll go back and clean it up or actually take the time to design it on paper and then recode the whole thing. I still see a lot of posts on gamedev that are all along the lines of "What's the best way to do this?". I've come to believe there is never a best way to design and build an application. Sure, there may be a best way to optimize a small operation, such as the best way to sort a list with specific parameters. I'm talking about full-fledged applications / games. The best way can only ever be found by coding the project and seeing how your implementation works. Even then, after you've come up with the best way as you see it.
Down the road you'll probably think of an even better way to do it as you learn new techniques. That's why prototyping is so great: it lets you see your design ideas in action. You get to actually see how your code interacts as a whole. I still always find so many flaws in my ideas once I've coded them. There is just way too much in a project; you'll never be able to plan for everything, and why would you? All the time you spend designing and just thinking about your game is time you could spend coding and actually seeing if it works. Believe it or not, even after you've got your design down, once you start coding, your design is going to keep changing to fix flaws you can only find by running the actual application. Now I'm talking about the average coder's projects, not the next "Skyrim", or projects that take on multiple programmers (which I see overdone so often on such small projects in the help wanted section). Most average projects are at best the size of a platformer or top-down shooter. I'd even say a single-player RTS game falls in this category as well. All of these can be thrown together as a prototype with very little pre-planning or designing. *Warning, I get rude here* If you have to spend more than a couple hours fleshing out the basics of what you're trying to create at this size, you really should ask yourself, "Do I know enough to finish this project, let alone in a timely fashion?" If you find yourself perusing the internet for solutions or asking how to implement your concepts on the forums every time you sit down to code, you're probably biting off more than you can chew. There is a huge difference between asking "How do I move a sprite in the direction it's facing?" and "How do I add a technology tree to my RTS project?". I see the latter style of question asked a lot on the internet.
If that is you, you should go back to the basics and work on much simpler projects until you can come up with solutions to these questions on your own. *rudeness hopefully over ;)* Prototyping is also great practice in churning out lots of code that works. Programming is one of those things that is best learned kinetically: the more you do it, the better you're going to be at it. Code, code, code, and code some more. You don't have to prototype a whole game; you want to be prototyping features and ideas. Take, for example, the trading-card-based game I've been working on for a while. Everything in it was prototyped by myself before I ever designed the complete system. This took me through several iterations, where at one point I even ditched the whole idea of using the card mechanics. I prototyped the idea of loading a series of cards as a deck and being able to draw cards into your hand and place them on the game board. I prototyped the interaction of the actual game pieces and how they went from being cards to being interactable objects on the game board. I prototyped the animation system. I eventually even prototyped the idea of using a whole different base engine for the game. Doing so helped me see how the actual concepts fit together, and how they might fit better if I did things differently. I got to play the game in different ways and see which forms were boring and which were fun to me. I found lots of organizational flaws and the bugs they caused. All of which has helped me put together a far more accurate design of the game. One where, so far, I have yet to go to implement something in the design and say "Nope, that design sucks, how am I going to fix this without breaking the rest?". A couple quick vids to demonstrate a little of what I'm talking about here. Note this game was originally intended to be a blend of tower defense and trading card games. My first prototype was fleshing out the card system.
Loading a deck of cards, drawing them into your hand, then placing them onto the game board. [media] [/media] Shortly after, a couple more prototypes later, I put together one that ditched the cards and used a set selection of towers. This eventually felt boring to me, and I started to realize my brain was just copying another famous tower defense game ;) [media][/media] Most recently I threw together a version that uses pathing rather than the row-based design. [media][/media] In short, what I want to get across to all you beginners: instead of asking how to make a game, just start coding the smaller parts of it. Then code, code, and code some more till your idea is complete. You can clean the spaghetti mess up later. Happy coding and have fun... I'll leave you with this [media][/media]

Long time no see, gamedev

As I last left off, I was working on a little game trying to use trading card game mechanics. At one point I took this to the tower defense genre. It didn't fit very well, and when it did, it seemed way too much like a rip-off of 'Plants vs Zombies'. So back to the four-player version using a game board not unlike a chess board. The basics of the game: each player plays his turn out one at a time. Everything is turn based, and each game piece can execute one action a turn. Each turn the player's resource pool grows slightly, allowing them to play tokens to the battlefield or cast spells from the tokens in hand. To win, a player must get his tokens within reach of another player's home squares and attack (move onto) them to deal damage to that player. By moving tokens to a square they consume it, converting it to one of their tiles. The point here is that tokens can only be placed on the tiles you own. Your home tiles, of course, can never be converted or occupied by your opponent, so there is always somewhere to place your tokens. Bolting your tokens straight for a player can be beneficial too, as it lets you play new tokens closer to the enemy's home.
I'm still implementing the base structure of the game, and making sure the mechanics work correctly and are fun. The player can't play it at the moment, but I have a rudimentary AI in place to test things out. I also picked up some crack from Blizzard North earlier this month, and boy was I disappointed. Only time will tell if it was really worth it... so far, nah. It did create an itch in me to mess around with procedurally generated worlds again though. I spent a couple hours yesterday and threw together a simple dungeon generator. Only floors and walls at the moment, but each piece of wall is a game object, which every object in the game, from a breakable barrel to the player, will be based off of. I think I'll throw together a simple Gauntlet/Zelda clone to get my feet wet with component-based designs. Right now it uses an inheritance design, as that was a lot easier to throw together for something I was just messing around with. I've got an hour or two before work, so I'm gonna throw together some code to populate the dungeon with things like barrels, bookcases, candles, and other little knick-knacks that will all be interactable with the player. Oh, and both these little projects are using SFML instead of my homebrew engine. I've got to say it's very nifty and fast to get up and running, though it lacks a lot of the specialized things I had incorporated into my framework. Losing my hard drive, I lost the whole new build of my framework and decided to put the refactoring on hold for now. It has given me some great ideas on how to overcome some of my shortcomings though, and saved me the probably 3 months of recoding to get it where I wanted it. Hope you enjoy, and all have a good week.

Nothing dev about it - Taking care of yourself

So let's all put those demons away, put on a fat smile, and walk towards the sun. I'm not asking for a world of narcissists, but if not for our own lives, what's the point? I'll leave it with my next ink... painful back, arm to boot... going to be a great release.
"The demons that distract us" Code on, dev world!

ZECloud::RenderList

Cause you
You're so calm
I don't know where you are from
You
You're so young
I don't care what you've done wrong

To me it screams the essence of childhood: being an adult and watching children grow up. To not take the world so seriously that you never do anything. Everyone makes mistakes, and it's not the end of the world. I constantly find myself reminding my own kids not to worry so much. Seeing how easily people get frustrated these days, it's not what I remember as a kid. "Tokyo Police Club - Shoulders and Arms" is the song, for anyone interested. Well, I started doing a little more remodeling than I thought I was going to. The following snippet is the current, and probably soon-to-be-old, RenderList. These lists are the backbone of my renderer and do all the heavy lifting for me. The list stores a struct called Drawcall, which holds the basic information about what is being drawn and how: position, size, texture, shader, rotation/origin, and diffuse coloring. That actually isn't the original layout either. The old Drawcall stored the actual vertices along with the texture, shader, and so forth. A year or two ago, I switched to the idea that all I needed was rectangles/quads: why store all the extra data when I could just translate them later? Since then I've had countless times where a triangle or line would've been so much more helpful than trying to align a quad textured with the shape just right. If you poked around my last post, you probably remember the DrawQuad function; there did used to be DrawTriangle, DrawLine... hence why I didn't call it something more associative like DrawSprite. Once all drawcalls have been made for this frame, everything moves on to the renderer, which manipulates the render lists and presents them to the screen. Starting at PreFill: any sorting of the list happens here.
I sort all my calls so that transparent textures are at the back of the list, ordered back to front, and opaque textures ordered front to back, using the z depth of the textures. This allows a lot of flexibility in game design: there's no need to design your code around the fact that certain images have to be drawn at certain times to appear right. As long as their positions are set right, they'll appear correctly. After sorting, any render target swapping happens here. This is one aspect that is being remodeled. Currently the RenderList only supports one render target per list. This meant I had to hardwire a separate pipeline for doing lighting, else suffer the fact that I'd have to have a separate RenderList with the exact same data for each buffer required by the lighting system, and relock the vertex buffer for each list, and everything else that has to be done per list; it added up pretty fast. The new list is going to store chains of effects. Each link in the chain is simply a render target and a shader to be used. This can even be the back buffer, just rendered with different shaders each time. Also, each link can have a set of secondary textures that should be included in the stream, such as passing the G-buffer along with what is being rendered to do lighting. It will be a bit experimental, as honestly some of this functionality I've never used before but always wanted to. Once things are sorted, the render target is set. Fill locks the vertex buffer for the current texture and shader combo. Using the drawcall iterator, Fill walks through the drawcall vector; any call that matches the current state gets translated and placed into the vertex buffer. If a new state is found, Fill stops immediately. This is all to take advantage of the iterator and try not to process any drawcall more than once... i.e. the pipeline should walk through the list only once.
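The sort described above can be written as a single comparator. This is my own sketch, not the engine's code, and it assumes a convention where larger z means farther from the camera; the `Drawcall` fields are reduced to just what the comparison needs.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the state sort: opaque calls first (front to back for early-z),
// transparent calls last (back to front for correct blending).
struct Drawcall {
    float z;            // assumed: larger z = farther from the camera
    bool transparent;
};

inline bool DrawOrder(const Drawcall& a, const Drawcall& b) {
    if (a.transparent != b.transparent)
        return !a.transparent;   // all opaque calls come before transparent ones
    if (a.transparent)
        return a.z > b.z;        // transparent: back to front
    return a.z < b.z;            // opaque: front to back
}
```

With this in place, game code never has to care about draw order; setting the z depth is enough, which is the flexibility the post is after.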
All the translation, though, is being moved to the point when the drawcall is made, except the view -> screen translation, which happens in the shader using an orthogonal projection. The pipeline uses its own camera to translate the call from world to view space; cull, then rotate if necessary. Culling is really generic: I use an oversized view (bigger than the screen) and just do a quick bounds check between the center of the quad and the screen. As long as you're not trying to draw a 512x512 or bigger texture, it shouldn't have any popping as it leaves the screen. PostFill is pretty much the opposite of PreFill. It resets the render target to the back buffer if it was changed, then resets the drawcall iterator so new drawcalls will start by overwriting the old ones before pushing new ones onto the stack.

namespace ZEGraphics {
    class RenderList {
    public:
        LPDIRECT3DSURFACE9 backBufferCopy;  // used to save the backbuffer while a secondary buffer is being used.
        LPDIRECT3DSURFACE9 tempRenderTarget;
        IDirect3DVertexBuffer9* vertexBuffer;  // dynamic vertexBuffer used by this renderList only.
        std::vector renderTargets;  // list of secondary targets that will get rendered to in order with the assigned shaders.
        std::vector drawcalls;
        std::vector::iterator drawcallIterator;  // the iterator is used by the fill process only, to save the reader's position in between state changes.

    public:
        RenderList();
        RenderList(ZEGraphics::Device* device);  // Basic constructor, sets iterator and reserves space for 5K drawcalls.
        ~RenderList();  // releases all data including vertexBuffer, drawcall vector... etc.

        void Reset();  // resets the drawcallIterator, and other temp settings.

        /**********************************************************************************************
        * PreFill - sets up the list for converting to vertexbuffer. Sorts by state change,
        * resets the iterator and prepares the render target to be used.
        **********************************************************************************************/
        void PreFill(ZEGraphics::Device* device);

        /**********************************************************************************************
        * Fill - Enters the vertex data for the drawcalls of the same state.
        **********************************************************************************************/
        void Fill(ZEGraphics::Camera* camera, IDirect3DTexture9*& _texture, DWORD& _shader, UINT& _primitiveCount, bool& _isTransparent, bool& _isEmpty);

        /**********************************************************************************************
        * PostFill - Resets the renderTarget and clears the list for re-entering new draw calls.
        **********************************************************************************************/
        void PostFill(ZEGraphics::Device* device);

        /**********************************************************************************************
        * AddDrawcall - pushes a single drawcall onto the stack.
        **********************************************************************************************/
        void AddDrawcall(ZEGraphics::Drawcall _drawcall);

        bool Full();  // Return true if the drawcalls have exceeded the vertexBuffer capacity. 100K calls (200K triangles).

    private:
        /**********************************************************************************************
        * helper functions to split the big functions up and make them easier to read and edit, blah blah blah.
        **********************************************************************************************/
        bool NextDrawcall();  // move iterator to next available drawcall. return false if no drawcalls available.
        void GenerateQuad(ZE::VEC3 pos, ZE::VEC2 size, ZE::VEC3& vert1, ZE::VEC3& vert2, ZE::VEC3& vert3, ZE::VEC3& vert4);
        void SetVertex(DWORD index, ZEGraphics::Vertex*& vertexData, ZE::VEC3 pos, ZE::VEC3 normals, ZE::VEC2 uv, ZEGraphics::COLOR diffuse);
    };
};
#endif

The new RenderList will be moving on from the all-in-one bucket for every drawcall. I'm also going back to the old way of storing vertices in the drawcall, to facilitate the difference between triangles, quads, and lines, the latter being the only one that is actually a special case. For circles I use a precision variable to determine how many triangles are used to build them. Drawcall buckets, that's what I'll be calling it now. The idea is to store drawcalls in separate buckets for each state combo, so shader and texture will be stored in the bucket instead of in individual drawcalls. No need to check if a drawcall is valid anymore: if it's in the bucket, then use it with this state. Shaders might even be moved up to just the RenderList and the render chains, so if a shader is going to be used by a RenderList, it's going to apply to all of them. Off the top of my head I can't think of any time I used a separate shader within the same RenderList, so buckets will probably be sorted just by texture. Render chains will be another big change, already mentioned: a chain is a render target and a shader to be applied. It will also have a way of assigning secondary textures to be applied to the stream. Say you build your color buffer, normals, and other G-buffer goodies, then need to render your scene to the screen using all that light data, but you can only use one texture at a time. Makes it kinda impossible, right? Well, the skeleton is in place and some of the data types are finished; I still have a lot of coding to do before it's what I want. I am trying to keep the old pipeline untouched though, so all my old projects still work just fine. They just can't use any new features. But now I'm all typed out...
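The drawcall-bucket idea reduces to "one vector of calls per texture, state lives on the bucket." A minimal sketch, with placeholder types (`TextureId` as an int, a two-float `Drawcall`) rather than the engine's real ones:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Sketch of drawcall buckets: the texture (and possibly shader) is the map
// key, so individual drawcalls no longer carry or validate that state.
struct Drawcall { float x, y; };
using TextureId = int;

struct DrawcallBuckets {
    std::map<TextureId, std::vector<Drawcall>> buckets;

    void Add(TextureId tex, const Drawcall& call) {
        // anything sitting in a bucket is, by construction, valid for that state
        buckets[tex].push_back(call);
    }

    void Reset() {
        for (auto& kv : buckets)
            kv.second.clear();   // keep the allocated capacity, drop the calls
    }
};
```

Iterating the map then visits each texture exactly once per frame, which is the same "walk the list only once" goal the Fill pass has, without the state-matching checks.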
Good luck and happy coding, dev world... YORDLES!

zoloEngine - Cloud and a walking disease

I wasn't expecting to start this till tomorrow, but with my situation, I started today anyway. I tore my old render interface a new one and put together the skeleton for the new interface. Be proud of me, I kept the old interface and everything in the engine is still completely intact. There are actually two interfaces to choose from at the moment. The only things lacking from the new one are font and particle system support. The old system had a hackish way of using fonts, and the particle system was very reliant on the old interface to work properly. Funny, I'm looking at the two headers... the old header is 145 lines, ~15 of them comments... the new header is 95 lines, about half of it comments. That's also counting that I incorporated the window, input, and frame timer into the new interface as well.

new interface

#ifndef _ZECloud_H
#define _ZECloud_H

#include
#include "ZEMath.h"
#include "ZESystemWindow.h"
#include "ZESystemTimer.h"
#include "ZEInputInterface.h"
#include
#include
#include "ZEGraphicsDevice.h"
#include "ZEGraphicsTextureCache.h"
#include "ZEGraphicsEffectCache.h"
#include "ZEGraphicsCamera.h"
#include "ZEGraphicsRenderList.h"
#include "ZEGraphicsInterfaceParameters.h"

/***********************************************************************************
ZoloEngine - Cloud 2/2012 - Corey Marquette
***********************************************************************************/
namespace zecloud {
    class Cloud {
    public:
        /***********************************************************************************
        Default Constructor - This constructor should not be used and will pop up
        a warning message if it is.
        ***********************************************************************************/
        Cloud();

        /***********************************************************************************
        Constructor(...) - This constructor must be used to initialize the engine.
        It initializes everything based on the data from a config file.
        ***********************************************************************************/
        Cloud(HINSTANCE instance, std::string configFilepath);

        /***********************************************************************************
        Destructor - all cleanup is done here. Releases everything from textures
        to renderLists.
        ***********************************************************************************/
        ~Cloud();

        /***********************************************************************************
        Update - calls update on all sub components, and returns false if the
        engine has shut down from any errors. If it has shut down, the engine
        should not be used.
        ***********************************************************************************/
        bool Update();

        /***********************************************************************************
        Render - processes all renderLists and presents to the screen; use this
        function after all drawcalls have been made for the frame.
        ***********************************************************************************/
        bool Render();

        /***********************************************************************************
        These accessors are meant to be used to create resources for the engine
        to use, instead of doubling implementations in the interface.
        ***********************************************************************************/
        ZEGraphics::TextureCache& TextureCache();
        ZEGraphics::EffectCache& EffectCache();

        /***********************************************************************************
        CreateRenderList - adds a new renderList to the queue and gives a pointer
        back to be used for rendering. Ownership of the RLs still remains with
        the engine interface. RenderLists are never released until the engine is.
        ***********************************************************************************/
        bool CreateRenderList(ZEGraphics::Sprite* renderTarget, bool clearTarget, ZEGraphics::RenderList*& renderList);

    private:
        ZESystem::Window window;
        ZEInput::Interface input;
        ZESystem::Timer frameTimer;

        // Renderer specific data
        ZEGraphics::Device device;
        ZEGraphics::Camera camera;
        ZEGraphics::TextureCache textureCache;
        ZEGraphics::EffectCache effectCache;
        std::vector renderLists;
        LPDIRECT3DVERTEXDECLARATION9 vertexDecl;
    };
};
#endif

Simple, just the way I like it. Starting a new project is as easy as adding a couple directories and creating a winmain file like so.

#include
#include "ZECloud.h"

int WINAPI WinMain(HINSTANCE _instance, HINSTANCE _prevInstance, LPSTR _cmdLine, int _cmdShow)
{
    /*********************************************************
    Engine initialization
    *********************************************************/
    srand(timeGetTime());
    zecloud::Cloud cloud(_instance, "data/config.xml");

    /*********************************************************
    GAME SPECIFIC ASSETS
    *********************************************************/

    /*********************************************************
    GAME LOOP
    *********************************************************/
    while (cloud.Update())
    {
        cloud.Render();
    }

    return 0;
}

Super simple... me really like. I have the tendency to create new projects constantly just to test out new ideas, so having a fast setup process is very crucial to me. The old interface took on a master type role: every sub component had a double set of functions in the interface that would call the actual functions in the sub components. I don't know if that would be considered "PIMPL" or not; either way it was cumbersome. Any time I wanted to add new functionality, I basically had to do it twice. Then ownership started to get blurry, which brings up the truth behind font rendering and the particle system.
Because they are separate from the render interface, and they should be, they required the interface to render anything. This meant any changes to how the DrawQuad function worked would affect how those systems worked as well. Especially when I added diffuse coloring to allow quickly changing the font colors: I had to dig deep into the renderList, then change things in the interface, then change things in how textures were loaded... it was just a mess. This time I've centralized the act of making drawcalls to the RenderList only, similar to how XNA uses SpriteBatch. The interface is no longer required to draw sprites to the screen. It is instead the point where resources are created, and it manages the workflow of the system. It's still a major joint in the system, but it should accommodate changes better. The one hurdle I'm still contemplating is the difference between translated and untranslated coordinates. My camera is run in software, so it's simple enough to just skip it when processing translated coordinates, i.e. for things like the UI or other screen-aligned sprites. I'm not sure if I want to store a translated flag in the drawcall struct or make it global and keep it in the RenderList class. The former would allow a lot of flexibility, but requires an if branch for every drawcall (upwards of ~150K per frame). Keeping it in the RenderList would mean only one if branch per RenderList, but it also means more work for the user (me): creating separate RenderLists for translated and untranslated rendering. I'm leaning towards the latter, since I kinda already do this... but not always. And it could easily lead to RenderLists with only 10-20 calls stored but still requiring that many state changes, defeating the purpose of batching. This will also probably mean more clutter in the draw function. I've been trying to minimize the required arguments over time, so I really didn't want to add more. Should I just create separate functions for translated and untranslated? Probably not.
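The trade-off between the two flag placements can be made concrete. Below is a sketch of the per-list option, where the branch is taken once per list instead of ~150K times per frame; everything here (`RenderListLite`, `Flush`, the offset-subtraction camera) is an invented stand-in, not the engine's actual code.

```cpp
#include <cassert>
#include <vector>

// Per-list translated flag: one branch per list, not one per drawcall.
struct Vec2 { float x, y; };

struct RenderListLite {
    bool screenSpace;                 // true: skip the camera entirely (UI, HUD)
    std::vector<Vec2> positions;      // stand-in for the list's drawcalls

    void Flush(Vec2 cameraOffset, std::vector<Vec2>& out) {
        if (screenSpace) {            // the single branch...
            out = positions;
            return;
        }
        for (const Vec2& p : positions)   // ...instead of one per call
            out.push_back({p.x - cameraOffset.x, p.y - cameraOffset.y});
    }
};
```

The cost, as noted above, is that UI and world drawing must live in separate lists, so tiny screen-space lists can erode the batching win; the sketch just shows where the branch moves, not which option is right.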
RenderList.DrawQuad(position, size, sprite, shader, rotation, diffuse, translatedFlag)

Not too bad, I guess. Another big change for me is moving from only translated coordinates to using world coordinates and letting the camera translate them. I've been contemplating letting the engine do all the culling instead of doing it in game logic. Should this be done early, at the point when DrawQuad is used, or later, when the renderer starts processing the renderLists? I'm thinking early, but this can have problems. Take for example: we make some drawcalls that get rejected for being outside the camera, but before the renderLists are processed we move the camera, and the old calls would now be in view. This would cause popping, where objects aren't visible for a split second, then suddenly are. But it would also keep the renderLists at reasonable sizes. They have always been a bottleneck in the pipeline, so anything to make them a little quicker is beneficial... it's just that the drawbacks are icky. For the weirdos out there, here's the old interface for comparison, and a look into my madness.

#ifndef _ZEGRAPHICS_INTERFACE_H
#define _ZEGRAPHICS_INTERFACE_H

#include
#include
#include
#include "ZEGraphicsInterfaceParameters.h"
#include "ZEGraphicsTextureCache.h"
#include "ZEGraphicsEffectCache.h"
#include "ZEGraphicsDevice.h"
#include "ZEGraphicsCamera.h"
#include "ZEGraphicsRenderList.h"
#include "ZEGraphicsSprite.h"
#include "ZEGraphicsColor.h"
#include "ZEVector3.h"
#include "ZEVector2.h"

namespace ZEGraphics {
    /** DEBUG information struct. */
    struct DI_Interface {
        DI_Interface() : drawPrimitiveCalls(0), trianglesDrawn(0), smallestBatch(0), largestBatch(0) { };
        void Clear() { drawPrimitiveCalls = 0; trianglesDrawn = 0; smallestBatch = 0; largestBatch = 0; };
        DWORD drawPrimitiveCalls;
        DWORD trianglesDrawn;
        DWORD smallestBatch;
        DWORD largestBatch;
    };

    /** Graphics API main Interface. */
    class Interface {
    public:
        ZEGraphics::Device device;
        ZEGraphics::Camera* camera;
        ZEGraphics::TextureCache* textureCache;
        ZEGraphics::EffectCache* effectCache;
        std::vector renderLists;
        IDirect3DVertexBuffer9* vertexBuffer;
        LPDIRECT3DVERTEXDECLARATION9 vertexDecl;
        ZEGraphics::DI_Interface dInfo;

        /** Data used by the deferred lighting pipeline. */
        ZEGraphics::Sprite gbNormals;
        ZEGraphics::Sprite gbPositions;
        ZEGraphics::Sprite gbColors;
        DWORD gbShader;
        DWORD dirLightShader;

    public:
        ZEGraphics::InterfaceParameters parameters;

        Interface() { };
        ~Interface() { this->Release(); };

        int Create(HWND _windowHandle, ZEGraphics::InterfaceParameters& _paramters);

        /** Display will process all the renderlists. Internally it will run these lists
        through the desired pipeline, specified by the renderList. */
        void Display();

        /** Specific pipeline for rendering the drawcalls. */
        void DefaultPipeline(DWORD rl, IDirect3DTexture9* _texture, DWORD _shader, UINT _primitiveCount, bool _isTransparent);

        /** Release... releases all the allocated resources. */
        void Release();

        /** Drawing functions for drawing textured primitives. */
        void DrawQuad(DWORD _renderList, ZE::VECTOR3 _pos, ZE::VECTOR2 _size, ZEGraphics::Sprite* _sprite, DWORD _shader, ZE::VECTOR3 _rot, ZEGraphics::COLOR _diffuse);
        void DrawQuad(DWORD _renderList, ZE::VECTOR3 _pos, ZE::VECTOR2 _size, ZEGraphics::Sprite* _sprite, DWORD _shader, float _alpha);

        /** Texture loaders. */
        ZEGraphics::Texture* CreateTexture(std::string _filename, D3DFORMAT _format, bool _transparent);
        ZEGraphics::Texture* CreateRenderTarget(int _width, int _height, D3DFORMAT _format, bool _transparent);

        /** Effect Loaders */
        bool CreateEffect(std::string _file, DWORD& _index) {
            if (effectCache == NULL) return false;
            return effectCache->CreateEffect(device.direct3DDevice, _file, _index);
        };

        /** Deferred Lighting interface. */
        bool SetupDeferredLighting();
        void BuildGBuffer(LPDIRECT3DTEXTURE9 _texture, UINT& _primitiveCount);
        void ProcessLights();

        /** The renderer stores pointers to a light source, so that it can be moved
        easily outside the renderer. The renderer won't cull the lights, so it's up
        to the user to remove lights that are no longer visible. */
        void AddLight();
        void RemoveLight();
        void ClearLights();

        /** Misc resource loaders. */
        DWORD CreateRenderList(ZEGraphics::Sprite* _renderTarget, bool _clearTarget);

        /** Resource access. */
        ZEGraphics::EffectCache* EffectCachePTR() { return effectCache; };

        /** Debug info. */
        ZEGraphics::DI_Interface& DebugInfo() { return dInfo; };
        void ClearDebugInfo() { dInfo.Clear(); };

        /** Anchors return the screen position at the specified position. */
        ZE::VECTOR3 anchorTopLeft();
        ZE::VECTOR3 anchorTopCenter();
        ZE::VECTOR3 anchorTopRight();
        ZE::VECTOR3 anchorCenterLeft();
        ZE::VECTOR3 anchorCenter();
        ZE::VECTOR3 anchorCenterRight();
        ZE::VECTOR3 anchorBottomLeft();
        ZE::VECTOR3 anchorBottomCenter();
        ZE::VECTOR3 anchorBottomRight();
    };
};
#endif

Tomorrow I'll bother you again about the renderLists, the backbone of my renderer... it's a yordle, run!

Memory Blocks release - zoloEngine Cloud

Playing the game - You're given a set amount of turns; each time you reveal a colored square it takes a turn away. Every time you reveal two matching colors you get points and your turns back. Each pair starts a combo timer (4 sec); every time you reveal another pair in that time, the clock gets reset and a chain counter goes up. When the timer runs out you get bonus points and turns based on the combo chain. Each pair revealed also reveals the surrounding blocks for a few seconds, but explodes, blinding the area. Good luck and have fun.
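The combo rule as described (a 4-second clock reset by every matched pair, cashed in for a bonus on expiry) fits in a tiny struct. This is my own reconstruction for illustration; the bonus formula in particular is invented, since the post doesn't give the real scoring.

```cpp
#include <cassert>

// Sketch of the Memory Blocks combo rule: each matched pair resets a
// 4-second clock and bumps the chain; when the clock runs out the chain
// is converted into a bonus.
struct ComboTimer {
    float timeLeft = 0.f;
    int chain = 0;

    void OnPairMatched() {
        timeLeft = 4.f;           // reset the clock
        ++chain;
    }

    // Call once per frame; returns the bonus when the combo expires, else 0.
    int Update(float dt) {
        if (chain == 0) return 0;
        timeLeft -= dt;
        if (timeLeft > 0.f) return 0;
        int bonus = chain * 10;   // placeholder scoring, not the game's formula
        chain = 0;
        return bonus;
    }
};
```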
Download the game here - For those who don't have Visual Studio 2010 installed, you can get the runtime (~5MB) here - The game also requires DirectX 9.0c or higher to run; you can find that here - OK, I feel better now, a quick project out of the way. Time to focus on a few other thingies. I've been dying to do some rework on my framework, mainly making it a little easier to translate from the physics portion into the graphics portion and vice versa. As they were designed so far apart, they used different variable layouts, let alone different coordinate systems. i.e. my graphics always considers the position to be the upper-left corner, with the bounding box stretching outward from that. My physics component took on the convention that position is the center of the bounding box, and size stretches in all directions from the center. This is mostly because of my choice to use Box2D, which uses a similar approach. A few major rethinkings of my renderer interface need to happen to finally get me back on track with "Kylar's Valcano". The current system has some major limitations on how many pixels I can push. A quick couple of refactorings already pushed me from ~20K triangles @ 30fps to 250K triangles @ ~40fps. This of course is all at 1280x720x32, using my particle emitter as a testing ground. Which brings me to another portion that did get rewritten, but was lost when my PC decided to die. A few of the changes were me hacking around with how the emitters handle generating new particles. It wasn't nearly as efficient before, and that actually accounts for a lot of the performance improvement. Lots of redundancies that I can only assume were done the way they were for the sake of getting it coded faster. It's always fun reading comments a couple years old and seeing the programmer mindset I was in back then. Boy, do I see things differently now. Like everyone else, I've always had the idea that I would eventually release my framework to the masses. Then they would all praise me like a programming god.
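The two bounding-box conventions above (graphics: top-left corner plus full size; Box2D-style physics: center plus half-extents) only need a pair of converters to keep them in sync. A sketch with invented type names:

```cpp
#include <cassert>

// Graphics convention: position is the top-left corner, size is full extents.
struct Rect { float x, y, w, h; };

// Physics convention (Box2D-style): position is the center, with half-extents.
struct Body { float cx, cy, hw, hh; };

inline Body ToPhysics(const Rect& r) {
    return { r.x + r.w * 0.5f, r.y + r.h * 0.5f, r.w * 0.5f, r.h * 0.5f };
}

inline Rect ToGraphics(const Body& b) {
    return { b.cx - b.hw, b.cy - b.hh, b.hw * 2.f, b.hh * 2.f };
}
```

Round-tripping through both converters returns the original rectangle, which is the property that keeps the two systems from drifting apart.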
"Oh Corey you're a genius... how is this even possible", "Oh it was nothing, now throw away that useless UDK you got over there." I can dream right?.... right? Currently as it stands, the framework is split into several major sections with their own interfaces. Non-included components are all third party, such as sound. Graphics::Interface, Input::Interface, Physics::Interface, System, Math, Misc helpers. But as this is for me, and I always strive for simplicity, since my A.D.D. has me starting a new project every other day, I always like to design for fast prototyping. So I'm going back to more of an all-in-one interface. "Cloud" is the new name I'm dubbing this interface. Though the only thing I'm rewriting is the graphics interface and renderer pipeline (sounds like a lot, but really it ain't). I'm basically incorporating the other interfaces into the graphics interface, so I can have an all-in-one initializer and container for the interfaces. While you're puzzled I'll take my leave and bid you all farewell... watch out for the turnips. Next time - Talk about the rendering pipeline and Day 4 (a 2-imager to finish off the tease). restart...refresh...rehash..re...re Well, while trying to get things caught back up... recoding what I lost... and getting used to the switch from VS2008 to the 2010 version. I actually kinda like the new one... but project directories still piss me off. If there's a way to globally set the directories so when I start a new solution they are already added... I'm all ears. Here's a quick vid of a little 5-hour project. I'll clean it up, plug a few leaks and release it later this week. It's a mix between "Same Game" and "Concentration"; you've got to find matching color pairs and remove them for points. You get 4 seconds each time you find a pair to remove another and create a combo that gives you more turns. Each time you reveal a color it takes a turn away from you, so you've got to be careful not to just click willy-nilly.
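The "all-in-one initializer and container" idea is essentially a facade over the existing interfaces. A toy sketch of the shape it might take, in Python with placeholder subsystem names (the real framework presumably differs):

```python
class GraphicsInterface:
    def init(self):
        return True

class InputInterface:
    def init(self):
        return True

class PhysicsInterface:
    def init(self):
        return True

class Cloud:
    """One entry point that owns and wires the subsystem interfaces."""
    def __init__(self):
        self.graphics = GraphicsInterface()
        self.input = InputInterface()
        self.physics = PhysicsInterface()

    def init_all(self):
        # Bring every subsystem up in one call; any failure fails the whole init.
        return all(s.init() for s in (self.graphics, self.input, self.physics))
```

The trade-off is the usual facade one: a single convenient entry point for prototyping, at the cost of coupling the subsystems to one container.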
Oh, I killed not one, but two birds with my truck yesterday... go figure, I just get it back, and 5 minutes into finally driving it a swarm of birds leaps in front of my truck. Thump... wo... thump... holy crap, two! Oh, and whatever you do, don't try to convince a three year old that Anakin Skywalker is Darth Vader.... "No, Darth Vader has a black mask", he says.... "He will, yes.. but.... Oh why did I start this?" Arden's last four days - day 3 A quickie Achievements -- Updated and filled in missing documentation for every header within the project. Only game-specific headers. -- Implemented base instances of all phases (upkeep, first setup, pre-action, post-action, second setup, and cleanup). -- Identified several deprecated implementations from the original design. -- Finished basic implementation of TokenController to allow dragging and dropping, and queuing actions for active tokens. -- Implemented graphical cue for queued actions. Goals, milestones -- Allow zooming in on tokens to select abilities during pre-action and setup phases. -- Clean out deprecated code that is no longer needed. -- Update dependency tree and identify spaghetti code that can be rearranged. -- Implement graphical cue for current phase(s). -- Fix bug where you can't move to a spot that another token is moving out of. till next time .. ponder this... Guitar, Tattoos and a little programming Achievements - Implemented Phase system (my glorified state machine for turn-based games); I've rewritten this boilerplate for the last 3-4 projects, I really should abstract it for future use after this project. - Implemented UpkeepPhase --- Draws a token --- Resets resources --- Processes auto abilities (resource generation only) - Implemented SetupPhase --- Allows playing tokens from hand onto the game board --- Properly purchases and validates tokens' cost --- no restrictions on where tokens can be placed yet --- no restrictions on the amount of a type of token that can be played per turn yet.
- Token, hand (deck, library, graveyard), ability, resource, player and game board boilerplate completed. Future goals and milestones - Implement pre-action phase --- Allow assignment of movement and attack orders --- allow playing ability-only tokens, but restrict permanent tokens. - Implement post-action phase --- process all move and attack orders --- clean up dead tokens - implement secondary setup phase --- All I should have to do with this is push another copy of SetupPhase onto the stack. - implement Cleanup phase --- clean up dead tokens --- regenerate tokens still alive. Summary With those four phases complete, I should be able to hit my major milestone of being able to play through a complete turn. Now take a break from code speak... here's a small tune I've been messing around with for some time now. Yes, I know I play guitar like a French dude drinks tea... If I could just keep my pinky down, I might actually be able to up my speed. It's just a habit I can't break though... Here's my work in progress for the next section of my right sleeve... titled 'The Great Struggle'. Gonna redo it so he's coming out of the flames, instead of what is supposed to be dust kicked up from him fighting with the vines to move forward. And the overseer has to be repositioned to go on the part of my body I want it to go on. And the colors were just me messing around... none of my tats have color and probably never will. Till next time... hey, watch it with the thumb. P.S. day three is coming soon, waiting for my next milestone before I move the story along. What? huh, what? Well that was two days ago; feeling somewhat intact, so I thought I'd do some refactoring and reassess my projects. Reevaluated milestones, and tried to convince myself I'm on schedule. Another swarm of resumes, yet only one phone call returned... dead end. Yet it made me feel pretty good; someone thought I was worth calling back. Just got to brush up on my professional conversation skills.
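The phase flow sketched in these updates (upkeep, setup, pre-action, post-action, a second setup pushed onto the stack, cleanup) maps naturally onto a stack-based state machine. A rough Python sketch under my own naming; the actual project code presumably differs:

```python
class Phase:
    name = "base"
    def run(self, game):
        # Each concrete phase would mutate game state; here we just log the visit.
        game.setdefault("log", []).append(self.name)

class UpkeepPhase(Phase):     name = "upkeep"
class SetupPhase(Phase):      name = "setup"
class PreActionPhase(Phase):  name = "pre-action"
class PostActionPhase(Phase): name = "post-action"
class CleanupPhase(Phase):    name = "cleanup"

def play_turn(game):
    # Stack of phases for one turn; the "secondary setup" really is just
    # another SetupPhase instance pushed onto the stack.
    stack = [CleanupPhase(), SetupPhase(), PostActionPhase(),
             PreActionPhase(), SetupPhase(), UpkeepPhase()]
    while stack:
        stack.pop().run(game)
    return game["log"]
```

Because phases are plain objects on a stack, a phase can also push follow-up phases mid-turn, which is what makes this pattern reusable across turn-based projects.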
My typing without looking at the keyboard is getting ever faster, yet I noticed my brain is still substituting words incorrectly... I don't know how many times I typed things like "could" instead of "good". Darn brain, what is your problem? Or "have" instead of "hope", come on now, those don't even sound alike. Maybe some paint chips will teach you a lesson. Nothing fancy, but messing around with the ol' ge-tar + my DigiTech processor (thing's been broken for years, finally fixed it) and some art stoof, so say bye to 30 seconds of your life. Still contemplating a way to incorporate the abilities of tokens without having to hard-code them... but it seems the best I'll be able to do is incorporate the abilities as functions, and use some sort of tagging to identify what abilities the token has, and call that function when it gets used. This means I'll have to rebuild the exe every time I want to add new abilities though. Ahh, I'll find a way eventually. After 5 hours of refactoring, I've got to say, this is now the cleanest project I've ever done. Till next time... don't eat the pudding. Another one The game board is just like that of a checkers/chess board. Each 4x4 corner is owned by one of the players. The player can play the cards in their hand to place tokens onto the game board, but can only place tokens in their area. Once on the game board, each token can move once a turn; moving into an occupied space will make the tokens fight. At the beginning of each game, each player can pick where in their area they want to place a castle. The goal being to attack the other player's castles until they get destroyed. Like everything it still isn't finished, and is missing some fundamental mechanics for attacking and using the abilities on cards. So you basically just place tokens and move them around till you're out of resources. Till next time ... uh.. I forget. Skyrim 12 hours in and I've barely covered 2% of the map. The land is huge, and deadly.
The graphics are beautiful despite what so many complain about. The first time I went trekking into the mountains and got hit by a snow storm it took my breath away. I don't know, maybe I'm just simple, but I love the fact that heavy rain actually makes it hard to see far away. The snow storm brought my line of sight down to barely 5-7 feet. When I was attacked up there, surrounded by 3 bandits, I was swinging my sword everywhere I could. I was just hoping to hit something. I was strafing and backing up all over the place, and then. SSSHHHHH************, backed off the frickin' cliff. At least I didn't let the bandits kill me ;) . I have yet to try the radiant story feature out... still playing my High Elf warrior. But I can already tell the improvement from Oblivion. The areas in the map are split into categories, i.e. dungeons, towns, forts, nests and so forth. I'm actually looking forward to my second play-through. Also, quests are a lot more enriching; there are a lot of times I head off on a quest thinking I've finished it, then find I'm only part way into a dungeon or fort. I decide to explore deeper only to discover that there's a secret in the dungeon. Now I just have to go deeper and find this awesome treasure. A quest to retrieve stolen merchandise sent me into a hideout. Inside I find out that the leader has been cocooned by a giant spider. Died four times before I managed to kill the thing. After I freed the guy, he ran deep into the hideout, which turned into a two-hour spelunk deep into an ancient temple. Oh, and just because things don't become hostile right away doesn't mean you can run up to them. Ran up to a giant ogre, only to have him turn around and one-shot me with his club before I could draw my sword. The dragons are awesome and hard. Especially if you insist on fighting them in the mountains near their nest. Even a weakling one (green) whooped my level 10 before I could take half his life. My companion turned into a retard during the fight too. "What? Use a bow.
Nah, I'll just run circles with my mace and burn like a shish kebab", at least that's what I think she was trying to say. Running into two wolves, I dismounted from my horse. To my shock, the horse trampled both wolves before I could circle around to attack. Thank you Ed, you did more than Lydia, who was still running around, 'cause she won't draw her weapon unless I draw mine. Dumb AI, but she's a good meat shield. MMMM, meat, I'm off to have some fill-it-mig-non. For anyone that liked the other Elder Scrolls, Gothic, Risen or Fallout games, I'd definitely recommend this one. My only complaint so far is the fact that the AI's pathfinding still never takes into account the fact that you can jump. I jump across a small gap to take a shortcut, look back and realise I'm now on my own, as my companion is never going to find a way to get to me, since they also don't know how to open doors. Zolonoid 4th progression Zolonoid 3rd progression Zolonoid part 2 or 1.1? Yeah, so only 4 and a half hours to be able to sleep last night. For the sake of not sleeping in again and getting yelled at for it.. I once again opted to stay up. Well, unfortunately I couldn't focus enough to get anything done. 36 hours later I feel like a zombie with insomnia. A quick rehash of the graphics... I'm a sucker for simple clean looks. Pretty sure I've got the collision detection close enough to perfect... though I'm sure it could be simpler and faster than the way I do it. There can be an infinite number of balls and blocks as long as they can fit on the screen, though without optimisations it seems ~2048 blocks and 32 active balls is getting close to the limit... I hit about 24-32 FPS. I'm sure some simple spatial bounds checks to eliminate the number of collision tests done would speed things up dramatically. As for now it's doing 65,536 tests just to see if any ball collided with a block. I don't see why I couldn't bring that down to around 2000 tests...
but it's not needed, and I've never seen a game of Arkanoid with more than 5 balls active. I also disabled the effect of balls dying when they hit the bottom of the screen... as it just causes the game to exit right away. To finish it off, I cleaned up a fair bit of the code, splitting it into actions (functions) for easier reading. Also to make it less likely I'll edit or place any code in the wrong place. Next I just need to slap together a title screen, a game-over state and a simple editor for visually designing levels. Tomorrow is Magic day, so wish me luck. Last time I got my rear end handed to me over and over again. I think I got too comfortable playing casual games and wasn't ready for half the cards I went up against. I had to revive my burn deck and revamp it to counter the stupid "no win, no lose" cards and fast control decks that were being used. So far it hasn't taken me more than 10 turns to win, and if they can't remove artifacts, I can usually take the game by turn 6. motivation through myxomatosis And I see black, black, green, and brown, brown, brown and blue, yellow, violets, red. And suddenly a light appears inside my brain. Back to coding along with the hundred other projects going on in my life. It's been several weeks if not a month or two since I've done more than tweak numbers. So I thought I'd put together a quick Day-Project. A clone of Arkanoid... or Breakout as the young ones call it. No need for pimple jokes now. I hit the ground running on this one. Coded everything up to the point I'm at without creating a single asset, or even hitting the build button. Told myself not to over-think it, just code the damn thing. So my fingers trembled on. An hour later I hit the build button... slapped a couple squares and a circle together in Photoshop... filled out a couple config files... set up a quick font with BMFont... and hit play. This is what came out. Like I said it's a day project, so I plan to finish it later after work.
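For the curious, the brute-force figure from the previous post checks out: 2048 blocks x 32 balls = 65,536 pair tests per frame. The "simple spatial bounds checks" idea is usually a uniform-grid broad phase, so each ball only tests blocks in the cells it overlaps. A hedged Python sketch (the structure and names are mine, not the game's):

```python
def build_grid(blocks, cell):
    """Bucket axis-aligned blocks (x, y, w, h) by the grid cells they touch."""
    grid = {}
    for b in blocks:
        x, y, w, h = b
        for cx in range(int(x // cell), int((x + w) // cell) + 1):
            for cy in range(int(y // cell), int((y + h) // cell) + 1):
                grid.setdefault((cx, cy), []).append(b)
    return grid

def candidates(grid, ball, cell):
    """Blocks worth a narrow-phase test against a ball (x, y, r)."""
    x, y, r = ball
    found = set()
    for cx in range(int((x - r) // cell), int((x + r) // cell) + 1):
        for cy in range(int((y - r) // cell), int((y + r) // cell) + 1):
            found.update(grid.get((cx, cy), ()))
    return found
```

Static blocks mean the grid can be built once per level; each ball then does a handful of lookups per frame instead of 2048 tests.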
Since I usually only have 4 hours to sleep on Fridays... I tend to just not, and take a nap on Saturday. I'll see if I can't finish it within the 4 hours. Finished, meaning to me that it should have a title screen, allow the player to enter their name, and keep a persistent score across several levels. It must have a minimum of three levels, each different in their own way, probably just by layout. The player should be able to lose, and it should say they won after the last level and show off their awesome score. I think I'll take this project, when it's done, as a way to brush up on my math knowledge. As you probably noticed, it uses a very hackish way of detecting collisions and tends to respond in horribly unrealistic ways (it works though). I think I'll expand my vector3 class beyond being a struct with three floats, and make it more of a true vector rather than a free vector. I would like to use a more mathematically accurate way of detecting collisions, the point they happen and how an object should react to them. The big one is detecting collision along an object's path, instead of detecting collision at the end of its movement like I do now. That's what causes the instances where it hits more than one block or gets stuck inside the walls and blocks. It's because the ball can move so fast it can pass through a block before a collision is checked... and so on. Hey, if I can keep this high going.. I'll sit down and put some time into Kylar's Volcano, since I haven't even looked at the project in a month. The engine should've been ready 3 months ago... go figure. Now at least I know more of my limitations... just got to figure out how to get over them and improve. Also for y'all intellect types that can read the books... I know ya out tur. If you like technology and the ways of the past... check out 'Timeline' by Michael Crichton. It's a good read, I finally finished it. For someone who reads a book every five years it was a big accomplishment ;P.
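On the tunneling problem described above (the ball moving so fast it passes through a block before a collision is checked), a cheap alternative to full swept collision is substepping: split each frame's movement into steps no longer than the ball's radius. A sketch, assuming a caller-supplied collide test:

```python
import math

def move_substepped(pos, vel, dt, radius, collide):
    """Advance pos by vel*dt in steps small enough that a ball of the given
    radius cannot skip past an obstacle between checks.
    `collide(pos)` should return True when pos is inside an obstacle."""
    x, y = pos
    dist = math.hypot(vel[0], vel[1]) * dt
    steps = max(1, int(math.ceil(dist / radius)))
    sx, sy = vel[0] * dt / steps, vel[1] * dt / steps
    for _ in range(steps):
        nx, ny = x + sx, y + sy
        if collide((nx, ny)):
            return (x, y), True   # stop at the last safe position
        x, y = nx, ny
    return (x, y), False
```

It is not as accurate as computing the exact time of impact, but it eliminates pass-through and the stuck-in-wall cases at the cost of a few extra checks on fast frames.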
Now I've got to buckle down and read the "A Song of Ice and Fire" series (the Game of Thrones books). Having already watched the show, the books move at such a slow pace, which is hard to get over when I know what is going to happen. I'm constantly thinking in the back of my head... "Oh I know he's going to die in the next episode... Just get to it already." Music - a blast from the 90's. Just a couple bands I recently rediscovered after not hearing them in 10-15 years. Check 'em out: The Rentals - "Barcelona" or "The Man With Two Brains" Meat Puppets - "Backwater" Pixies - "Something Against You" - "Gigantic" (Yes, that is what she's talking about) Breeders - (wasn't going to mention them, but I mentioned the Pixies, which was her first band) Face to Face - "Resignation" Moist Currently TDMTG The game would be turn-based. Each player would start out with a hand of cards, or tokens. Each of these tokens can be anything from units to abilities or boosts, debuffs and the sorts. Some of these will be energy producers, and generate a small amount of energy to use each turn, which doesn't carry over each turn and instead resets. Energy is used to play tokens. The play area would also be a grid, probably 7-9 units tall, allowing each player to place tokens 2-4 units deep on their own side. Each turn, if a token can move, it moves the stated amount. Tokens can never land on another token; if two opposing tokens would occupy the same tile, the moving token would attack. If your unit reaches the opponent's side, it deals damage to the player. The goal being to reduce the player's health to 0. [quote name='Zethariel' timestamp='1318110878'] I don't know if that counts as a TD. A TD includes towers and enemies in a lane going towards a point -- this would be more of a Tower Wars. Instead of turn-based, maybe real-time?
With creep cards that could travel one of the lanes/a single lane and fight oncoming monsters, with tower cards that can be placed beside the lane(s) and fire at incoming enemies. Buffs could include stuff like faster card drawing (as in, twice/thrice per "tick"), making the enemy throw away one of his cards etc. In any case, do explore the idea, it seems interesting [/quote] Then maybe more of an RTS-TD. I just like the idea of using abilities and other more strategic pieces, instead of just plopping down towers and hoping for the best. Let's explore the idea of real-time combat. Though that would throw out the idea of drawing from a stack of cards; I think you'd either have to have a huge stack or run out eventually. So say the player can choose a group of tokens to use, say 10-15 tokens; these would randomly pop up as available. The player would have a 5-7 token tray of currently available tokens. Every time one is used, another takes its spot, chosen randomly from the token types the player selected. When a player plays a unit (creature) token it immediately begins to move along the column it was placed in. If in range, it fires upon oncoming units. If it reaches the end it damages the player it hit. The other part of that: I wanted to make long-range units seem important, so what if they were able to attack units in adjacent columns, instead of being able to jump over units? Then abilities or boosts, as I was thinking of, would be things like +10% attack power. Also things like "Deal 50 damage to target unit", "Gain 5 life". These could help your units do more damage or kill enemy units to allow yours to get through. Debuffs could be things like "Slowed for 10 seconds", "-15% attack power". As a theme I'm thinking more a futuristic look: tanks, helicopters, mass amounts of robots and lasers.
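The tray mechanic described above (choose 10-15 token types for a match, keep a 5-7 slot tray, refill a used slot at random from the chosen pool) is easy to prototype. A Python sketch with assumed names:

```python
import random

class TokenTray:
    """Fixed-size tray refilled at random from the player's chosen token pool."""
    def __init__(self, pool, size=5, rng=random):
        self.pool = list(pool)   # the 10-15 token types picked for this match
        self.rng = rng
        self.slots = [self.rng.choice(self.pool) for _ in range(size)]

    def play(self, index):
        # Playing a slot immediately refills it from the pool,
        # so the tray never shrinks and the "deck" never runs out.
        token = self.slots[index]
        self.slots[index] = self.rng.choice(self.pool)
        return token
```

Sampling with replacement is what sidesteps the "huge stack or run out eventually" problem of a literal deck in a real-time game.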
https://www.gamedev.net/blogs/blog/1116-walking-towards-the-sun/
Converting Enumerations

You can convert enumerations to other values and vice versa. Enumeration values are actually numeric values that are just masked by their names so that they can easily be understood. Let's take a look at an example.

using System;

namespace ConvertingEnumerationsDemo
{
    enum Direction { North, East, South, West }

    public class Program
    {
        public static void Main()
        {
            Direction myDirection = Direction.East;
            int myDirectionCode = (int)myDirection;
            Console.WriteLine("Value of East is {0}", myDirectionCode);

            myDirection = (Direction)3;
            Console.WriteLine("Direction: {0}", myDirection.ToString());
        }
    }
}

Example 1 output:

Value of East is 1
Direction: West

In the first line of Main, we assigned the variable myDirection the value East from the Direction enumeration. With the default values for the items in the enumeration, East has a value of 1. The next line shows how to convert an enumeration item into its equivalent integer value using casting. The syntax is as follows.

variable = (DestinationDataType)enumerationVariable;

Since the myDirectionCode variable is of type int, it is expecting an int as its value. We simply put the destination type inside a set of parentheses and place that group beside the enumeration variable. The result returned is the converted value. Later in Main, we do the reverse: we convert an integer into an enumeration value. The value 3 equates to the enumeration item West. To convert it, we used the same syntax as converting a Direction to an integer. Take note that if the number you are converting is not within the bounds of the enumeration, it will still be converted, but no equivalent enumeration item will be assigned to it. For example:

myDirection = (Direction)10;
Console.WriteLine("Direction: {0}", myDirection.ToString());

Direction: 10

Since 10 is not the value of any of the enumeration items, the resulting output in the console is the number itself.
Converting a String into an Enumeration Value

You can convert a string into an enumeration value. Say, for example, you have the string "West" and you want to convert it to Direction.West; then you have to use a slightly more involved technique. You need to use the Enum class of the System namespace of the .NET Framework.

Direction myDirection = (Direction)Enum.Parse(typeof(Direction), "West");
Console.WriteLine("Direction: {0}", myDirection.ToString());

Direction: West

The Enum.Parse() method has two parameters. The first is the type of the enumeration; we used the typeof operator to return the type of the enumeration. The second parameter is the string to be converted to the enumeration type provided in the first argument. The value returned is of type object, so a conversion to the proper enumeration type is necessary. With those details, we can now write the syntax for converting a string into an enumeration item.

enumType name = (enumType)Enum.Parse(typeof(enumType), string);

If the string you pass to the method cannot be mapped or is not a member of the enumeration, then an error will occur.
https://compitionpoint.com/converting-enumerations/
The Problem of Marxist Centralism

There is a paradox about Marxism. Its goals are similar to anarchism: a classless, cooperative society, self-managed by the freely associated producers, with the replacement of alienated labor by craft-like creativity, and the replacement of the state by the democratic self-organization of the people. Yet in practice Marxism has resulted in the Social Democratic … This would include measures of such … character…" (1974, p. 87)

That is, they assumed there would no longer be a state — a specialized, bureaucratic, coercive body standing above the rest of society. However, there would be a centralized "vast association." Presumably such a centralized national association would be run by a few people at the center — which is what makes it centralized. Everybody else would be in those industrial armies. What if the masses in the industrial armies resented the few central planners and rebelled against them? The central planners would need coercive power to keep the system working. In other words, they would need a state, whatever Marx and Engels wanted.

After the 1871 rebellion of the Paris Commune, Marx and Engels changed their attitude toward the state. The old bourgeois state of the capitalists could not be simply taken over by the workers in order to carry out the above program, they wrote. The state of the capitalists would have to be destroyed. A new association would have to be put in its place, something like the Paris Commune, which was nonbureaucratic and radically democratic. Sometimes they called such a Commune-like structure a "state" and sometimes they denied that it was a "state." But this does not mean that they rejected centralization. Some people read Marx's The Civil War in France (his writings on the Commune) as decentralist.
The Revisionist (reformist) Bernstein said that Marx's views on the Commune were federalist, similar to the views of Proudhon (Bernstein was trying to discredit Marx as almost an anarchist). … officials, no full-time, long-term representatives with big salaries, but recallable delegates paid the wages of ordinary workers. These ideas are good, but at most they point to a better, more-democratic, but still centralized, representative democracy. It is as if the local people had nothing to do but to elect or recall their representatives, who would be political for them. The proposals do not deal with the need for local, face-to-face, directly-democratic councils, in neighborhoods or workplaces. If the people were not to be passive spectators at their own revolution, if they were to manage their own lives, they had to set up such self-governing councils (as both Bakunin and Kropotkin commented). In fact such neighborhood assemblies were created during the Paris Commune (as they had been during the French revolution of 1789). They included almost daily meetings to make decisions, to organize the community, and to organize the fight against the counterrevolution. But there is nothing of this in Marx's writing. Similarly, in Lenin's most libertarian work, State and Revolution, he reviews Marx's conclusions. The anarchists championed … a committed democrat, a leader of the most extreme wing of the 19th century German democratic movement. He was the editor of the most radical democratic newspaper of Germany. His paper fiercely criticized the moderate democrats for their capitulation to the monarchist regime. But even extreme German democrats were centralists. They fought against Germany's dismemberment into dukedoms and little kingdoms, each with its own court, money system, and tolls on roads. They wanted a unified republic, ruled by one central elected government.
They were impressed by the history of the French revolution, in which the most revolutionary bourgeois forces were the centralizing Jacobins (they thought). This was the opposite of the U.S. revolution. In the U.S., it was the most conservative forces (the Hamiltonian "Federalists") who were centralizers, and it was the more popular, democratic forces (the Jeffersonians) who were for a more decentralized federation. Jefferson greatly admired the New England town councils and wished he could import them into the rest of the country. (This decentralist political trend was to fail with the growth of the national state, until it was only used as a defense of racial segregation.)

After the failed 1848 German revolution, Marx and Engels decided that it was a mistake to expect the liberals to create a democratic republic. They proposed an alternate strategy in their 1850 Address of the Central Committee to the Communist League. They called this strategy "permanent revolution." Without going into all of what this meant to them, it included the idea that, during a revolution, the workers should organize revolutionary councils or clubs to watch over the bourgeois-democratic governments. The "workers' councils" should try to push them further, to win over the whole of the working class and the oppressed, and to overthrow the capitalist state in a socialist revolution. This strategy could have been interpreted in a decentralist fashion, and is not far from what Bakunin and Kropotkin were to advocate. But Marx and Engels gave it a centralizing form.

"The [pro-capitalist] democrats will either work directly toward a federated republic, or at least…they will attempt to paralyze the central government by granting the municipalities and provinces the greatest possible autonomy and independence.
In opposition to this plan, the workers must not only strive for the one and indivisible German republic, but also…for the most decisive centralization of power in the hands of the state authority… As in France in 1793, it is the task of the genuinely revolutionary party in Germany to carry through the strictest centralization." (1974, p. 328–329)

However, 35 years later, and after the experience of the Paris Commune, Engels republished this Address but added a footnote to precisely this passage. He wrote that he and Marx had been wrong to accept the standard view of the French revolution as having been centralizing. The revolution had had a great deal of federalist looseness. It was only Napoleon who set up centralist rule through appointed prefects, as "simply a tool of reaction." (1974, p. 329) Instead Engels wrote that he would prefer a federalist approach similar to that of the U.S. (at a time when the U.S. was a lot more decentralized than today). This is far better than the original advocacy of "the strictest centralization." But, among other things, it still focuses on elected officials and says nothing at all about localized direct democracy. The last sentence is puzzling. He may simply mean "unification" when he writes "centralization," meaning that local self-government would not prevent overcoming the feudal divisions of old Germany, creating a unified nation, which was needed at the time. But the statement is ambiguous at best. In any case, this footnote (and a few other comments) by Engels had little effect on the overall pro-centralism of the Marxist movement.

Marxism has made many contributions and anarchists have much … One such problem is its consistent centralism, even at its most democratic. In this area, anarchism has been right in its advocacy of a decentralized federalism, what today has been called "horizontalism." This is one of the great strengths of anarchism.

References

Marx, Karl (1974). Political Writings Vol. I: The Revolutions of 1848 (David Fernbach, ed.).
NY: Vintage Books/Random House.

The Anarchist Library
October 1, 2010
Anti-Copyright.
http://theanarchistlibrary.org
Author: Wayne Price
Title: Decentralism, Centralism, Marxism, and Anarchism
Publication date: 2007
https://tr.scribd.com/document/44223038/Wayne-Price-Decentralism-centralism-marxism-and-anarchism
I'm trying (for the first time) to set up dynamic resource allocation for some jobs on a local Galaxy instance. I've read the docs here and can get a toy example working, but I know very little Python and don't fully understand how to access the job parameters from within the rules script. The tool is the phyml wrapper - there's a pruned-down snippet here. My rules file looks like this:

from galaxy.jobs import JobDestination
import os

def phyml(job):
    # Allocate extra time/cores to bootstrap jobs
    # support = job.parameters.????  - how to access branchSupport value?
    # if support > 0:
    #     param_str = "-l walltime=168:00:00 -l nodes=1:ppn=60 -q batch"
    # else:
    param_str = "-l walltime=24:00:00 -l nodes=1:ppn=1 -q batch"
    return JobDestination(runner="torque", params={"nativeSpecification": param_str})

This works but obviously doesn't do any actual decision-making as is, and even though I'm sure it's very basic Python, I can't figure out the correct syntax to access the "branchSupport" value from the job. I'd greatly appreciate any help. Thanks, Jeremy
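[Editor's note] One way to keep the rule testable outside Galaxy is to separate the decision logic from the parameter lookup. In Galaxy dynamic rules the lookup is commonly done with something like `job.get_param_values(app)` (with `app` added to the rule function's signature) — treat that call, and the `branchSupport` key, as assumptions to verify against your Galaxy version. The decision part is then a plain function:

```python
def choose_native_spec(support):
    """Pick Torque resources based on the tool's branch-support setting.
    `support` is assumed to come from the job's parameter dict, e.g.
    something like job.get_param_values(app).get("branchSupport", 0)."""
    if int(support) > 0:
        # bootstrap jobs: a week of walltime and a whole node
        return "-l walltime=168:00:00 -l nodes=1:ppn=60 -q batch"
    # everything else: a day on a single core
    return "-l walltime=24:00:00 -l nodes=1:ppn=1 -q batch"
```

The rule itself would then just call this with the extracted value and wrap the result in a JobDestination.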
https://biostar.usegalaxy.org/p/17886/index.html
Make Your Own Start-up Screens in Mac OS X - Change the plist, change the Message Shown - What About the Remote User? - Summary Want a custom banner screen to show up during your startup process? Mac OS X offers a simple way: If you change a few lines of a text file, the OS will make simple message displays for you at startup. Larry Loeb takes you through the process step-by-step. Summary: OS X offers a simple way to have banner information displayed during startup compared to the process needed for previous systems. In this article, Larry takes you through the process step by step. In "Classic" Macs running pre-Mac OS X systems, it was pretty easy to have a custom banner screen show up during the startup process. Of course, you usually had to create the screen that you wanted with a resource editor, but that wasn't such a big deal. ResEdit was always an approachable system tool that gave you great feedback on what you were specifying. Once the screen was created, it could be easily recognized by the Classic OS during startup and displayed. But OS X doesn't have that kind of special mechanism any more. Things that will be recognized by the system for display (or much else, for that matter) have to use a common interface specification—that of the XML language. XML uses text lists in a very specific order to pass data. In many ways, it's like the HTML code that produces Web pages. So, instead of using a resource editor to create a startup screen for OS X, it's necessary to use a text editor. Fortunately, there is one already built into OS X. The Terminal application (which can be found in Volume/Applications/Utilities) can access the "pico" text editor used by many UNIX systems for just such a text-wrangling task. Change the plist, change the Message Shown The file we need to edit is called a property list, and has a suffix of '.plist'. This file determines the content of the login window that OS X displays as the system tries to log in a user.
The name of the file we need to edit is 'com.apple.loginwindow.plist'. It can be found at the path Volume/Library/Preferences/com.apple.loginwindow.plist. Start up the Terminal application and type the following command (all on one line, without any returns between the words):

sudo pico /Library/Preferences/com.apple.loginwindow.plist

You are then prompted for the administrator password, which is typed in and followed by a return (see Figure 1). Figure 2 shows what Terminal should then look like.

Note the <dict> tag that is visible at the end of the plist. Just after that tag, we have to add new lines that will contain <key> and <string> entries. The new <key> tag must contain 'LoginwindowText' as its data, but the new <string> can contain whatever text you desire.

Figure 3 shows how the cursor is first moved downward in pico by using the arrow keys. The cursor is then moved to the end of the <dict> section (just before the </dict> tag) with the arrow keys, and a return typed. Pico should then look like Figure 4.

Now, type in the data shown below. You can type a return at the end of the lines if you want: the XML parser ignores them, looking only for the tag identifiers, so you can still make the file human-readable. Insert your own favorite saying for the data in the string tag (see Figure 5).

<key>LoginwindowText</key>
<string>Your banner text goes here</string>

You're almost done. You must write the file out by pressing the control (ctrl) key at the same time you press the O key. After that, you can exit pico by pressing the control key and the X key. You can now quit the Terminal application.
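If hand-editing the XML feels error-prone, the same change can be made with a single Terminal command using the defaults tool. This is a sketch not taken from the article: it assumes you have an administrator account, and that your version of Mac OS X includes the standard defaults command.

```
sudo defaults write /Library/Preferences/com.apple.loginwindow LoginwindowText "Your banner text goes here"
```

Because defaults rewrites the plist for you, there is no risk of leaving the XML malformed by a typo in the tags.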
http://www.peachpit.com/articles/article.aspx?p=400994&seqNum=2
Hi, working on my first CMF-based buildout and I'm getting an error in CMFCore.DynamicType:

from zope.app.publisher.browser import queryDefaultViewName
ImportError: No module named publisher.browser

My buildout is almost exactly the same as the CMF sandbox but I can see that I have zope.app.publis...@3.9.0 in the sandbox but zope.app.publis...@3.10.0 in my new buildout - both have been freshly updated and bootstrapped. The error is caused by the removal of the BBB imports in zope.app.publisher.browser.__init__.py:

from zope.publisher.defaultview import queryDefaultViewName

et al. Two questions:

1) How do I tie my buildout to 3.9.0?
2) Should we be updating for the change in CMF?

Charlie
_______________________________________________
Zope-CMF maillist - Zope-CMF@zope.org
See for bug reports and feature requests
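For question 1), the usual zc.buildout approach is to pin the egg in a versions section that the [buildout] section points at. This is a sketch, not from the thread: the section name "versions" is only a convention, and your buildout may already have such a section to extend.

```
[buildout]
versions = versions

[versions]
zope.app.publisher = 3.9.0
```

After re-running bin/buildout, the 3.9.0 egg should be used, restoring the BBB import until the affected code is updated.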
https://www.mail-archive.com/zope-cmf@zope.org/msg00152.html
Author of this tutorial and NetBeans plugin: Denis Anisimov

This page isn't a Vaadin tutorial. It is a tutorial for the Vaadin plugin for NetBeans and a description of its functionality. The purpose here is to enable comfortable code writing without needing to google information about various configuration aspects, and to avoid the most popular and hard-to-detect mistakes made by newbies and experienced programmers alike. Some Vaadin framework usage details will be covered here as well, but please see the Vaadin section for the links.

Vaadin - Web application framework for rich Internet applications.
Framework link: vaadin.com
Download link: current release
NetBeans plugin: current release
Book: Book of Vaadin
Vaadin tutorial: tutorial
Demo: examples

There is no need to download any binaries except NetBeans and the Vaadin plugin to be able to create a Vaadin based project. Download the nbm file, go to Tools->Plugins, the Downloaded tab, and press the "Add Plugins..." button. Choose the downloaded file and click the "Install" button. Once the plugin is installed there will be a "Vaadin" category in the "New Project" wizard. Choose this category and the "Vaadin Web Application Project" template.

The project is going to be Maven based, so it's possible to set up Maven related attributes on the second step panel. If you are not familiar with Maven then you don't have to change the default values here, except the name of the project. Maven is used transparently for the user, so there is no need to worry about it. All build and deploy actions are performed via the NetBeans UI. The project will be created after the wizard finishes. All libraries will be downloaded for you transparently by Maven. See the resulting project tree on the right side.

The project template contains the GWT module file AppWidgetSet.gwt.xml, which is required for custom widget development. There are a lot of ready-to-use widgets in the Vaadin framework, so there is no need to create your own custom widget unless you have some specific requirements.
Here you can find all available components and usage code snippets. If you want to use an Add-On component or a custom component then the project has to contain the mentioned GWT module file (see below for the custom component case). We don't need it at the moment, so let's delete it. Deleting the module file speeds up project assembly: if a project contains client-side code (custom widgets), then that code has to be compiled into JavaScript. This compilation is done transparently during the project build phase if there is a GWT module file. Core Vaadin framework components are already compiled, so there is no need to compile them. The compilation process takes significant time and slows down development.

Warning: Use the "Refactor->Safe Delete" action to delete the GWT module file. The "Delete" action only deletes the file but doesn't update references to it. There is a reference to the file in the servlet configuration: either the @WebInitParam parameter of @WebServlet or the widgetset parameter of @VaadinServletConfiguration has a value with the GWT module's fully qualified name. A broken servlet configuration doesn't produce any error at the project build stage, but the web application will be broken at runtime. This is the first easy-to-make and hard-to-detect issue which may be a stumbling block for a newbie. Plugins for other IDEs don't support GWT module file refactoring, so with them you always have to be aware of this configuration issue. The NetBeans plugin, however, shows an editor error even if you have used the plain "Delete" action. Once you open MyVaadinUI (which is the entry point for the Vaadin application and contains the servlet configuration) you will see an editor error glyph.

You can build and run the project after the module file deletion. Use the "Build" action from the project's popup menu. The target J2EE application server can be either an IDE registered server (Services tab, Servers subnode) or the embedded lightweight Jetty server.
Make sure you have selected a J2EE server in your project properties to be able to use an IDE registered server (Project properties->Run category->Server setting). If there are no servers available then you have to register one via the Services tab, for example. Use the "Run" action to start the project. The project will be deployed and its URL will be opened in the browser.

The embedded Jetty server can be used instead of an IDE registered server as well. Select the "Embedded Jetty->Run" popup menu action to start Jetty. The Jetty server is run via a Maven plugin. In that case the browser won't be opened as part of the project deployment, so you should open the application URL in your browser manually. The application URL is "{context_path}", where {context_path} is the name of your project. You can find this value in the Jetty server console output: look for the string Context path = xxxxx. Notice: you can switch Jetty off if you don't use it. Go to the Options dialog (Tools->Options), choose the Java and Vaadin tab, and uncheck the "Show Jetty Popup Menu Action" option.

The application created above consists of a button which appends text labels below itself. This is the simple application generated by the Maven archetype. We are not going to extend this example further on the server side, because this text is not about Vaadin, as has already been mentioned. If you don't have any special requirements for your application and server-side development is enough, then any plugin is good for this purpose. The functionality described above can be found in any Vaadin IDE plugin.

There are a lot of implemented components available and ready to use in the Vaadin core framework, so perhaps there will be no need to create a custom client-side component at all. But there are a number of configuration requirements where you can go wrong once you realize that such a custom component is required. A mistake can be made simply because you forgot to write something in the source code or to add some artifact.
As a result you will spend time finding the reason for the mistake. The plugin allows you to avoid such mistakes.

So let's assume you want to extend your application so that it shows a tooltip when the mouse is over the label text. This can be done without any client-side code. Let's create a new class LabelEx which extends the Label class and sets the tooltip text in its constructor. The result is:

There is an issue: it's not convenient to show the tooltip text if it's big. So an additional requirement will be: show a short tooltip text on mouse-over, and show the full text only on a mouse click inside the tooltip (this is similar to how Eclipse Javadoc tooltips work). It can be done in many ways, and two of them are going to be used here to show the small mistakes which lead to a broken application. Let's discuss them in detail.

The most natural way to start is by extending the existing Label class, especially since its subclass has already been created (LabelEx). The expandable tooltip functionality cannot be implemented by extending the server-side Label component only; client-side coding is required here. There is a way to start with client-side widgets via wizards. Those wizards are useful when you create a component from scratch; one of the wizard templates will be used later on to show the second way to implement our use case. But now we already have the LabelEx class extending Label, and it's reasonable to start with it. The wizards are not convenient here: you would have to go through the created classes and correct them all.

To extend the client-side code for the Label class one needs to create a connector and override its getWidget() method, which has to return your own client-side widget. Let's create a dedicated package "com.mycompany.vaadin.labelex" and a class LabelExConnector inside it. Add an "extends" clause to it once it is created, so that it derives from LabelConnector. The latter class is used to connect the Label class on the server side with its client-side code.
The method getWidget() is inherited by the created LabelExConnector from its superclass. This method returns the VLabel widget class available in the Vaadin core framework. So we haven't added any new functionality for the Label class, but now it seems that we at least have a client-side entry point where further development could be done. It means that after a project rebuild everything should work in the same way as previously, but based on the new LabelExConnector. It turns out that the latter class won't be used at all. So what's the issue? There are a lot of issues here, actually, and they are not obvious in this specific case.

package com.mycompany.vaadin.labelex;

import com.vaadin.client.ui.label.LabelConnector;

public class LabelExConnector extends LabelConnector {
}

A source code snippet is used here instead of an image to hide the actual errors, because the NetBeans plugin will show all the problems in the editor. Let's list all the issues with a detailed description and then show how they are marked as editor errors:

The issue described in the last item is one of the most popular problems, made accidentally or by beginners who don't know all the Vaadin framework details. It's a result of a convention defined by the GWT compiler that you just have to remember and know. In that way we get the most popular and very hard-to-detect mistake.

There is a reasonable question: why didn't we use the wizards from the beginning, since they handle all possible cases? Of course we could have. The only issue is the way the wizard works. It generates files from templates and there is no way to configure the generated code flexibly; otherwise a lot of parameters would have to be set and the UI would be oversaturated with control elements. A wizard has to be as simple as possible so that it can be used quickly. On the other hand, it then turns out that it's necessary to use the wizard for any action related to writing client-side code, because otherwise there is always a risk of making a simple mistake.
The requirement to always use the wizard is quite strange. The IDE should deal with these kinds of mistakes. It should help the developer solve routine issues quickly, in such a way that the developer can care only about the application business logic. That is exactly why we use IDEs. Fortunately, the plugin provides all this functionality for you. If you open the connector file in the editor it will show the errors and offer to fix them. It's impossible to detect the client package without a GWT module file, so you will see the error mark for its absence and for the missing connect annotation. You will see the wrong client package error mark after fixing the previous issues. One can go through all the errors shown in the editor and use the suggested fixes via the assistant. It means you don't have to keep all the framework configuration details in your mind; the IDE generates all the necessary artifacts and corrections transparently. But let's stop here with this approach and move forward to the next one. Just clean up the source code created so far and delete the LabelEx and LabelExConnector classes.

Let's go the second way and use the wizards now. The wizard allows you to create components from scratch as well as based on existing components. There are two templates for creating a custom widget: a full-fledged template, which creates a server-side component, a client-side component and all the classes that can be used to communicate between them (RPC interfaces and their implementations, a shared state class), and a simple template, which creates only the server-side class and its connector. The first template is useful only when you know in advance that you will need full interaction between client and server in both directions. Typically that's not the case: the development process is iterative and performed step by step. This makes the full template inconvenient, because it generates everything at once even though it's not all required: stubs remain unimplemented for a long time until you get round to them.
This is exactly our case, so let's use the second template, which will create a minimal basis on which to continue development. To generate classes using the selected template one can click the "Finish" button right on the current wizard page. The other option is to go to the next page via "Next>" and select an existing component to inherit from. The first option is useful for fast component creation from scratch; in that case the generated server-side component extends AbstractComponent and its connector extends AbstractComponentConnector, and to adapt the generated classes to our requirements we would have to go through them and change their extends clauses. The second option can be used when you want to extend existing component functionality. This approach is preferable in our specific case because it generates the required code immediately and there is no need to make any corrections. Just choose the class com.vaadin.ui.Label on the last step. As a result the wizard will generate the server-side component class and its correctly annotated connector in the right package. The GWT module file will be created as well (if there isn't one) and the servlet configuration will be correctly updated.

The wizard generates three files as the result of its work: the LabelEx server-side class, its connector LabelExConnector and the widget class LabelExWidget.

package com.mycompany.vaadin;

import com.vaadin.ui.Label;

public class LabelEx extends Label {

    public LabelEx() {
    }
}

package com.mycompany.vaadin.client;

import com.vaadin.client.ui.VLabel;

public class LabelExWidget extends VLabel {
}

LabelEx will now behave in the same way as the standard Label class if you build the project and run it. A tooltip on the text will not appear, because the generated LabelEx constructor does not call the setDescription() method. We are not going to use this method, because it would require knowledge of Vaadin tooltip implementation details.
Let's use static text instead and extend the client-side class: implement the required functionality in the LabelExWidget class. We already have all the necessary binding between the server-side and client-side classes, so the only remaining thing is to put the code inside the LabelExWidget class. Let's skip the deep implementation details. Below is code based on the GWT API and Vaadin client classes which does what we need.

package com.mycompany.vaadin.client;

import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.event.dom.client.MouseOutEvent;
import com.google.gwt.event.dom.client.MouseOutHandler;
import com.google.gwt.event.dom.client.MouseOverEvent;
import com.google.gwt.event.dom.client.MouseOverHandler;
import com.google.gwt.user.client.Timer;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.TextArea;
import com.google.gwt.user.client.ui.Widget;
import com.vaadin.client.ui.VLabel;
import com.vaadin.client.ui.VOverlay;

public class LabelExWidget extends VLabel {

    private static final String SHORT_TOOLTIP = "Short tooltip";
    private static final String FULL_TEXT = "...";

    private final Timer timer = new Timer() {
        @Override
        public void run() {
            overlay.hide();
        }
    };

    private final Label tooltip;
    private final TextArea textArea;
    private VOverlay overlay;

    LabelExWidget() {
        overlay = new VOverlay(true);
        tooltip = new Label(SHORT_TOOLTIP);
        overlay.add(tooltip);
        overlay.setOwner(this);
        textArea = new TextArea();
        textArea.setText(FULL_TEXT);
        textArea.setReadOnly(true);
        MouseListener listener = new MouseListener();
        addHandler(listener, MouseOverEvent.getType());
        addHandler(listener, MouseOutEvent.getType());
        overlay.addDomHandler(listener, MouseOverEvent.getType());
        overlay.addDomHandler(listener, MouseOutEvent.getType());
        overlay.addDomHandler(listener, ClickEvent.getType());
    }

    private class MouseListener implements MouseOverHandler, MouseOutHandler, ClickHandler {

        @Override
        public void onMouseOver(MouseOverEvent event) {
            timer.cancel();
            if (!overlay.isShowing()) {
                overlay.clear();
                overlay.add(tooltip);
                overlay.showRelativeTo(LabelExWidget.this);
            }
        }

        @Override
        public void onMouseOut(MouseOutEvent event) {
            timer.schedule(100);
        }

        @Override
        public void onClick(ClickEvent event) {
            Widget widget = overlay.getWidget();
            if (tooltip.equals(widget)) {
                overlay.clear();
                overlay.add(textArea);
            }
        }
    }
}

The one detail that is important to note here for the subsequent text: both the short and the long tooltip text are statically defined in the class and cannot be changed dynamically. This differs, for example, from the way the content text can be set for the Label class: the latter can be set on the server side and changed dynamically at runtime. Let's implement the same behavior for our custom tooltip: we need the ability to set the short and full text values on the server side.

The required interaction between server and client can be handled with a shared state class (a subclass of SharedState). We would like to create methods in the LabelEx class that allow setting the short and long tooltip text. Those methods should initialize fields of the state class, and that state will be transferred to the client at the server's request. But what is the most convenient way to do that? Of course you could create a new class and add an "extends" clause to it, but then you have to take care that the superclass is correct. Our custom LabelEx component extends the Label component, and the latter already has its own state class; this means the new custom state class has to extend the Label state class. So state class creation requires several steps. There is no wizard for this, but a wizard wouldn't solve the several-steps issue anyway: you would have to call the wizard, choose the file type, select a location for it and choose the superclass. And all this information except the class name is already available in the context of the server component or its connector.
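As a sketch of where this ends up, a minimal custom state class might look like the following. The class and field names here are hypothetical, and LabelState is assumed to be the shared state class used by Vaadin's Label component.

```java
// Hypothetical shared state for LabelEx. It must extend the state class
// of the superclass component (Label) so the inherited state keeps working.
public class LabelExState extends LabelState {
    // Fields are public by Vaadin shared-state convention; they are
    // serialized and transferred to the client automatically.
    public String shortTooltip = "Short tooltip";
    public String fullTooltip = "";
}
```

The getState() overrides that the assistant generates in LabelEx and LabelExConnector would then narrow their return type to this class.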
So the editor assistant offers a hint to create this class, and the hint is available as an editor annotation attached to the server component class and its connector class. In order to get this hint you should choose the class as the editor context (the cursor must be somewhere inside the body of the class), e.g. put the cursor on its name in the declaration. There are two fixes available for the hint. The first one creates the state class and defines a getState() method only in the current context class. The second fix defines the latter method in the paired class as well. You can specify a class name in the dialog which appears when a fix is chosen. As a result of the fix you get a state class with the correct superclass, and the method getState() will be defined in the class itself, and in its paired class if the appropriate fix has been chosen.

Let's add the state class usage logic into the server-side component and the client side to transmit its content in between. The code below allows you to set the tooltip values on the server side. Thus we have implemented the functionality that allows setting the tooltip text on the server side.

Let's assume now that we want to be notified when the short tooltip text is clicked, so that one can add listeners for this event on the server side. This could be useful in order to set the long tooltip text dynamically exactly at the moment when the click happens, instead of doing it beforehand. We are not going to delve into the details of exactly how this could be done; only a sketch of the implementation will be shown here, based on the features provided by the IDE for developing RPC communication. We already have the client-side method onClick() implemented in the private MouseListener class (a nested class in LabelExWidget). That method is called when the mouse is clicked; the only thing we should do is notify the server about the event. To do that we have to use RPC communication. We need an interface that extends com.vaadin.shared.communication.ServerRpc.
This interface represents the server-side interaction: calls of the interface methods are transmitted over the network to the methods implemented by the server-side class. One can create an RPC interface and register it manually on the client side in a few steps. There is a wizard which creates a new component from scratch with all the interaction classes (the full-fledged template). This wizard has already been mentioned above, and it's inconvenient to use here because we want only one-way interaction and we already have a component class. It makes no sense to use the wizard to generate classes only as code templates (copying the snippets and then deleting all the other generated artifacts). That's exactly the kind of iterative component development which you often have to deal with.

The situation we have here is similar to the state class creation case. We don't need a wizard for it, because the context provides all the information except the RPC interface name. The server component class or the connector class can be used as the context (its reference is required to be able to create a proxy class for the RPC interface, see below). You can choose either of them, but it's reasonable to start with the connector in this case. The editor assistant offers hints and fixes for the connector class as a context, just as with the state class. An empty interface, along with a new method in the connector class, will be created as a result of the proposed fix for the hint. The latter method returns a reference to the interface proxy, so that the interface can be used by calling its methods on the client side. All you need to do now is add the appropriate method to the generated interface and call it somewhere in the connector code. That's the only thing the IDE is not able to do on its own, because this is the application business logic. All the routine binding work has been done within the minimum required steps.
So the only remaining action on the client side is the RPC interface method call (of course you have to create the method there first) from the nested MouseListener class. You just have to somehow provide access to the RPC proxy reference, which is available inside the connector but not inside the LabelExWidget widget class. That's a trivial implementation detail and can be solved in many different ways. Let's omit those remaining details here and move on to the server component. It's necessary to create an RPC interface implementation class on the server side and write the appropriate business logic there. When you open the server component class LabelEx in the editor, the assistant shows a hint attached to it. The hint reports the absence of a registered server RPC interface implementation. The editor assistant finds the existing server RPC interface and offers either to use it or to create a new one from scratch. The fix based on the existing interface creates an anonymous stub class inside the LabelEx class and assigns a reference to it to the rpc field. Also, a registerRpc method call is added to the LabelEx class constructor; this call registers the anonymous class reference so that its methods can be called when they are requested from the client side. Thus all the routine work has been done by the assistant again. All that's left to do is implement the generated anonymous class whose reference is assigned to the rpc field. The latter class is empty because we hadn't added a method to the RPC interface before applying the fix (if the interface had had the method before the fix was applied, then the anonymous class would have been generated with a stub method implementation). The method should be added and implemented inside the anonymous class. A well written implementation should notify the listeners registered on the server component with a custom event, but these are business logic details and component API, which are out of the scope of this article and can easily be done by the developer.
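To make the shape of the result concrete, here is a minimal sketch of the pieces just described. The interface and method names are hypothetical; ServerRpc and registerRpc come from the Vaadin framework, as mentioned in the text.

```java
// Hypothetical one-way RPC interface: its methods are invoked on the
// client side and executed on the server side.
public interface LabelExServerRpc extends ServerRpc {
    void shortTooltipClicked();
}

// In LabelEx: an anonymous stub implementation assigned to the rpc field,
// registered in the constructor via registerRpc(rpc).
private LabelExServerRpc rpc = new LabelExServerRpc() {
    @Override
    public void shortTooltipClicked() {
        // business logic, e.g. set the long tooltip text dynamically
        // and notify registered listeners with a custom event
    }
};
```

This is only a sketch of the generated structure, not runnable on its own: it depends on the Vaadin framework classes being on the classpath.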
The main purpose of this article is to show how to do iterative development following the editor assistant, without using wizards. There are a lot of different cases where one can forget to add required artifacts, or fail to take something into account, while developing client/server communication. The editor assistant is your best friend in such cases. Here you can find a few tips which will be useful when working on the implementation of this interaction. In particular, the same method described above can be applied to RPC interaction from the server to the client, to state class related issues (such as serialization), and to RPC interface methods and their arguments.

Besides the Dev Mode, there are a number of options that allow you to speed up the compilation process and change its behavior. Those options are available via the project properties: the Vaadin category contains several sub-categories which allow you to configure the compiler.

You may realize at some point that there is no standard Vaadin core component for your purpose. Try searching for an existing component before starting to write your own. The Add-Ons Directory contains a large number of Add-Ons, which are UI components and libraries extending application functionality. Find the required Add-On by specifying search parameters, copy the XML snippet and insert it into your POM file. All Add-On files will be downloaded from the remote Maven repository and added to the project as dependencies at the build phase.

Another option is available to you here instead of using an external web browser application to search for add-ons. There is an Add-Ons Browser dialog which can be opened via the action "Vaadin->Open Add-Ons Browser". The dialog that you get as a result of this action allows you to search Add-Ons in a similar way to the Add-Ons Directory web page mentioned previously. This browser shows almost the same information as the original directory.
You can get basic information about an Add-On and quickly go to the complete Add-On description web page via the available link. Search results can be sorted by the columns in the table, and the selected Add-On can be added to the project as a dependency via the "Add" button. The POM file will be updated with the appropriate Add-On snippet as a result of this action, and all dependency libraries will be downloaded from the Maven repository transparently for you.

You can use an Add-On in your project even more easily and quickly if you know the exact names of the Add-On classes (e.g. you have already used it previously, or you are just guessing). Just use the standard IDE code completion for this purpose. The code completion list is extended with classes available via the Directory. So even if a class is not in your project classpath, if its name starts with the prefix typed in the editor it will be listed among the other code completion items. To see this, type the class prefix and press Ctrl-Space, and you will see the matched classes from the Directory; they are rendered in a different color. When a Directory code completion item is selected, the standard behavior is applied along with the appropriate POM file modification, and the project dependencies are extended with the downloaded artifacts.

Code completion class items are classified into three categories: server-side classes, client-side classes and classes for tests. Server-side classes are shown only for code inside the server component classes; client-side classes and test classes are shown only for client code and tests, respectively.

All information about the available Directory Add-Ons is taken from a local cache, so there is no need to request the remote server each time to obtain and search this information.
Therefore there are no network-related performance issues or server response delays. Information relevance is managed by periodic cache updates, which are configured via the global settings. All cache synchronization settings have five values. These settings give you full control over the information transmitted over the network. It's possible to completely disable the cache update if you do not want to send remote Internet requests. You can completely disable the Directory code completion as well, if you do not want to use this functionality and it obstructs your work.
http://wiki.netbeans.org/VaadinPlugin
Less! In this article I'll explore some of those gems, no pun intended, that could deliver you straight to Ruby zen.

Kernel

The Kernel module provides a variety of methods which are available to all objects that inherit from Object, since Object mixes in Kernel. Why do I make this distinction? Well, as of Ruby 1.9 the BasicObject class has been added and is now the parent of all classes. BasicObject does not have the methods provided by the Kernel module.

__callee__

This oddly named function returns the name of the current method as a Symbol.

def add(a, b)
  log("#{a} + #{b}", __callee__)
  a + b
end

def log(msg, caller)
  puts "##{caller}: #{msg}"
end

In the example above the log method takes two parameters: a message and the caller. The add method makes use of the __callee__ function to pass the name of its method to the log function. If you fire up the irb console and type in those two methods followed by add 3, 5, you'd end up with:

#add: 3 + 5
=> 8

Ok, so that's pretty basic but kinda nifty, right? Sure, you could just pass :add to the log method instead of using __callee__; after all, you know you're in the add method, right? The slight advantage in the use case provided above is that if you were to rename the add method to my_super_add_method you wouldn't have to update your call to the log method. Feel free to come up with some other handy uses and post them in the comments section of this article.

at_exit

If you ever need to run some code just before your program exits, then you should check out at_exit. It registers a block as a handler which gets invoked when a Ruby process is about to exit. In fact you can register multiple handlers, as shown in the following example, at_exit_example.rb.

at_exit { puts "the program is exiting" }
at_exit { puts "the program is really exiting" }

puts "doing some stuff"
raise "Uh oh, something bad happened!"

The handlers will be invoked in last in, first out (LIFO) order.
If you put the above code into a file, perhaps named at_exit_example.rb, and ran it with `ruby at_exit_example.rb`, you'd get:

```
doing some stuff
the program is really exiting
the program is exiting
at_exit_example.rb:10:in `<main>': Uh oh, something bad happened! (RuntimeError)
```

As you've seen, at_exit handlers will be invoked when the Ruby process is about to exit, even if it's because of an unhandled exception. This could be useful for cleaning up some files that might have been created by your program, closing connections to servers, or logging some context that might help debug an issue with the program (if it's a program that shouldn't normally exit on its own).

block_given?

This may be one function you've run across already, but if not, now's a good time to learn about it. If you've ever passed a block to a function then there's a good chance the function being called is using block_given?. Below is an example of how block_given? could be used.

```ruby
def sum(values)
  error = result = nil
  begin
    result = values.inject do |total, value|
      total += value
    end
  rescue Exception => ex
    error = ex
  end

  if block_given?
    yield result, error
  else
    return result unless error
    raise error
  end
end

sum((1..5)) do |result, error|
  puts "Error: #{error}"   # => nil
  puts "Result: #{result}" # => 15
end
```

The sum function takes a collection of values, calculates their sum and returns the result. The function checks to see if a block has been given, hence the block_given? call, and if it has then it will yield the result and error. If an exception occurred while performing the sum then it will be caught and given back to the block. However, if a block isn't given, the sum function will calculate the result and return it. If an exception occurs in this scenario then the sum function will raise it instead of returning nil or some other value to indicate a problem. If you want to allow for a block to be passed in to your functions, try giving block_given? a whirl!
Object

The Object class is oddly described as:

"Object is the root of Ruby's class hierarchy. Its methods are available to all classes unless explicitly overridden."

I say oddly because, as I mentioned earlier, BasicObject is the new root, top-level class in Ruby's class hierarchy. I suppose the documentation just needs to be updated, something you or I could volunteer to do ;). The Object class gets a lot of its behavior by mixing in the Kernel module, but also has some of its own goodies such as method, described below.

method

Ever wonder how you might get a reference to a method? Many languages provide this capability, but check out how concise it is with Ruby:

```ruby
m = Calculator.method(:sum) # => assume "sum" is a class method
m.call 3, 4                 # => 7
```

Try doing that in Java and you'll easily add more code, including multiple catch clauses to handle the myriad of potential exceptions that might be thrown.

ObjectSpace

The ObjectSpace module is described as follows:

"contains a number of routines that interact with the garbage collection facility and allow you to traverse all living objects with an iterator."

Cool, huh? I thought so, which is why I'll show you each_object and _id2ref.

each_object

The each_object function provides an iterator over all live objects (for this Ruby process). The enumerator can be filtered by type in case you're only interested in particular types of objects. You might use each_object, and the ObjectSpace module in general, to aid in debugging or profiling.

```ruby
class Foo; end

ObjectSpace.each_object(Foo) do |foo_obj|
  puts foo_obj
end
# => 0 ...there are no instances yet.

# create a new instance and print the object_id
puts Foo.new.object_id # => 2160309640

ObjectSpace.each_object(Foo) do |foo_obj|
  puts foo_obj.object_id # => 2160309640
end
```

Of course you can iterate over any type of object, not just your own types. Go ahead, take a look at all the String objects!
_id2ref

If, for some reason, you happen to know the object_id of an object running in your Ruby process, then you're in luck. You can grab a reference to that object using ObjectSpace._id2ref.

```ruby
f = Foo.new
f.object_id                          # => 2160309640
ObjectSpace._id2ref(2160309640) == f # => true
```

Plenty More

Hopefully you've picked up something new and cool after reading this, but I bet you want more, right? If so you might want to check out some of these:

The point is, Ruby has lots of interesting code available for your exploration. Do you have experience with some less-common Ruby classes, modules or functions? If so, post a comment and share with everyone!

- Marnen Laibow-Koser
The C++ language invites object oriented programming. The Win32 API is entirely based on the C programming language. Writing software for the Windows platform always requires the use of the Win32 API. Many developers who prefer C++ and object oriented programming would wish to have appropriate C++ class libraries available, to give their software a consistent object oriented look and feel. The immense popularity of Java, and now .NET, is mostly based on the large number of classes that in fact make up the programming platform. Java and .NET application programmers simply write their applications utilizing these classes whereas, by contrast, C++ programmers first write an infrastructure and then use it to write the applications.

In this article, I will show you how to write a simple C++ class that wraps the Win32 thread related APIs. The Java and .NET platforms have already proposed some very good models, so we might as well make our model look similar. The advantage is that anyone familiar with Java or .NET can easily relate to it.

The threading models in Java as well as in .NET require that a thread object accepts a class method as its thread procedure. Here is an illustration, first in Java:

```java
// define class with a threadable method
public class MyObject implements Runnable {
    // the thread procedure
    public void run() {
        // TODO: put the code here
    }
}

MyObject obj = new MyObject();
Thread thread = new Thread(obj);
thread.start();
```

and then in C#:

```csharp
// define class with a threadable method
public class MyObject {
    // the thread procedure
    public void Run() {
        // TODO: put the code here
    }
}

MyObject obj = new MyObject();
Thread thread = new Thread(new ThreadStart(obj.Run));
thread.Start();
```

The models are remarkably similar. Java requires the threadable object to implement the Runnable interface, and .NET, in a way, requires the same thing because the Thread classes on either platform expect a threadable procedure to be of this form: public void run().
The Java specification is rather simple: just one simple interface exposing one simple method. The .NET specification is more sophisticated. The 'delegate' concept lends greater flexibility to the writing of multi-threaded programs. Here is an illustration:

```csharp
// create a threadable object
public class MyObject {
    // first thread procedure
    public void ThreadProc1() {
        // TODO:
    }

    // second thread procedure
    public void ThreadProc2() {
        // TODO:
    }
}

MyObject obj = new MyObject();

// create first thread
Thread thread1 = new Thread(new ThreadStart(obj.ThreadProc1));
thread1.Start();

// create second thread
Thread thread2 = new Thread(new ThreadStart(obj.ThreadProc2));
thread2.Start();
```

The .NET threading model offers more advantages. Any class method that is compatible with the ThreadStart delegate can be run as a thread procedure. And as the code snippet above illustrates, a single object instance can concurrently be accessed and manipulated by multiple threads. This is a very powerful feature.

We naturally prefer a C++ threading model to be as simple as that of Java and as flexible as that of .NET. Let us focus first on the Java-like simplicity. Here is a proposal:

```cpp
// define the interface
struct IRunnable {
    virtual void run() = 0;
};

// define the thread class
class Thread {
public:
    Thread(IRunnable *ptr) { _threadObj = ptr; }

    void start() {
        // use the Win32 API here
        DWORD threadID;
        ::CreateThread(0, 0, threadProc, _threadObj, 0, &threadID);
    }

protected:
    // Win32 compatible thread parameter and procedure
    IRunnable *_threadObj;

    static unsigned long __stdcall threadProc(void* ptr) {
        ((IRunnable*)ptr)->run();
        return 0;
    }
};
```

We can now write a multi-threaded program as elegantly as the Java folks can do.
```cpp
// define class with a threadable method
class MyObject : public IRunnable {
public:
    // the thread procedure
    virtual void run() {
        // TODO: put the code here
    }
};

MyObject *obj = new MyObject();
Thread *thread = new Thread(obj);
thread->start();
```

It is so simple because we have buried the Win32 API call in a wrapper class. The neat trick here is the static method defined as part of our Thread class. We have thus emulated the simpler Java Thread class.

The .NET Thread and ThreadStart approach is a little harder to emulate, but we can still realize it in a way by using pointers to class methods. Here is the example:

```cpp
// define class with a threadable method
class MyObject : public IRunnable {
    // pointer to a class method
    typedef void (MyObject::* PROC)();
    PROC fp;

    // first thread procedure
    void threadProc1() {
        // TODO: code for this thread procedure
    }

    // second thread procedure
    void threadProc2() {
        // TODO: code for this thread procedure
    }

public:
    MyObject() { fp = &MyObject::threadProc1; }

    void setThreadProc(int n) {
        if (n == 1)      fp = &MyObject::threadProc1;
        else if (n == 2) fp = &MyObject::threadProc2;
    }

    // the thread procedure
    virtual void run() { (this->*fp)(); }
};

MyObject *obj = new MyObject();

obj->setThreadProc(1);
Thread *thread1 = new Thread(obj);
thread1->start();

obj->setThreadProc(2);
Thread *thread2 = new Thread(obj);
thread2->start();
```

The actual threadable method run() now uses a pointer to a class method to run the appropriate thread procedure. That pointer must be correctly initialized before a new thread is started.

Wrapping the Win32 APIs into C++ classes is the preferred practice. The Java and .NET platforms provide us with well defined models. And by comparison, these models are so similar that defining C++ classes for a thread class, socket class, stream class, etc. should just be a matter of following the provided documentation. You may download the Thread class and try it out.
I have designed it to be as simple as possible, but you may enhance it by wrapping an additional number of thread related APIs, e.g. SetThreadPriority, GetThreadPriority, etc.
this is of course the source of much confusion in JavaScript. The reason being that this depends on how the function was invoked, not where the function was defined. JavaScript without this looks like a better functional programming language.

this losing context

Methods are functions that are stored in objects. In order for a function to know on which object to work, this is used. this represents the function's context.

this loses context in many situations. It loses context inside nested functions, and it loses context in callbacks.

Let's take the case of a timer object. The timer object waits for the previous call to finish before making a new call. It implements the recursive setTimeout pattern. In the next example, in nested functions and callbacks, this loses context:

```javascript
class Timer {
  constructor(callback, interval) {
    this.callback = callback;
    this.interval = interval;
    this.timerId = 0;
  }

  executeAndStartTimer() {
    this.callback().then(function startNewTimer() {
      this.timerId = setTimeout(this.executeAndStartTimer, this.interval);
    });
  }

  start() {
    if (this.timerId === 0) {
      this.executeAndStartTimer();
    }
  }

  stop() {
    if (this.timerId !== 0) {
      clearTimeout(this.timerId);
      this.timerId = 0;
    }
  }
}

const timer = new Timer(getTodos, 2000);
timer.start();

function getTodos() {
  console.log("call");
  return fetch("");
}
```

this loses context when the method is used as an event handler. Let's take the case of a React component that builds a search query.
In both methods, used as event handlers, this loses context:

```javascript
class SearchForm extends React.Component {
  handleChange(event) {
    const newQuery = Object.freeze({ text: event.target.value });
    this.setState(newQuery);
  }

  search() {
    const newQuery = Object.freeze({ text: this.state.text });
    if (this.props.onSearch) this.props.onSearch(newQuery);
  }

  render() {
    return (
      <form>
        <input onChange={this.handleChange} value={this.state.text} />
        <button onClick={this.search}>Search</button>
      </form>
    );
  }
}
```

There are many solutions for these issues: the bind() method, the that/self pattern, the arrow function. For more on how to fix this-related issues, take a look at What to do when "this" loses context.

this has no encapsulation

this creates security problems. All members declared on this are public.

```javascript
class Timer {
  constructor(callback, interval) {
    this.timerId = "secret";
  }
}

const timer = new Timer();
timer.timerId; // "secret"
```

No this, no custom prototypes

What if, instead of trying to fix this losing context and the security problems, we got rid of it altogether? Removing this has a set of implications. No this basically means no class, no function constructor, no new, no Object.create(). Removing this means no custom prototypes in general.

A Better Language

JavaScript is both a functional programming language and a prototype-based language. If we get rid of this, we are left with JavaScript as a functional programming language. That is even better. At the same time, without this, JavaScript offers a new, unique way of doing Object Oriented Programming without classes and inheritance.

Object Oriented Programming without this

The question is how to build objects without this. There will be two kinds of objects:

- pure data objects
- behavior objects

Pure Data Objects

Pure data objects contain only data and have no behavior. Any computed field will be filled in at creation. Pure data objects should be immutable. We need to Object.freeze() them at creation.
Behavior Objects

Behavior objects will be collections of closures sharing the same private state. Let's create the Timer object in a this-less approach.

```javascript
function Timer(callback, interval) {
  let timerId;

  function executeAndStartTimer() {
    callback().then(function makeNewCall() {
      timerId = setTimeout(executeAndStartTimer, interval);
    });
  }

  function stop() {
    if (timerId) {
      clearTimeout(timerId);
      timerId = 0;
    }
  }

  function start() {
    if (!timerId) {
      executeAndStartTimer();
    }
  }

  return Object.freeze({
    start,
    stop
  });
}

const timer = Timer(getTodos, 2000);
timer.start();
```

The timer object has two public methods: start and stop. Everything else is private. There are no this-losing-context problems, as there is no this.

Components without this

this may be required by many component frameworks, like React or Vue for example. In React, we can create stateless functional components, without this, as pure functions.

```javascript
function ListItem({ todo }) {
  return (
    <li>
      <div>{todo.title}</div>
      <div>{todo.userName}</div>
    </li>
  );
}
```

We can also create stateful components without this with React Hooks. Take a look at the next example:

```javascript
import React, { useState } from "react";

function SearchForm({ onSearch }) {
  const [query, setQuery] = useState({ text: "" });

  function handleChange(event) {
    const newQuery = Object.freeze({ text: event.target.value });
    setQuery(newQuery);
  }

  function search() {
    const newQuery = Object.freeze({ text: query.text });
    if (onSearch) onSearch(newQuery);
  }

  return (
    <form>
      <input type="text" onChange={handleChange} />
      <button onClick={search}>Search</button>
    </form>
  );
}
```

Removing arguments

If we get rid of this, we should also get rid of arguments, as they have the same dynamic binding behavior. Getting rid of arguments is pretty simple: we just use the new rest parameter syntax.
This time the rest parameter is an array object:

```javascript
function addNumber(total, value) {
  return total + value;
}

function sum(...args) {
  return args.reduce(addNumber, 0);
}

sum(1, 2, 3); // 6
```

Conclusion

The best way to avoid this-related problems is to not use this at all. JavaScript without this is a better language. And more specifically, JavaScript without this is a better functional programming language. We can build encapsulated objects, without using this, as collections of closures. With React Hooks we can create this-less stateful components.
Maths › Approximation › Regression › Linear

Calculates the linear regression parameters and evaluates the regression line at arbitrary abscissas.

Controller: CodeCogs. Interface: C++, HTML.

Class Linear

Linear regression is a method to best fit a linear equation (straight line) of the form y = a*x + b to a collection of points, where a is the slope and b is the Y intercept.

Example 1 - The following example displays the slope, Y intercept and regression coefficient for a certain set of 7 points.

```cpp
#include <codecogs/maths/approximation/regression/linear.h>
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
  double x[7] = { 1.5, 2.4, 3.2, 4.8, 5.0, 7.0, 8.43 };
  double y[7] = { 3.5, 5.3, 7.7, 6.2, 11.0, 9.5, 10.27 };

  Maths::Regression::Linear A(7, x, y);

  cout << "    Slope = " << A.getSlope() << endl;
  cout << "Intercept = " << A.getIntercept() << endl << endl;
  cout << "Regression coefficient = " << A.getCoefficient() << endl;

  cout << endl << "Regression line values" << endl << endl;
  for (double i = 0.0; i <= 3; i += 0.6)
  {
    cout << "x = " << setw(3) << i << "  y = " << A.getValue(i);
    cout << endl;
  }
  return 0;
}
```

Output:

```
    Slope = 0.904273
Intercept = 3.46212

Regression coefficient = 0.808257

Regression line values

x =   0  y = 3.46212
x = 0.6  y = 4.00469
x = 1.2  y = 4.54725
x = 1.8  y = 5.08981
x = 2.4  y = 5.63238
x =   3  y = 6.17494
```

Authors - Lucian Bentea (August 2005)

Source Code - Source code is available when you agree to a GP Licence or buy a Commercial Licence. Not a member, then Register with CodeCogs. Already a Member, then Login.

Members of Linear

Linear - Initializes the class by calculating the slope, intercept and regression coefficient based on the given constructor arguments.

Note - The slope should not be infinite.

GetValue - Evaluates the regression line at a given abscissa.

GetCoefficient - The regression coefficient indicates how well linear regression fits the original data. It is an expression of error in the fitting and, in the usual least-squares form, is defined as r = S_xy / sqrt(S_xx * S_yy), where S_xy, S_xx and S_yy are the centered cross-product and square sums of the data; when the denominator vanishes, r is considered to be equal to 1.
Linear Once - This function implements the Linear class for one-off calculations, thereby avoiding the need to instantiate the Linear class yourself.

Example 2 - The following graph fits a straight line to.
In contrast, type instantiation of a CLR generic happens at runtime, and things like speed and type-safe verifiability are the order of the day.

A short piece by Don Box demonstrating the difference between generics in C# and C++. Given the similarity in syntax, I wonder how MS will add .NET generics to C++.NET.

Posted to implementation by Dan Shappir on 5/12/03; 12:36:19 AM

I'm not sure I understand your question. Are you asking why the VM should know about compile-time genericity, e.g. C++ templates, or why the VM should support run-time genericity?

With regard to C++ templates, I don't think the VM should be aware of them at all. The question I raised was that presumably MS will want to use a similar syntax for .NET genericity in C++.NET as they do in C#. However, that will conflict with the existing C++ syntax for templates. Alternately, they could use the same facility, with the compiler determining which type instantiations could (and should) be accomplished at compile time and which must be deferred to run time. Given the previous thread on the evolution of C++, and its relatively glacial pace, it's interesting to see just how many changes to that PL MS will package into C++.NET.

You probably need to use some fancy casting or reflection to get this wrongful assignment, but still.

Java is (and I suppose C# will be) increasingly used as a systems programming language (e.g. target of code generators, higher-level models, run-time manipulation, etc.). I would suggest reflection is increasingly used, and so type-defeating assignment is increasingly possible. A good reason to have full type information available at run-time, since the time the failure is caught could be significantly after the time the error is made.

So you have the following options:

- Keep the C#/Java style type system and disallow anything that's not statically sound. Safe, but inconvenient.
- Allow such unsafe casts, and check for failure at runtime.
Convenient, but sometimes goes horribly wrong.
- Implement all of the language facilities you need to combine the best of both worlds: static safety and convenience. This is more difficult than it first seems, especially when you try to implement such things with performance similar to C#/Java. You need a type system and syntax that lets you express the (typewise) different concepts of "arrays of integers", "mutable arrays of integers", "arrays of mutable integers", "mutable arrays of mutable integers", and all of their subtyping rules and fine structure. This brings up the issues of structural vs nominal identity, etc. You have to go way off the beaten C#/Java path to solve these problems.

Here is an example with "safe covariant arrays", using the syntax of Nice:

```
<T> void printAll(T[] array) { ... }

// usage:
int[] intArray = [ 1, 2, 3 ];
printAll(intArray);
```

Actually, this is not exactly immutability: you can modify the elements, as long as you put the correct type. For instance, you can safely write a generic swap method:

```
<T> void swap(T[] array, int index1, int index2) {
    T tmp = array[index1];
    array[index1] = array[index2];
    array[index2] = tmp;
}
```

In these examples, T has no bound (or the implicit Object bound if you prefer), but it works in exactly the same way with a bound. So this idiom does not implement immutability. That would be the job of a 'const' specifier. But it gives the best of both worlds: type safety together with flexibility.

Java will surprise you sometimes.

I'm not sure about C#, but Java sure doesn't provide static type-safety for arrays. As an example, this program compiles perfectly fine, but throws an ArrayStoreException at runtime.

```java
public class UnsafeArray {
    public static void main(String[] args) {
        Object[] obj = new String[10];
        obj[0] = new Object();
    }
}
```

C++, on the other hand, treats arrays much more safely - that code would never compile.
Because you can express the same generic function as Daniel wrote, you also have the same flexibility.

On another note, I'm disappointed (but not surprised) to see that C# is going to treat generics in almost the exact same manner as Java. I personally have a hard time ascribing the term "generic" to the virtual-subtyping generics systems in Java, C# (and apparently Eiffel too). I know I'm not on my own here; for example, PolyJ lets you express type parameter bounds through where constraints, and C++ programmers just laugh when shown Java's generics system. While languages like Smalltalk don't have an explicit generics system, they have a lot of the same power, without the static type-checking. (I find the structural conformance in C++ templates and message handling in Smalltalk to be surprisingly similar.) How do other languages handle generics (i.e. Ada, Haskell, etc...)? Do other people find virtual-subtyping generics acceptable?

It does not seem so strange that this would pass the type-checker; it seemed stranger that it raises an exception. But when I asked my officemate about this he gave a variant where aliasing would render it unsafe:

```java
public class UnsafeArray2 {
    public static void main(String[] args) {
        String[] x = new String[10];
        Object[] obj = x;
        obj[0] = new Object();
        String s = x[0] + x[0]; // kaboom!
    }
}
```

"While languages like Smalltalk don't have an explicit generics system, they have a lot of the same power, without the static type-checking."

Like I always say: you can do anything in a dynamically typed language; that's the problem.

"How do other languages handle generics (i.e. Ada, Haskell, etc...)?"

Haskell and ML have always had 'generics', but there they are called parametric datatypes. I don't think any language has generics as powerful or convenient as Haskell's.

It seems obvious to me that this raises an exception because an Object is not a String. Here's an equivalent representation of the same problem.
```java
Collection<String> strList = new ArrayList<String>();
strList.add(new Object());
```

Java is smart enough to recognize this as a type error at compile time, though.

If you look through my posting history, you'll find that I strongly agree with you in matters of static typing. I was simply trying to survey and draw comparisons between the different means of producing generic code for different systems. In particular, I was trying to raise the aspect of structural conformance as a differentiating feature of generics systems.

"Haskell and ML have always had 'generics', but there they are called parametric datatypes. I don't think any language has generics as powerful or convenient as Haskell's."

I wasn't trying to imply that neither Ada nor Haskell had generics - quite the contrary. I mentioned both languages because I knew that both supported generics, but I have no idea how they support them. In general, I'm interested in knowing which languages support generics via virtual subtyping and which languages support generics via some other mechanism (i.e. structural conformance, where constraints, etc...). It sounds like Haskell might be fun to investigate further.

No, but String is a subtype of Object. An easy-to-imagine semantics for:

```java
Object[] obj = new String[10];
```

is that each String gets upcast to an Object either when the assignment is done or when each element is read from the array. However, the first approach does not work because of aliasing, and the second does not work because something like:

```java
obj[0] = new Object();
```

would require being able to downcast. But what is 'obvious' to one person is not necessarily obvious to another, so pardon my thick-headedness...

Your 'equivalent' example, however, looks totally 'inequivalent' to me. This is clearly unsafe because a client using strList expects a String.
In your first example, a client of obj, however, only expects an Object, and String supports all the operations that Object does, so the problem is not obvious.

Maybe I don't understand something, then: How can there be any issue of generics in a dynamically typed language? Generics address a static typing problem.

"I wasn't trying to imply that neither Ada nor Haskell had generics - quite the contrary."

Yes, I know; I was trying to stimulate your interest.

"In general, I'm interested in knowing which languages support generics via virtual subtyping and which languages support generics via some other mechanism (i.e. structural conformance, where constraints, etc...)"

Haskell supports them via parametric polymorphism (and qualified types, which is the name for the mechanism behind type classes, but I avoid type classes whenever possible).

I don't think it's thick-headedness at all... This issue can lead to many subtle problems that only very thorough testing will identify. The upshot of the whole thing is that it's never really safe to assign to an array that you didn't create yourself, unless it's an array of a final class (e.g., a class that is guaranteed to have no subclasses). So, unless I'm willing to prove to myself that this is OK, and incur the risk that I might be wrong (or that another programmer might use this in an unanticipated way), I should never write anything like:

```java
void replaceAll(SomeNonFinalClass[] array) {
    for (int i = 0; i < array.length; i++)
        array[i] = new SomeNonFinalClass();
}
```

Some might argue that this is just one more argument against mutable values...

Ah, good point!

"Some might argue that this is just one more argument against mutable values..."

No way! I never use destructive update, but even I wouldn't stoop to such an argument. This is a problem with the type system, not the dynamics.
Forgive me, the 'equivalent' example should have been:

```java
Collection<Object> objList = new ArrayList<String>();
objList.add(new Object());
```

Again, this causes a compiler error. It assumes that there's supposed to be some implicit conversion from a container of one type to a container of another type.

"How can there be any issue of generics in a dynamically typed language? Generics address a static typing problem."

I don't know. I guess it depends upon what "generics" and dynamic typing mean to a person. Do all dynamically typed languages check type based on the same level of structural conformance? I've had the experience where some languages like JavaScript (at least some implementations) will allow you to pass more parameters to a function than what it declares. In one sense, that makes the functions more generic.

"I was trying to stimulate your interest."

Goal achieved. :)

"Haskell supports them via parametric polymorphism"

Parametric polymorphism doesn't seem like a very precise way of describing a generics mechanism, considering that both C++ templates and Java generics fall under that umbrella, and they have practically nothing in common with one another.

Perhaps it's just my history with C++ that made this seem obvious to me. Apparently, because questions about this are commonly asked as people stumble across the typing errors, it's well documented in C++ circles.

"No way! I never use destructive update"

Well, I guess we can't agree on everything. I was never very fond of programming in Lisp, because it felt that programming without side-effects was orders of magnitude more constraining than any static typing I might ever face. (Not that I didn't know that destructive update was possible in Lisp - I just felt that it was the idiomatic way to program in that language.)

Mutable values can be made to work safely, but a safe implementation doesn't look quite like C++, C#, or Java.
The pure functional view of mutables is that the "heap" is passed in and out of every function; all code is executed eagerly in a well-defined order (i.e. applicative order evaluation); there is a "ptr(t)" type constructor describing the type of pointers to items of type t; the only heap operations are "new pointer from value", "read from pointer", and "write to pointer"; and in the presence of subtyping, for any pair of unequal types t and u, the types ptr(t) and ptr(u) are disjoint, even when t and u are in a subtyping relationship.

In this framework, the mysterious notions of lvalues, rvalues, mutable variables, null pointers, value identity vs referential identity, and mutable array coercions all go away. Obviously one wouldn't implement a real runtime by passing a heap in and out of every function, but if you take this conceptual view of mutability, then it's easy to understand exactly how C# and Java differ from this model and how doing so breaks static type safety.

It's an interesting project to start with a safe system like the above and see, purely with syntactic sugar and typesafe program transformations, how close you can get to the "look and feel" and performance of Java and C# mutability. It turns out you can get pretty close, and that the things you can't quite mimic turn out to be unsafe or not well-defined.
After 26 canary releases and 3.4 million downloads, we are proud to introduce the production-ready Next.js 7. Finally, we are excited to be sharing this news on the all-new Nextjs.org.

One of Next.js's primary goals is to provide the best production performance with the best possible developer experience. This release brings many significant improvements to the build and debug pipelines.

Thanks to webpack 4, Babel 7, and many improvements and optimizations in our codebase, Next.js now boots up to 57% faster during development. Thanks to our new incremental compilation cache, changes you make to the code will now build 40% faster. As a bonus, when developing and building you will now see better realtime feedback thanks to webpackbar.

Rendering accurate and helpful errors is critical to a great development and debugging experience. Until now, we would render the error message and its stack trace. Moving forward, we make use of react-error-overlay to enrich the stack trace. As a bonus, react-error-overlay makes it easy to open your text editor by just clicking on a specific code block.

Since its very first release, Next.js has been powered by webpack for bundling your code and re-using the rich ecosystem of plugins and extensions. We're excited to announce that Next.js is now powered by the latest webpack 4, which comes with numerous improvements and bugfixes, among them support for .mjs source files.

Another new feature is WebAssembly support; Next.js can even server-render WebAssembly, here is an example.

Note: this upgrade is fully backwards-compatible. However, if you are using custom webpack loaders or plugins via next.config.js, you might have to upgrade them.

With webpack 4, a new way of extracting CSS from bundles was introduced, called mini-css-extract-plugin.
@zeit/next-css, @zeit/next-less, @zeit/next-sass, and @zeit/next-stylus are now powered by mini-css-extract-plugin. The new version of these Next.js plugins solves 20 existing issues related to CSS imports. As an example, importing CSS in dynamic import()s is now supported:

// components/my-dynamic-component.js
import './my-dynamic-component.css'

export default function MyDynamicComponent() {
  return <h1>My dynamic component</h1>
}

// pages/index.js
import dynamic from 'next/dynamic'

const MyDynamicComponent = dynamic(import('../components/my-dynamic-component'))

export default function Index() {
  return (
    <div>
      <MyDynamicComponent />
    </div>
  )
}

A major improvement is that you are no longer required to add the following to pages/_document.js:

<link rel="stylesheet" href="/_next/static/style.css" />

Instead, Next.js automatically injects the CSS file. In production, Next.js also automatically adds a content hash to the CSS URL, so that your end-users never get stale versions of the file and you gain the ability to introduce immutable permanent caching. In short, all you have to do to support importing .css files in your Next.js project is to register the withCSS plugin in your next.config.js:

const withCSS = require('@zeit/next-css')
module.exports = withCSS({
  /* my next config */
})

Next.js has had support for dynamic imports through next/dynamic since version 3. As early adopters of this technology, we had to write our own solution for handling import(). As a consequence, Next.js was beginning to diverge from the support that webpack later introduced for it, and some import() features webpack has since added were missing.
For example, naming and bundling certain files together manually was not possible:

import(/* webpackChunkName: 'my-chunk' */ '../lib/my-library')

Another example is using import() without it being wrapped in the next/dynamic module. Starting with Next.js 7 we no longer touch the default import() behavior. This means you get full import() support out of the box. This change is fully backwards-compatible as well. Making use of a dynamic component remains as simple as:

import dynamic from 'next/dynamic'

const MyComponent = dynamic(import('../components/my-component'))

export default function Index() {
  return (
    <div>
      <MyComponent />
    </div>
  )
}

What this example does is create a new JavaScript file for my-component and only load it when <MyComponent /> is rendered. Most importantly, if it is not rendered, the <script> tag is not included in the initial HTML document payload.

To further simplify our codebase and make use of the excellent React ecosystem, in Next.js 7 next/dynamic was re-written to make use of react-loadable behind the scenes (with some minor modifications). This also introduces two great new features for dynamic components: timeout and delay.

Next.js 6 introduced Babel 7 while it was still in beta. Since then the stable version of Babel 7 has been released, and Next.js 7 is now using this version. For a full list of changes, you can refer to Babel's release notes. Some of the main features relate to @zeit/next-typescript, the fragment shorthand <>, babel.config.js, and overrides. If you do not have a custom Babel configuration in your Next.js project, there are no breaking changes. If you do have a custom Babel configuration, you have to upgrade the respective custom plugins/presets to their latest version. In case you are upgrading from a version below Next.js 6, you can run the excellent babel-upgrade tool.
In addition to upgrading to Babel 7, the Next.js Babel preset (next/babel) now defaults to setting the modules option to commonjs when NODE_ENV is set to test. This configuration option was often the only reason for creating a custom .babelrc in a Next.js project:

{
  "env": {
    "development": { "presets": ["next/babel"] },
    "production": { "presets": ["next/babel"] },
    "test": {
      "presets": [["next/babel", { "preset-env": { "modules": "commonjs" } }]]
    }
  }
}

With Next.js 7 this becomes:

{ "presets": ["next/babel"] }

At this point, the .babelrc can be removed, as Next.js will automatically use next/babel when there is no Babel configuration.

As Next.js pre-renders HTML, it wraps pages into a default structure with <html>, <head>, <body> and the JavaScript files needed to render the page. This initial payload was previously around 1.62kB. With Next.js 7 we've optimized the initial HTML payload; it is now 1.5kB, a 7.4% reduction, making your pages leaner. The main ways we have reduced size relate to __next-error, __NEXT_DATA__, nextExport, and assetPrefix.

In Next.js 5 we introduced assetPrefix support, a way to make Next.js automatically load assets from a certain location, usually a CDN. This option works great if your CDN supports proxying: you request a URL like<buildid>/pages/index.js, and typically the CDN checks if it has the file in its cache, or otherwise requests it directly from the origin. Proxying assets is precisely how the Edge Network works. However, some solutions require pre-uploading a directory directly into the CDN. The problem in doing this is that Next.js's URL structure did not match the folder structure inside the .next folder.
For example, our earlier URL:

<buildid>/pages/index.js
// mapped to:
.next/page/index.js

With Next.js 7 we have changed the directory structure of .next to match the URL structure:

<buildid>/pages/index.js
// mapped to:
.next/static/<buildid>/pages/index.js

While we do recommend using the proxying type of CDN, this new structure allows users of a different type of CDN to upload the .next directory to their CDN.

We are excited to introduce styled-jsx 3: the CSS-in-JS solution included by default in Next.js is now ready for React Suspense. One thing that was often unclear was how to style a child component if that component is not part of the current component scope, for example, if you included a child component that needed specific styles only when used inside the parent component:

const ChildComponent = () => (
  <div>
    <p>some text</p>
  </div>
)

export default function Index() {
  return (
    <div>
      <ChildComponent />
      <style jsx>{`
        p {
          color: black;
        }
      `}</style>
    </div>
  )
}

The above code, which tries to select the p tag, does not work, because styled-jsx styles are scoped to the current component; they do not leak into child components. One way to get around this was using the :global method, removing the prefix from the p tag. However, this introduces a new issue, which is that styles do leak across the page.

In styled-jsx 3 this issue has been solved by introducing a new API, css.resolve, which will generate the className and the <style> tags (the styles property) for the given styled-jsx string:

import css from 'styled-jsx/css'

const ChildComponent = ({ className }) => (
  <div>
    <p className={className}>some text</p>
  </div>
)

const { className, styles } = css.resolve`
  p {
    color: black;
  }
`

export default function Index() {
  return (
    <div>
      <ChildComponent className={className} />
      {styles}
    </div>
  )
}

This new API allows for transparently passing through custom styling to child components.
Since this is a major release for styled-jsx, there is one breaking change that improves bundle sizes if you are using styled-jsx/css. In styled-jsx 2 we would generate a "scoped" and a "global" version of external styles; even when only the "scoped" version was used, we would still include the "global" version of those external styles. With styled-jsx 3, global styles have to be tagged with css.global instead of css, so that styled-jsx can optimize bundle size. For all changes, please refer to the release notes.

Starting from Next.js 7 we now support the new React context API between pages/_app.js and page components. Previously it was not possible to use React context between pages on the server side. The reason for this was that webpack keeps an internal module cache instead of using require.cache; we've written a custom webpack plugin that changes this behavior to share module instances between pages. In doing so we not only allow usage of the new React context, but also reduce Next.js's memory footprint when sharing code between pages.

Together with the Next.js 7 release, we are launching a completely redesigned nextjs.org. The blog post you are currently reading is already part of the new blog section on nextjs.org. This blog will be the new home for communication related to Next.js, for example, new version announcements.

As the number of apps using Next.js continuously grows, we needed a way to show all these beautiful apps in one overview. Meet the new /showcase page. This new showcase allows us to continuously add new apps built with Next.js. We invite you to visit /showcase to get inspired, or submit your app that uses Next.js!

Ever since its first release, Next.js has been used in everything from Fortune 500 companies to personal blogs. We're very excited to see the growth in Next.js adoption. The Next.js community has nearly 2000 members. Join us!
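The content-hash cache-busting that Next.js applies to CSS URLs (described earlier in this post) can be illustrated with a small Python sketch. The helper name and file names here are hypothetical and not part of Next.js itself:

```python
import hashlib

def hashed_asset_name(name: str, content: bytes, length: int = 8) -> str:
    """Return a cache-busting file name such as style.1a2b3c4d.css.

    The hash is derived from the file's content, so any change yields a
    new URL. Clients can therefore cache each URL immutably and will
    never be served a stale version of the file.
    """
    digest = hashlib.sha256(content).hexdigest()[:length]
    stem, dot, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"

print(hashed_asset_name("style.css", b"p { color: black; }"))
print(hashed_asset_name("style.css", b"p { color: red; }"))
```

Because the name is a pure function of the content, unchanged files keep their URL across deployments, which is exactly what makes immutable, permanent caching safe.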
https://nextjs.org/blog/next-7
Problem: Using Haskell, you want to get the current year, month and day (for the UTC time zone) as integral values.

Solution: You need the time package for this task. Use cabal install time to install it. Our code is similar to this HaskellWiki entry; however, it provides a standalone runnable program (use runghc <filename>.hs to execute) which is more readable for beginners.

UTC time: Note that the UTC time might differ from your local time depending on your timezone.

import Data.Time.Clock
import Data.Time.Calendar

main = do
    now <- getCurrentTime
    let (year, month, day) = toGregorian $ utctDay now
    putStrLn $ "Year: " ++ show year
    putStrLn $ "Month: " ++ show month
    putStrLn $ "Day: " ++ show day

Local time: It is also possible to get your current local time using your system's default timezone:

import Data.Time.Clock
import Data.Time.Calendar
import Data.Time.LocalTime

main = do
    now <- getCurrentTime
    timezone <- getCurrentTimeZone
    let zoneNow = utcToLocalTime timezone now
    let (year, month, day) = toGregorian $ localDay zoneNow
    putStrLn $ "Year: " ++ show year
    putStrLn $ "Month: " ++ show month
    putStrLn $ "Day: " ++ show day

Daylight saving time is also taken into account using this method.
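For comparison only (this is not part of the original Haskell recipe), the same UTC-versus-local distinction can be sketched with Python's standard library:

```python
from datetime import datetime, timezone

# UTC time -- like utctDay above, this may differ from your local
# date depending on your timezone.
utc_now = datetime.now(timezone.utc)
print("Year:", utc_now.year)
print("Month:", utc_now.month)
print("Day:", utc_now.day)

# Local time -- analogous to utcToLocalTime with getCurrentTimeZone;
# daylight saving time is taken into account automatically.
local_now = datetime.now().astimezone()
print("Local:", local_now.year, local_now.month, local_now.day)
```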
https://techoverflow.net/2014/06/13/get-current-year-month-day-in-haskell/
Description: We need to enable opening an IndexReader with termInfosIndexDivisor set in solrconfig. We need to enable setting the termIndexInterval in SolrIndexConfig.

Committed revision 814160. Moved the termInfosIndexDivisor up to the abstract class. Implemented Hoss's "expose writer" option. Added unit tests for the IndexReaderFactory.

I'm not against exposing something package-private for tests - anyone that jumps the fence to use that should know what they are getting themselves into. For a test I agree it's not worth exposing IW publicly; however, there should be a simple way to access it as a package-protected variable. A great example of a public Lucene API being exposed from Solr that can easily break the system is getWrappedReader. IR is as canonical to Lucene as IW, and calling close on IR will also cause numerous errors for users. Why is it public, when it's only used internally by Solr? The Solr policy as it's being described isn't making sense to me.

An answer to the second question? ("do we really need solrconfig-termindex.xml") We probably don't want all the tests to have different behavior.

Can you describe the worst case scenario you imagine will happen if IndexWriter is exposed? Sure, someone who thinks they know what they are doing closes the IW when it shouldn't be closed, causing exceptions, etc., and emails to solr-user, etc., wasting the community's time. However, the impetus is not on me to defend why it shouldn't be exposed; it's on you to show why it is proper to take something that is currently private and make it public just to pass a test. If it needs to be public for other use cases, fine, but generally speaking, I don't think variables, etc. should be made public just for testing purposes. That's bad OOD. People may start to use it and count on it, and we will have to needlessly support that.
A public class in Lucene will still occasionally break back-compat, and comes with maintenance/deprecation as well. Just because it's public in Lucene, that doesn't mean it should be public in Solr. For each little thing it's arguably never that big a deal - but a good policy keeps a bunch of little things from creeping in. -1 on exposing anything just for tests, because it's not publicly exposed in Solr and exposing it just for the sake of testing doesn't seem wise.

Can you describe the worst case scenario you imagine will happen if IndexWriter is exposed? Why would we go through the effort? IW is a public Lucene class.

This should do the trick...

public class ExposeWriterHandler extends DirectUpdateHandler2 {
  public ExposeWriterHandler() {
    super(h.getCore());
  }

  public IndexWriter getWriter() {
    forceOpenWriter();
    return writer;
  }
}

IndexWriter writer = (new ExposeWriterHandler()).getWriter();

...since all that matters is that you get a writer using the configs from the core. If I'm missing something, then the next obvious solution would be to change the <updateHandler class="..."/> to point at a concrete public class created for this test.

BTW: do we really need solrconfig-termindex.xml? Why not just make these changes to the solrconfig.xml that TestConfig already uses?

Looks like solrconfig-termindex.xml was not included in the patch. Also, not sure about exposing getIndexWriter() on the DUH2 just for testing purposes. Is there another way to get at testing this?
- Test case placed into TestConfig - Created solrconfig-termindex.xml - IndexReader.getTermInfosIndexDivisor is deprecated and probably can be turned on again in Lucene trunk - termInfosIndexDivisor is set in StandardIndexReaderFactory as an optional parameter - termIndexInterval is obtained from SolrIndexConfig and set in SolrIndexWriter - Needs a test case Bulk close for Solr 1.4
https://issues.apache.org/jira/browse/SOLR-1296
I am trying to export some audio files with Sound Properties > Export for ActionScript. However, for some reason the "Class" box is greyed out and inaccessible. I've tried out a number of different files: .mp3, .wav, .aif, but Class is still greyed out. What is the problem? Is there some sort of workaround?

What version of Flash are you using? What's the bitrate of the sound? I assume it is in the library and you are doing the following: Right Click -> Properties -> ActionScript -> trying to check the "Export for ActionScript" checkbox, and it is grayed out? What you are doing should definitely work as far as I know, because I am pretty sure the sound doesn't even import into Flash if the bitrate is too high - but still something to check.

I'm not sure where you see Sound Properties, but you should be right-clicking your sound in the library panel > click Properties > tick Export for ActionScript, and the Class and Base Class fields should be filled in and editable.

I just found out that the reason it was not editable is because the document had been set up for ActionScript 1.0 and 2.0, not 3.0. I tried to follow this video on how to export sound to ActionScript. I've been trying to find the ActionScript 1.0/2.0 version of that workflow to set up a sound: volume, play, frame range, play increment, etc. Is it essentially the same coding?

***********

import flash.media.SoundTransform
import flash.media.SoundMixer

SoundMixer.stopAll()

// The section below imports the sound you just "Exported for ActionScript"
// Replace "sound" with the "Class" name you put in the ActionScript tab for the audio clip
// ** If there is more than one sound, then copy & paste the line below and rename "sound1" and "sound" for each new audio.
// ** Do the same process with the ".play" line at the bottom of the program
var sound1 = new sound();

// The section below controls the audio volume. Range is between 0-1.
// Example: .5 >>>> 50% volume
var myvolume:SoundTransform = new SoundTransform(.5);

// The line below controls playback:
// (<starting frame within audio track>, <frame increment or how fast the track plays>, <connects to Volume - ignore it>)
sound1.play(0, 1, myvolume);

***********

No, the coding will be substantially different for AS2. To start, you'll need to use a linkage id (eg, soundID) for your sound, not a class name. Then you'll need to apply attachSound to a sound instance:

var s:Sound = new Sound();
s.attachSound("soundID");
s.start();

And then the differences continue.

Thanks for the reply! I am wondering if it is possible to use ActionScript to properly stop ALL the sound in a scene when a button is pressed. Both a "forward" and a "back" button are set up, each in their own respective layers. There are also images that have been set up as buttons that produce a sound when pressed. However, even when I put the code below in the forward/back buttons, the scene audio from those image buttons continues until finished. Note that the sound files are on the timeline on their own layers within the symbols.

on(release) {
    stopAllSounds();
}

Are only the sounds that are imported/linked into ActionScript able to be stopped by the "stop" functions?

stopAllSounds() will stop all sounds, no matter how they started, that are playing at the time it executes. However, it does not stop a sound from starting and continuing to play after it executes.
http://forums.adobe.com/thread/1164579
Debate:

- Poor metadata in the central repository - if dependencies are fixed based on user feedback, then the repository metadata will become much cleaner
- POM.xml as the source of project metadata - XML and the verbosity that pom.xml requires make the build difficult to parse, and the ability to specify the project metadata in other formats such as Groovy would ease this

Raible also added:.

Some other quotes:

- boomtown15 -- I can't believe that people give up on Maven so easily. It is actually really simple if you follow the conventions. Ok, that first project setup is a bit rough and I agree the learning curve is a bit steep, but I believe the benefits outweigh this.
- Xavier Hanin -- So, am I a Maven advocate? Yes and no. I like the idea behind Maven, being able to build a project in a standard way is really neat. What I dislike is the lack of documentation, flexibility and robustness, and that there's just too much black magic.
- Les -- [...] it never ceases to amaze me at the blow back against maven. Maven has a higher initial learning curve, but the payoff is more simplicity as things get more complex.
- Tech Per -- While I do agree with you, that maven can be complex, is over engineered and lacks good documentation, I still have a feeling, that maven is the best thing around. I use maven each and every day, and yes, I bitch about it. But I have also tried to go back to ant (which I also know quite well), ... and it was a PAIN. Then, I saw how much I get from maven.
- Kevin Menard -- The SNAPSHOT system just doesn't work, IMHO. When used properly, it can be a great way for a project to let early adopters test things. All too often, though, it's simply a crutch preventing a formal release.
- Jay -- [Howard Lewis Ship],.
- Jon Scott Stevens -- [...] take this as a note...
we have been interviewing a lot of people for java developer positions and not one single person that we have interviewed has had a truly good thing to say about maven.
- Matt Raible -- I don't think anyone thinks that Maven is a bad idea - it's just a poor implementation of a good idea.
- Rick Hightower -- Every day I curse Maven. Every day I praise Maven. I hate it and love it all of the time. Although it could be better, it is a far cry from using Ant. Since I travel a lot and consult/develop a lot, I have seen so many snarly Ant build scripts. At least with maven, I have to just tame one beast and one philosophy. With Ant, it is a random beast with many heads.
- Bryan Taylor -- There's certainly some innovative ideas in Maven, but there's also some things that scare me. It's funny that Maven's reliance on "convention over configuration" easily predates the Ruby on Rails fad, but the latter gets all the credit for inventing it. There's a reason for that.

What do you think?

Umm... "Dan" Brown? by Don Brown

It is not that difficult by Vikas Hazrati

Documentation is the key by Luca Botti.

Mostly works but.. by Kristof Jozsa.

Great tool by Ivo Limmen -- I admit that Maven has its flaws too. Nothing is 100% bug free. There are a lot of plugins from Maven 1 I would like to see ported to Maven 2 (vainstall for instance).

Re: Umm... "Dan" Brown? by Alex Popescu

Re: Umm... by Don Brown

What about a decent GUI by Cristian Pascu.

Re: It is not that difficult by Evan Wor

Javen by Thomas Mueller --

import org.javen.*;

public class Build {
    public static void main(String[] args) {
        setAppName("acme");
        build();
    }
}

Re: Javen by Tom Adams.

Re: Javen by Thomas Mueller

Re: Javen by Martin Gilday -- buildr is for Ruby. Buildr is not for Ruby. It is written in Ruby to build Java projects.
It uses the Maven directory structure and repositories.

Re: Javen by Alex Popescu

Re: Umm... "Dan" Brown? by Ryan Slobojan -- Please accept my apologies for the typo - I've updated the article to correct that, and your nemesis "Dan Brown" no longer has the credit. :) Thanks, Ryan Slobojan

Too rigid by Ken DeLong -- As for documentation, take a look at the assembly plugin. Every single goal has the same description: "Assemble an application bundle or distribution from an assembly descriptor." Great. Have fun everyone. Maven makes the hard things easy and the easy things hard.

Re: Too rigid by Shane Witbeck

Critical mass by Andrew Clifford

Re: Mostly works but.. by Ivan Luzyanin -- Here is the link: cargo.codehaus.org/Maven2+plugin#Maven2plugin-g...

multiple Environments by Arash Bizhanzadeh

Maven has two things wrong: by Dominic Mitchell --

- mvn
- mvn help
- mvn --help | grep -i goal
- mvn --help | grep -i phase
- man mvn
- mvn pakage

The errors are completely worthless. And of course, when you do figure out what the hell you're doing, it only gets better. Screens and screens full of idiotic crap that you don't care about, errors go flying past (yet don't appear to be a problem and don't stop the build). It's a complete shambles. Secondly, it's the failures. When something goes wrong with maven, it's usually near-impossible to tell what the actual problem is. This is all very frustrating, as I can see what Maven's trying to do for me, and I want that. But as a tool, it's bloody awful.

Re: multiple Environments by Roman Pichlik

Re: Documentation is the key by Renato Cavalcanti.

Re: multiple Environments by Arash Bizhanzadeh

Re: multiple Environments by Erick Dovale

Re: multiple Environments by Arash Bizhanzadeh

Mailing list is helpful by Evan Worley -- maven.apache.org/mail-lists.html

Re: Javen by Eelco Hillenius

I don't get it by Eelco Hillenius

Re: Umm... by Michael Bushe

Is there any viable alternative for complex projects? by Frank Hardisty.
Re: Mostly works but.. by Kristof Jozsa

Re: multiple Environments by Erick Dovale

Re: Umm... "Dan" Brown? by Musachy Barroso

Two words: Use Artifactory by Ryan Gardner

Re: Critical mass by Rainer Eschen -- part: icefusion.googlecode.com/

Lacks simplicity by venkataramana madugund
http://www.infoq.com/news/2008/01/maven-debate/
I need to loop over a UTF-8 string and get each character of the string. There might be different types of characters in the string, e.g. numbers with a length of one byte, Chinese characters with a length of three bytes, etc. I looked at this post and it can do 80% of the job, except that when the string has 3-byte Chinese characters before 1-byte numbers, it will see the numbers as also having 3 bytes and print the numbers as 1** where * is gibberish. To give an example, if the string is '今天周五123', the result will be: 今 天 周 五 1** 2** 3** where * is gibberish. However, if the string is '123今天周五', the numbers print out fine. The minimally adapted code from the above-mentioned post is copied here:

#include <iostream>
#include <cstring>   // strlen, memset
#include <cstdint>   // uint32_t
#include "utf8.h"
using namespace std;

int main() {
    string text = "今天周五123";
    char* str = (char*)text.c_str();            // utf-8 string
    char* str_i = str;                          // string iterator
    char* end = str + strlen(str) + 1;          // end iterator
    unsigned char symbol[5] = {0, 0, 0, 0, 0};
    cout << symbol << endl;
    do {
        uint32_t code = utf8::next(str_i, end); // get 32 bit code of a utf-8 symbol
        if (code == 0)
            continue;
        cout << "utf 32 code:" << code << endl;
        utf8::append(code, symbol);             // initialize array `symbol`
        cout << symbol << endl;
    } while (str_i < end);
    return 0;
}

Answer: Insert memset(symbol, 0, sizeof(symbol)); before utf8::append(code, symbol); - the symbol buffer is never cleared between iterations, so the trailing bytes of a previous 3-byte character remain behind a later 1-byte character, which is exactly the 1** gibberish you see.

If this for some reason still doesn't work, or if you want to get rid of the library, recognizing code points is not that complicated:

string text = "今天周五123";
for (size_t i = 0; i < text.length();) {
    int cplen = 1;
    if ((text[i] & 0xf8) == 0xf0)      cplen = 4;
    else if ((text[i] & 0xf0) == 0xe0) cplen = 3;
    else if ((text[i] & 0xe0) == 0xc0) cplen = 2;
    if ((i + cplen) > text.length())
        cplen = 1;
    cout << text.substr(i, cplen) << endl;
    i += cplen;
}

With both solutions, however, be aware that multi-code-point glyphs exist, as well as code points that can't be printed alone.
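The lead-byte masking used in the answer above can be expressed in Python as well, which makes it easy to check that a string mixing 3-byte CJK characters with 1-byte digits is segmented correctly. This is a sketch of the same logic, not a replacement for a real UTF-8 decoder:

```python
def utf8_chars(data: bytes):
    """Split UTF-8 bytes into per-character chunks by classifying each
    code point's lead byte, mirroring the C++ loop above."""
    i, chunks = 0, []
    while i < len(data):
        b = data[i]
        if (b & 0xF8) == 0xF0:
            cplen = 4  # 11110xxx -> start of a 4-byte sequence
        elif (b & 0xF0) == 0xE0:
            cplen = 3  # 1110xxxx -> start of a 3-byte sequence
        elif (b & 0xE0) == 0xC0:
            cplen = 2  # 110xxxxx -> start of a 2-byte sequence
        else:
            cplen = 1  # ASCII byte (or a stray continuation byte)
        if i + cplen > len(data):
            cplen = 1  # truncated tail: avoid reading past the end
        chunks.append(data[i:i + cplen])
        i += cplen
    return chunks

chunks = utf8_chars("今天周五123".encode("utf-8"))
print([c.decode("utf-8") for c in chunks])  # → ['今', '天', '周', '五', '1', '2', '3']
```

Each of the four CJK characters comes back as a 3-byte chunk and each digit as a 1-byte chunk, regardless of their order in the string.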
https://codedump.io/share/J7COQDdF9q0k/1/c-iterate-utf-8-string-with-mixed-length-of-characters
Guys, this item has been on my TODO list for quite some time, and it has to do with constructs like this one (from libavutil/mem.h):

------------------------------------------------------------------------
#ifdef __GNUC__
#define DECLARE_ALIGNED(n,t,v) t v __attribute__ ((aligned (n)))
#else
#define DECLARE_ALIGNED(n,t,v) __declspec(align(n)) t v
#endif
-------------------------------------------------------------------------

The problem I have with the above is that we're not testing for the right thing here. The fact that GCC is being used is irrelevant, since all we really care about is whether the compiler supports the 'aligned' attribute. But that's just the first layer this particular onion has. The second one is the fact that whether GCC-style aligned attributes are supported or not is also irrelevant. All we're looking for is ANY kind of C99 language extension that would allow us to declare aligned variables (which, by the way, makes this particular case very broken for any compiler that doesn't claim to be __GNUC__ but also doesn't happen to be Microsoft Visual Studio). So, in order to address the second issue, I propose that we create a dedicated header file, ffmpeg/c99_extensions.h, where all things like DECLARE_ALIGNED would be declared. In order to address the first issue, I propose that we don't test for particular compilers even within ffmpeg/c99_extensions.h, but delegate the testing to the ./configure script, so that ANY compiler that supports, let's say, GCC-style attributes would be treated as a first-class citizen without resorting to bogus pre-defines of __GNUC__.

Thoughts?

Thanks,
Roman.
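Delegating the detection to ./configure would then reduce the header to writing out whichever variant the feature test found. The Python sketch below is purely illustrative - the flag names and generated-header layout are hypothetical, not from FFmpeg - but it shows the idea of keying on detected capabilities rather than on compiler-identity macros like __GNUC__:

```python
def emit_declare_aligned(has_gcc_aligned_attr: bool, has_declspec_align: bool) -> str:
    """Emit the DECLARE_ALIGNED macro for a generated c99_extensions.h,
    based on what configure-time feature tests detected."""
    if has_gcc_aligned_attr:
        body = "#define DECLARE_ALIGNED(n,t,v) t v __attribute__ ((aligned (n)))"
    elif has_declspec_align:
        body = "#define DECLARE_ALIGNED(n,t,v) __declspec(align(n)) t v"
    else:
        # No alignment extension detected: fall back to a plain declaration.
        body = "#define DECLARE_ALIGNED(n,t,v) t v"
    return "/* generated by configure -- do not edit */\n" + body + "\n"

print(emit_declare_aligned(True, False))
```

With this approach, any compiler that passes the configure-time feature test for GCC-style attributes gets the __attribute__ variant, regardless of whether it defines __GNUC__.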
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-July/028798.html
pyRserve 0.9.1 - A Python client to remotely access the R statistics package via network

What It Does

pyRserve is a library for connecting Python to an R process (an excellent statistics package) running Rserve as an RPC connection gateway. Through such a connection, variables can be get and set in R from Python, and R functions can be called remotely. In contrast to rpy or rpy2, the R process does not have to run on the same machine; it can run on a remote machine, and all variable access and function calls will be delegated there through the network. Furthermore - and this makes everything feel very pythonic - all data structures will automatically be converted from native R to native Python and numpy types, and back.

Status of pyRserve

The question behind that usually is: can pyRserve already be used for real work? Well, pyRserve has been used at various companies in production mode for over three years now. So it is pretty stable and many things work as they should. However, it is not complete yet - there are a few loose ends which should still be improved.

Changes

- V 0.9.1 (2017-05-19)
  - Removed a bug on some Python3 versions
  - Added proper support for S4 objects (thanks to flying-sheep)
  - Added support for Python3 unittests on travis (thanks to flying-sheep)
- V 0.9.0 (2016-04-11)
  - Full support for data objects larger than 2**24 bytes
  - Maximum size of message sent to Rserve can now be 2**64 bytes
- V 0.8.4 (2015-09-06)
  - Fixed missing requirements.txt in MANIFEST.in
  - Fixed bug in installer (setup.py)
- V 0.8.3 (2015-09-04)
  - Fixed exception catching for Python 3.4 (thanks to eeue56)
  - Some pep8 cleanups
  - Explicit initialization of a number of instance variables in some classes
  - Cleanup of import statements in test modules
  - Allow for message sizes greater than 4GB coming from R server
- V 0.8.2 (2015-07-11)
  - Added support for S4 objects (generated when e.g.
creating a db object in R)
- V 0.8.1 (2014-07-17)
  - Fixed errors in the documentation, updated outdated parts
  - For unittesting, run Rserve on a different port from the default 6311 to avoid clashes with a regular Rserve running on the same server
  - Fixed bug when passing an R function as argument to a function call (e.g. to sapply); added unittest for this
- V 0.8.0 (2014-06-26)
  - Added support for remote shutdown of Rserve (thanks to Uwe Schmitt)
  - Added support for Out-Of-Bounds (OOB) messages (thanks to Philipp alias flying-sheep)
- V 0.7.3 (2013-08-01)
  - Added missing MANIFEST.in to produce a complete tgz package (now includes docs etc)
  - Fixed bug on x64 machines when handling integers larger than 2**31
- V 0.7.2 (2013-07-19)
  - Tested with Python 3.3.x, R 3.0.1 and Rserve 1.7.0
  - Updated documentation accordingly
  - Code cleanup for pep8 (mostly)
  - Marked code as production stable
- V 0.7.1 (2013-06-23)
  - Added link to new GitHub repository
  - Fixed URL to documentation
- V 0.7.0 (2013-02-25)
  - Fixed problem when receiving very large result sets from R (added support for XT_LARGE header flag)
  - Correctly translate multi-dimensional R arrays into numpy arrays (preserve axes the right way); removed the 'arrayOrder' keyword argument as a consequence. THIS IS AN API CHANGE - PLEASE CHECK AND ADAPT YOUR CODE, ESPECIALLY IF YOU USE MULTI-DIM ARRAYS!!
  - Support for conn.voidEval and conn.eval, and new 'defaultVoid' keyword argument in the connect() function
  - Fixed bug in receiving multi-dimensional boolean (logical) arrays from R
  - Added support for multi-dimensional string arrays
  - Added support for the XT_VECTOR_EXPR type generated e.g.
via “expression()” in R (will return a list with the expression content as list content) - windows users can now connect to localhost by pyRserve.connect() (omitting ‘localhost’ parameter) - V 0.6.0 (2012-06-25) - support for Python3.x - Python versions <= 2.5 no more supported (due to Py3 support) - support for unicode strings in Python 2.x - full support complex numbers, partial support for 64bit integers and arrays - suport for Fortran-style ordering of numpy arrays - elements of single-item arrays are now translated to native python data types - much improved documentation - better unit test coverage - usage of the deprecated conn(<eval-string>) is no more possible - pyRserve.rconnect() now also removed - V 0.5.2 (2011-12-02) - Fixed problem with 32bit integers being mistakenly rendered into 64bit integers on 64bit machines - V 0.5.1 (2011-11-22) - Fixed improper DeprecationWarning when evaluating R statements via conn.r(…) - V 0.5 (2011-10-03) - Renamed pyRserve.rconnect() to pyRserve.connect(). The former still works but shows a DeprecationWarning - String evaluation should now only be executed on the namespace directly, not on the connection object anymore. The latter still works but shows a DeprecationWarning. - New kw argument atomicArray=True added to pyRserve.connect() for preventing single valued arrays from being converted into atomic python data types. - V 0.4 (2011-09-20) - Added support for nested function calls. E.g. conn.r.t.test( ….) now works. - Proper support for boolean variables and vectors - V 0.3 (2010-06-08) - Added conversion of more complex R structures into Python - Updated documentation (installation, manual) V 0.2 (2010-03-19) Fixed rendering of TaggedArrays V 0.1 (2010-01-10) Initial version Supported Platforms - This package has been mainly developed under Linux, and hence should run on all standard unix platforms, as well - as on Mac OS X. pyRserve has also been successfully used on Win32 machines. 
Unit tests have been run on the Linux and Mac OS X side; however, they might just work fine on Win32 too. It has been tested with Python 2.6, 2.7.x, 3.2, and 3.3. The latest development has been tested with R 3.0.1 and Rserve 1.8.0, but it should also work with R 2.13.1 and newer in that series. Rserve is supported from version 0.6.6 on.

License

pyRserve has been written by Ralph Heinkel () and is released under the MIT license.

Quick Installation

Make sure that Numpy is installed (version 1.4.x or higher). Then from your unix/windows command line run:

pip install pyRserve

For manual installation, download the tar.gz or zip package. After unpacking, cd into the pyRserve directory and run python setup.py install from the command line. Actually, pip install pyRserve should install numpy if it is missing.

Source Code Repository

pyRserve is now hosted on GitHub at.

Documentation

Documentation can be found at.

Support

For discussion of pyRserve issues and getting help, please use the Google newsgroup available at.

Missing Features

- Authentication is implemented in Rserve but not yet in pyRserve

- Author: Ralph Heinkel
- Documentation: pyRserve package documentation
- License: MIT license
- Platform: unix, linux, cygwin, win32
- Categories:
  - Development Status :: 5 - Production/Stable
  - Environment :: Console
  - Intended Audience :: Developers
  - License :: OSI Approved :: MIT License
  - Operating System :: Microsoft :: Windows
  - Operating System :: POSIX
  - Programming Language :: Python
  - Programming Language :: Python :: 2
  - Programming Language :: Python :: 3
  - Topic :: Scientific/Engineering :: Information Analysis
  - Topic :: Scientific/Engineering :: Mathematics
  - Topic :: Software Development :: Libraries
  - Topic :: System :: Networking
- Package Index Owner: ralph.heinkel
- DOAP record: pyRserve-0.9.1.xml
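The integer-size fixes noted in the changelog (0.7.3's handling of integers larger than 2**31, and 0.5.2's 32-bit/64-bit mix-up) both come down to a plain R integer travelling over Rserve's wire protocol as a 4-byte value. A stdlib-only sketch of that boundary (illustrative only, not pyRserve code):

```python
import struct

# A signed 32-bit integer ("<i") holds at most 2**31 - 1; larger values
# need a 64-bit ("<q") or floating-point representation instead.
assert struct.calcsize("<i") == 4   # 4-byte signed int
assert struct.calcsize("<q") == 8   # 8-byte signed int

too_big = 2 ** 31
try:
    struct.pack("<i", too_big)      # does not fit into 4 bytes
    overflowed = False
except struct.error:
    overflowed = True

assert overflowed                   # 2**31 overflows the 32-bit encoding
```

The same value packs fine as an 8-byte integer, which is why the fix was to switch to 64-bit handling on x64 machines.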
https://pypi.python.org/pypi/pyRserve/
{-# LANGUAGE ... #-}

data Assume
    = AssumeDirty
        -- ^ Assume that all rules need rebuilding, equivalent to 'storedValue' always
        --   returning 'Nothing'. Useful to undo the results of 'AssumeClean', for
        --   benchmarking rebuild speed and for rebuilding if untracked dependencies
        --   have changed. This assumption is safe, but may cause more rebuilding
        --   than necessary.
    | AssumeClean
        -- ^ /This assumption is unsafe, and may lead to incorrect build results in
        --   this run, and in future runs/. Assume and record ... 'storedValue'
        --   operations.
      deriving (Eq,Ord,Show,Data,Typeable,Bounded,Enum)

-- | ...
data ShakeOptions = ShakeOptions
    {shakeFiles :: FilePath
    ,shakeThreads :: Int
    ,shakeVersion :: String
        -- ^ Defaults to @"1"@. The version number of your build rules. Change the
        --   version number to force a complete rebuild, such as when making
        --   significant changes to the rules that require a wipe. The version number
        --   should be set in the source code, and not passed on the command line.
    ,shakeVerbosity :: Verbosity
        -- ^ Defaults to 'Normal'. What level of messages should be printed out.
    ,shakeStaunch :: Bool
        -- ^ Defaults to 'False'. Operate in staunch mode, where building continues
        --   even after errors, similar to @make --keep-going@.
    ,shakeReport :: Maybe FilePath
    ,shakeFlush :: Maybe Double
        -- ^ Defaults to @'Just' 10@. How often to flush Shake metadata files in
        --   seconds, or 'Nothing' to never flush explicitly. It is possible that on
        --   abnormal termination (not Haskell exceptions) any rules that completed
        --   in the last 'shakeFlush' seconds will be lost.
    ,shakeAssume :: Maybe Assume
        -- ^ Defaults to 'Nothing'. Assume all build objects are clean/dirty, see
        --   'Assume' for details. Can be used to implement @make --touch@.
    ,shakeAbbreviations :: [(String,String)]
    ,shakeTimings :: Bool
        -- ^ Defaults to 'False'. Print timing information for each stage at the end.
    }

shakeOptions :: ShakeOptions
shakeOptions = ShakeOptions ... (Just 10) Nothing [] False True False
    (const $ return ()) (const $ BS.putStrLn . BS.pack) -- try and output atomically using BS

fieldsShakeOptions =
    ["shakeFiles", "shakeThreads", "shakeVersion", "shakeVerbosity", "shakeStaunch", "shakeReport"
    ,"shakeLint", "shakeFlush", "shakeAssume", "shakeAbbreviations", "shakeStorageLog"
    ,"shakeLineBuffering", "shakeTimings", "shakeProgress", "shakeOutput"]
tyShakeOptions = mkDataType "Development.Shake.Types.ShakeOptions" [conShakeOptions]
conShakeOptions = mkConstr tyShakeOptions "ShakeOptions" fieldsShakeOptions Prefix

unhide x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 = ...

instance Data ShakeOptions where
    gunfold k z _ = k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ k $ z unhide
    toConstr ShakeOptions{} = conShakeOptions
    dataTypeOf _ = tyShakeOptions

instance Show ShakeOptions where
    show x = "ShakeOptions {" ++ intercalate ", " inner ++ "}"
        where
            inner = zipWith (\x y -> x ++ " = " ++ y) fieldsShakeOptions $ gmapQ f x

            f x | Just x <- cast x = show (x :: Int)
                | Just x <- cast x = show (x :: FilePath)
                | Just x <- cast x = show (x :: Verbosity)
                | Just x <- cast x = show (x ::
http://hackage.haskell.org/package/shake-0.10.6/docs/src/Development-Shake-Types.html
Sorry if this is a double post, because I thought I posted this before, but I came back to see if anyone replied and the thread isn't here, so I think I was just hallucinating ;)

Anyway, a nice addition to DrPython would be if Ctrl+F4 closed tabs. This is fairly standard in tabbed environments.

The bug has to do with spacing. DrPython asked if I wanted to convert a file to unix spacing/newlines as opposed to windows spacing. So I said yes. Then my program kept giving me mysterious errors and I couldn't figure out what was wrong. Thinking it might be a bug, I opened the file in IDLE -- and noticed that the spacing was different. DrPython was not displaying the spacing that the file actually had (at least not how the interpreter saw it). Here's the python file:

The difference in how IDLE and DrPython display the file is in the class constructor:

[code]
def __init__(self):
    # Define synonyms
    self.depart = self.leave = self.walk = self.run = self.exit = self.enter = self.og = self.go
    for x in DirectionList:
        # You can just type in a direction to go there
        exec "self." + x + " = self.go"
    self.take = self.grab = self.get
    self.dump = self.throw = self.drop
    self.lookat = self.study = self.examine
    self.fight = self.punch = self.kick = self.hurt = self.attack
    self.Inventory = self.Items = self.ListAllItems
    self.SaveGame = self.save
[/code]

In IDLE, only the exec line is in the for loop (as it should be), while in DrPython all the self assignments are also in the loop. I'm pretty sure this is a DrPython bug and not an IDLE bug, because I opened the file in a few other text editors and firefox and they all show the self assignments correctly being outside the loop.

Daniel Pozmanter 2004-10-24

You always know you are in for a good post when it starts with a reference to hallucinating ;)

You can set the shortcut for "Close" to "Ctrl + F4".

So onto the spacing and the file: This is not a drpython bug. You have mixed spacing. The for loop is indented with a tab. The exec statement has a single tab, then 4 spaces. The next line is indented with 8 spaces. This is screwy.

You will note that the file has mixed indentation. DrPython shows this in the statusbar by default. It also provides mechanisms for switching the file to space or tab indentation (You can also set this in preferences).

P.S. Are you sure you don't want to classify this as a bug? I would think that above all you would want DrPython's display of the spacing to correlate to the way the interpreter is going to look at it (as IDLE seems to in this case).

Daniel Pozmanter 2004-10-25

I am not inclined to classify this as a bug. If you set tab size to 8 in prefs, you will find the file looks exactly the same as it does in idle. This is why mixed indentation is not a good mix with python code. I could make tab size 8 by default, and spaces the default indentation type. I will post this to the open discussion, and let people weigh in.

Cheers, Dan

Daniel Pozmanter 2004-10-31

Cool beans. BTW, on the Ctrl+F4 thing: I've set the hotkey and it works great, but I still think it's strange that this isn't a default. It is in every other tabbed app I've worked with (firefox, eclipse, etc.)

Daniel Pozmanter 2004-11-05

This could easily be made a default for the next version. (I don't think it is currently used for anything else).

Daniel Pozmanter 2004-11-08

I am not going to make it default. On linux, this is used for workspace switching. People can change it easily enough.
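The mixed-indentation issue in this thread is exactly why a tab-size setting changes what the file appears to mean: a tab plus spaces lines up one way at tab size 8 (IDLE's view) and another way at tab size 4. Python 3 later made such files a hard error rather than guessing; a small illustration (not the poster's original file):

```python
# One body line indented with a tab, the next with 8 spaces: at tab size 8
# they look identical, but the indentation is ambiguous. Python 3 refuses
# to guess and raises TabError (a subclass of IndentationError/SyntaxError).
src = "if True:\n\tx = 1\n        y = 2\n"
try:
    compile(src, "<mixed-indent>", "exec")
    raised = None
except TabError as exc:
    raised = type(exc).__name__

assert raised == "TabError"
```

Python 2, by contrast, silently treated a tab as 8 columns for indentation, which is why the interpreter and IDLE agreed with each other but not with DrPython's tab-size-4 display.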
http://sourceforge.net/p/drpython/discussion/283804/thread/eb24b209
In .htaccess there are basically two things you can do regarding url's.

Rewriting url's

You can rewrite url's internally. If the user visits yoursite.com/forum/ you can rewrite the url internally to use yoursite.com/forum/index.php. The user will see yoursite.com/forum in their address bar. In an .htaccess file, this would be written as:

RewriteRule ^forum /forum/index.php [L]

RewriteRule indicates that it should rewrite the url. ^forum matches the forum part in yoursite.com/forum/ and /forum/index.php says that it should rewrite the url to /forum/index.php. Last [L] means that if this RewriteRule matches, it should stop matching rules.

Redirecting url's

You can redirect an url too. This will send a header to the client to load a different page instead. If the user visits yoursite.

Seen a few people using disqus as a forum and I'm actually planning to do so myself. There's which can be used as a commenting system and also as a forum, but I don't quite like a few things about it. Regarding setting disqus up as a forum board, you need to make a small app that would allow your users to create pages / topics / posts, whatever you want to call them. The forum ID is the disqus_shortname used in the embed script. I found this in the documentation: Looks up a forum by ID (aka short name)

In php the syntax is:

} elseif( /* conditions */ ) {

Although if there are no conditions, simply do

} else {

UPDATE: See comments

if( date('Ymd') == date('Ymd', strtotime($mytime)) ){
    $day_name = 'This day';
} else {
    $day_name = 'Another day';
}
echo $day_name;

In retrospect, paying closer attention to the sql statements and experimenting a bit could've solved the problem.

In your create_topic.php on line 33:

<input type="hidden" name="cid" value"<?php echo $cid; ?>" />

You are missing the "=":

<input type="hidden" name="cid" value="<?php echo $cid; ?>" />

This should do it.

Start by integrating your legacy database and then build the Admin site.
You'll see how the model information is available to the Admin app and every other app you write. The key is to import models in your apps.

I cannot see your data so it's a guess: use only ==0, not ===0

if (logged_in() === true) {
    if ($mythisql1['locked'] == 0) {
?>

or, if $mythisql1['locked'] is a string:

if (logged_in() === true) {
    if ($mythisql1['locked'] == '0') {
?>

Having multiple applications (forum, wiki, ...) access the same database is not likely to have any effect on CPU usage, but there are other drawbacks:

Table names used by applications might have conflicts (many of them might have a "session" or "posts" table). Some web apps have a feature to prefix table names with a string, like "wp_session" and "wp_posts" for example, to get around conflicts.

Yes, it's less secure. When one of the applications has a security hole and someone manages to access its database, the data of all applications is compromised.

Multiple databases are likely to be easier to manage when doing application upgrades, backups, removing or adding applications to the mix. Accidentally break one database, and you'll break all apps.

To get the applications use the same authent

AFAIK, that information is not available via the API. You can get a list of the resources followed by the authenticated user via the Following API, but that does not work for any other users (I assume due to privacy issues). You could fetch the info you are looking for from the database via SQL statements, but that is not officially supported, as the databases could/will change with every product update. In the IBM Connections Forum at developerworks, there is an example for Communities.

You have to perform this operation within your application; MongoDB does not allow joins. See the following two links for more information: What you'll want to do is query on the first document, get the id in your application, then query on your second document.
If this is a common use case for you then you should consider embedding your subdocument into the main document if possible. Otherwise, MongoDB might not be the right tool for you; there's nothing wrong with using SQL if it's the right tool for the job.

Assuming you are displaying the posts ordered by the id, you can do this:

SELECT COUNT(*) FROM posts WHERE topic_id = 8976 AND id <= 15

This will give you the position of the post with the id 15, for example. Now you can check what page this position is on:

$page = floor($position / 10);

So in this example, post 15 is on page 2.

$sql = "SELECT post_id, post_title FROM forum_post WHERE forum_id=1 AND post_type='o'";

should be

$sql = "SELECT post_id, post_title FROM forum_post WHERE forum_id= ? AND post_type='o'";

Now, you will bind your parameter to this query.

Why not update the underlying data source via SQL?

Option Compare Database
Option Explicit

Private Sub Form_Load()
    DoCmd.RunSQL "UPDATE myTable SET myField = myField + 'Test'"
End Sub

Some examples:

1. Choose drop down // select option ALL

HtmlSelect select = (HtmlSelect) currentPage.getElementByName("index");
HtmlOption option = select.getOptionByValue("All");
select.setSelectedAttribute(option, true);

2. Enter text

HtmlInput queryInput = currentPage.getElementByName("query");
queryInput.setValueAttribute(searchValue);

3. Click button

HtmlSubmitInput submitBtn = currentPage.getFirstByXPath("//input[@value='Search']");
currentPage = submitBtn.click();

I think you could do a check in there of:

if ($total_pages > 2) { $total_pages = 2; }

$c = $this->db->selectAssoc( $this->db->Select('*', 'forum_categories ,forum_thems', "`forum_categories`.`lang` = '" . $l . "' AND `forum_thems`.
`id_categories` = `forum_categories`.`id`"));

$total_pages = count($c) / 25;
if ($total_pages > 2) { // limit to two pages
    $total_pages = 2;
}

$Pages:";

if (empty($_GET['p'])) { $_GET['p'] = 1; }

The most popular markup for forum posts is something like: BBCode, Wiki format, Markdown, or a toolkit like

SELECT ... , MAX(m.date_posted) AS latest_reply ... GROUP BY t.thread_id ORDER BY latest_reply DESC ...

But why is date_posted a TEXT? Shouldn't it be a datetime, or maybe an int (if a timestamp)? Because MySQL will never be able to optimize running MAX on a text column, I would suggest using MAX(m.message_id) AS latest_reply instead, which, as messages are probably inserted in date order, should be equivalent.

Edited to add: The query written out in full...

$query = "
    SELECT t.thread_id, title, MAX(m.message_id) AS latest_reply
    FROM forum_threads AS t
    LEFT JOIN forum_messages AS m ON t.thread_id = m.thread_id
    WHERE t.child_id = ".$board_id."
    GROUP BY t.thread_id
    ORDER BY latest_reply DESC
    LIMIT ".$starting.", ".$this->user['results_per_page'];

echo "<META HTTP-EQUIV='Refresh' Content='0; URL=/category.php?id=" . $topic_cat . "'>";

That should work. You needed to put double quotation-marks outside the . $topic_cat . variable, like so: " . $topic_cat . "

This should do it.

$query = "SELECT COUNT(*) FROM forum_messages A, forum_threads B
    WHERE A.thread_id = B.thread_id
    AND A.message_id != B.first_msg_id
    AND B.board_id = " . mysqli_real_escape_string($dbc, $board_id) . " ";
$rs = mysqli_query($dbc, $query);
list($count) = mysqli_fetch_array($rs);
echo $count;

Add the /forum location in nginx:

location /forum {
    root [path to www]/forum;
    try_files $uri $uri/ [index location];
}

I haven't used phpBB3 before, so I don't know the exact index location for it.

Replace

if ($request_uri ~* "^/qa/") {
    rewrite ^/qa/(.*)$ /qa/index.php?qa-rewrite=$1 last;
}

with

location ~ /qa/(.*)?
{
    try_files $uri /qa/index.php?qa-rewrite=$1&$query_string;
}

Also, the block

if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php?$1 last;
}

is better moved inside the / location and converted into try_files:

location / {
    index index.php index.html;
    try_files $uri /index.php?$request_uri;
}

If you are still having trouble, please tell me.

It's a jQuery object; that's why .replace won't work. You need to add .val() to get the actual value, then you have to set it as well:

var value = $("#message").val();
value = value.replace(str, "[b]" + str + "[/b]");
$("#message").val(value);

In your :topics resource, you didn't define the index method; that's why you won't be able to get to the topics list or index page. Try to change your route like this:

resources :topics, only: [:index, :show]

or remove the only attribute from resources; it will automatically include all your methods by default.

resources :topics

Also, if you have relationships between models, you should define nested routes in your routes file. For example, you can define them like this (change them accordingly):

resources :users
resources :sessions, only: [:new, :create, :destroy]

resources :forums do
  resources :topics do
    resources :microposts, only: [:new, :create, :destroy]
  end
end

In the above case, you can access your forums like this: http:/

OK, I got it. Foreach was the answer. And what I did was:

<?php foreach ( $this->sections as $section ) : $x++; ?>
<?php if ($x == 1) { echo '<li class="active">'; } else { echo '<li>'; } ?>
<a id="sectab<?php echo intval($section->id) ?>" href="#sect<?php echo intval($section->id) ?>" data-
<?php echo $this->escape($section->name); ?>
</a>
</li>
<?php endforeach; ?>

Of course, before the foreach I added the variable $x = 0, which helped me a lot.

You're looking for a form field named id, $id = mss($_GET['id']);, but the only select elements you create are cat or sub_cat. Either name one of those id or change what you're looking for.
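The pagination answer above (COUNT the posts with id <= the target post, then divide by posts per page) can be condensed into a single helper. Note the original's floor($position / 10) yields a 0-based page index; with 1-based page numbers, ceil gives the stated "post 15 is on page 2" directly. A quick sketch (page size of 10 assumed, as in the answer):

```python
import math

def page_of(position, per_page=10):
    # position = result of:
    #   SELECT COUNT(*) FROM posts WHERE topic_id = ? AND id <= ?
    # (posts ordered by id, as the answer assumes)
    return math.ceil(position / per_page)   # 1-based page number

assert page_of(15) == 2    # the example from the answer: post 15 -> page 2
assert page_of(10) == 1    # an exactly full first page is still page 1
assert page_of(11) == 2
```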
You can try this, hope it helps:

function get_all_categories() {
    $data = array();
    $get_categories = $this->db->get('categories');
    $cat = $get_categories->result_array();
    foreach( $cat as $key => $each ){
        $rs = $this->db->where('topic_cat_id', $each['cat_id'])->order_by('topic_id', 'desc')->get('Topics', 1)->row_array();
        $data[$key]['cat'] = $each;
        $data[$key]['top'] = $rs;
    }
    echo "<pre>"; print_r( $data );
    return $data;
}

Try this way:

SELECT count(*)
FROM forum_threads thr
JOIN forum_topics top ON thr.topic_id = top.topic_id
JOIN forum_categories fc ON top.cat_id = fc.cat_id
WHERE thr.user_id = $user_id
AND fc.role_id IN ( 0, 24, 55, 888, .... list of user roles ... )

In the list of user roles (last condition) always put 0 as the first number, then the rest of his roles.

This problem sounds complicated but it actually isn't. You can select all categories based on a given group ID by selecting from "permissions" and joining in "boards" based on the given board_id and then further joining in "categories" based on the category_id of the boards. This would return each category as often as it is indirectly listed in "permissions". To avoid that, you usually group by the category ID so that each distinct category is only included once in the result set. To exclude categories with no readable boards, you now select only those boards via the "permissions" table:

SELECT c.*
FROM permissions AS p
JOIN boards AS b ON b.board_id = p.board_id
JOIN categories AS c ON c.category_id = b.category_id
WHERE p.group_id = 100 AND p.can_read = 1
GROUP BY c.category_id

I've r
You can find out more, and download it, at: There is a directory of plugins at: Adblock Plus is perfect for this kind of thing. Rather than trying a userscript or extension to block an external file, just install Adblock Plus and then add a filter for. If you insist on a userscript approach, then it appears that your existing code is fighting a race condition. That <link> node is part of a document that is stored in a div and is apparently manipulated by javascript (AJAX). Sometimes your script fires in time to catch a copy and sometimes it doesn't. This is exacerbated by the fact that Chrome scripts fire at unpredictable times, by default. It might be sufficient just to add "run_at": "document_end" to your manifest.json. (Which ironically, fires before the default script execution time.) However, to be sure, us Because you only download the first page of content. Just use a loop to donwload all pages: import urllib import urlparse from bs4 import BeautifulSoup for i in xrange(3): url = "" % i pageHtml = urllib.urlopen(url) soup = BeautifulSoup(pageHtml) for a in soup.select("div.productListingTitle a[href]"): try: print (a["href"]).encode("utf-8","replace") except: print "no link" if you do'nt know the count of pages, you can import urllib import urlparse from bs4 import BeautifulSoup i = 0 while 1: url = "" % i pageHtml = urllib.urlopen(url) soup = BeautifulSoup(pageHtml) has_more = 0 for a It might be rich, but it is not the OS X "style" - I have not seen any installer do that in ages. If you would like to do a tutorial or orientation, I would launch that after the installation business has been completed. BTW, full screen etc are also discouraged on Windows. Both are possible. If your deployment package includes those products, then you might have them installed, even if they are not available per your license (SID file). That depends on what was selected at installation, and when the original deployment package was created. 
You (or whomever manages installations) could look at the deployment package in SAS Deployment Manager to find out. To answer the question I think you're asking, you could certainly have one deployment package that includes the modules, and use the appropriate SID file to only activate the desired products on the desired machine(s) while still using the same deployment package. You could also install with a SID file that does not contain them and then run a new SID that does contain them later, so long as your deploym

The inner class in C++ is not like it is in Pascal. It's just placed in the "namespace" of the outer class, but nothing else is changed. It sees only its own members, and the instance is unrelated to the outer one. If you want the relation, you must pass an instance of the outer somehow, say in the constructor. Then you can access members through that pointer or reference.

OS X applications are usually distributed as self-contained app bundles that don't require an installer. It seems that Mono provides a package maker that helps to create .app bundles from Mono projects:

You need to compile to an object file using the -c switch for gcc. This performs compilation but does not link your program, meaning that you can have a reference (such as the someFunctionDefinedInX) which is not yet resolved. Later, when you need to perform final compilation and link in that reference, use gcc myFirstUnlinkedObj.o mySecondUnlinkedObj.o. This will look at all of the object files, find all of the undefined references, and look them up in the other object (or source, if any) files. It then links them all together in your final executable. You can find a good explanation of what an object file is in this question.

A makefile which could perform these steps for you could look like this (minus your install target):

x-as-func.o:
	gcc -c x-as-func.c

my-exe-file: x-as-fu
http://www.w3hello.com/questions/-asp-forum-software-
git-stash - Stash the changes in a dirty working directory away

apply
    Like pop, but do not remove the state from the stash list.

clear
    Remove all the stashed states. Note that those states will then be subject to pruning, and may be difficult or impossible to recover.

drop [<stash>]
    Remove a single stashed state from the stash list. When no <stash> is given, it removes the latest one, i.e. stash@{0}.

create
    Create a stash (which is a regular commit object) and return its object name, without storing it anywhere in the ref namespace.

DISCUSSION

When you are in the middle of something, you learn that there are upstream changes that are possibly relevant to what you are doing. When your local changes do not conflict with the changes in the upstream, a simple git pull will let you move forward. ... You can use git stash save --keep-index when ...

AUTHOR

Written by Nanako Shiraishi <nanako3@bluebottle.com>
http://www.kernel.org/pub/software/scm/git/docs/git-stash.html
Recently I was tasked to convert all backend-generated timestamps from the default UTC to our users' device timezone. This is my process of how I encountered some issues along the way and how I solved my ticket.

Flowchart

This is the flow I implemented:

1. Get the user's UTC offset in hours.
2. Send the backend timestamp & offset into a conversion function that returns the converted and formatted string to the frontend.

The function in step 2 would work like this:

params:
String: dateString
Int: offset

1. Parse the date string dateString.
2. Convert the data into a JS Date object.
3. Get the current hours of the date object by using the JS Date built-in getHours() method.
4. Set new hours on the Date object by using the JS Date built-in setHours() method, where we pass in the current hours and add the offset passed into the function.
5. Format the string for the frontend.
6. Return the new converted timestamp.

Let's see that happen in code:

Building the conversion function

The function would be called like this:

const convertedTimeStamp = formatTimeByOffset(utcStringFromBE, offset)

And the function I built based on the steps above looks like this:

export const formatTimeByOffset = (dateString, offset) => {
  // Params:
  // How the backend sends me a timestamp
  // dateString: on the form yyyy-mm-dd hh:mm:ss
  // offset: the amount of hours to add.

  // If we pass anything falsy return empty string
  if (!dateString) return ''
  if (dateString.length === 0) return ''

  // Step 1: Parse the backend date string
  // Get parameters needed to create a new date object
  const year = dateString.slice(0, 4)
  const month = dateString.slice(5, 7)
  const day = dateString.slice(8, 10)
  const hour = dateString.slice(11, 13)
  const minute = dateString.slice(14, 16)
  const second = dateString.slice(17, 19)

  // Step 2: Make a JS date object with the data
  const dateObject = new Date(`${year}-${month}-${day}T${hour}:${minute}:${second}`)

  // Step 3: Get the current hours from the object
  const currentHours = dateObject.getHours()

  // Step 4: Add the offset to the date object
  dateObject.setHours(currentHours + offset)

  // Step 5: Stringify the date object, replace the T with a space and slice off the seconds.
  const newDateString = dateObject
    .toISOString()
    .replace('T', ' ')
    .slice(0, 16)

  // Step 6: Return the new formatted date string with the added offset
  return `${newDateString}`
}

I tested it out and boom, it works when I pass in random offsets. The time converts properly even when time goes over midnight etc.; that is taken care of by the JS Date setHours() method. Awesome, now I just need to get the user offset and we are done.

Not quite

JS Date

My initial thought was that I would simply use this method, according to the docs: the JS Date getTimezoneOffset() method.

const now = new Date()
const utcTimeOffset = now.getTimezoneOffset() / 60;

NOTE: Divided by 60 because the method returns the offset in minutes.

Gave the wrong time

However, changing my timezone to the west coast of America (for instance) gave me a converted timestamp that was wrong by 1 hour!

Daylight Savings Time
However, since we are not running in the browser we need to figure out a different way to determine if the user is affected by daylight savings time events. Doing this manually will be tricky because not all countries use DST and when they do, they don't use the same date and time when it goes into power. So what do we do? Let's figure out the timezone of the user somehow first, even though we are not running in a browser we are running on a mobile device. There must be a way of getting the time of the device and use that to our advantage. Getting the mobile device timezone Every time I want to use a native module in react native, like using the camera, I turn to React native community on Github Fortunately for us the community has a native module which is called react-native-community/react-native-localize I went in and read the docs and found the following method: getTimeZone() it is described like this: getTimeZone() Returns the user preferred timezone (based on its device settings, not on its position). console.log(RNLocalize.getTimeZone()); // -> "Europe/Paris" Alright, good. I installed the package into my project by doing the usual: yarn add react-native-localize cd ios && pod install cd .. yarn run ios I ran the example above: console.log(RNLocalize.getTimeZone()); // -> "Asia/Shanghai" Ok great if worse comes to worst I can make some kind of lookup table where I keep track of when different timezones go into DST etc. But there's no need for that, so let's bring in the moment time-zone library Moment Timezone The moment timezone library can take the timezone value generated above and return the UTC offset. Neat! 
Installation:

yarn add moment-timezone

Combined with getting the device timezone above, we can use it like this:

import React, {useState, useEffect} from 'react';
import {View, Text} from 'react-native';
import {formatTimeByOffset} from '../helpers/formatTimeByOffset';
import * as RNLocalize from 'react-native-localize';
import moment from 'moment-timezone';

function Component() {
  const [timeToDisplay, setTimeToDisplay] = useState('');

  const backEndTimeStamp = '2001-04-11 10:00:00';

  // get device timezone, e.g. -> "Asia/Shanghai"
  const deviceTimeZone = RNLocalize.getTimeZone();

  // Make a moment of right now, using the device timezone
  const today = moment().tz(deviceTimeZone);

  // Get the UTC offset in hours
  const currentTimeZoneOffsetInHours = today.utcOffset() / 60;

  useEffect(() => {
    // Run the function as we coded above.
    const convertedToLocalTime = formatTimeByOffset(
      backEndTimeStamp,
      currentTimeZoneOffsetInHours,
    );

    // Set the state or whatever
    setTimeToDisplay(convertedToLocalTime);
  }, []);

  return (
    <View
      style={{
        height: '100%',
        width: '100%',
        alignItems: 'center',
        justifyContent: 'center',
      }}>
      <Text style={{fontSize: 22, marginBottom: 20}}>Time-Example</Text>
      <Text style={{fontSize: 14, marginBottom: 20}}>
        Time passed into the function: {backEndTimeStamp}
      </Text>
      <Text style={{fontSize: 14, marginBottom: 20}}>
        Converted to local timezone: {timeToDisplay}
      </Text>
      <Text>Your timezone: {deviceTimeZone}</Text>
    </View>
  );
}

export default Component;

Let's see that in action:

Success!

I think there are good ways to make this more compact and stuff, but for a tutorial, I'd rather go a little bit verbose than miss some detail.

Let me know if you found this helpful!
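The same parse / shift / format pipeline, sketched outside JavaScript for comparison (a hypothetical Python port, not part of the article's code; it mirrors formatTimeByOffset step by step):

```python
from datetime import datetime, timedelta

def format_time_by_offset(date_string, offset_hours):
    """Shift a backend 'yyyy-mm-dd hh:mm:ss' UTC string by offset_hours."""
    if not date_string:                 # anything falsy -> empty string
        return ""
    # Steps 1-2: parse the backend string into a datetime object
    dt = datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S")
    # Steps 3-4: add the user's UTC offset (timedelta handles midnight,
    # month and year rollover for us, like setHours does in JS)
    shifted = dt + timedelta(hours=offset_hours)
    # Steps 5-6: reformat, dropping the seconds
    return shifted.strftime("%Y-%m-%d %H:%M")

print(format_time_by_offset("2001-04-11 10:00:00", -7))  # -> 2001-04-11 03:00
```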
https://dev.to/ugglr/react-native-getting-user-device-timezone-and-converting-utc-time-stamps-using-the-offset-3jh8
git.haskell.org / hadrian.git / blobdiff commit grep author committer pickaxe ? search: re summary | shortlog | log | commit | commitdiff | tree raw | inline | side by side Build mkUserGuidePart with stage-0 [hadrian.git] / src / Builder.hs diff --git a/src/Builder.hs b/src/Builder.hs index 007dae3 .. 09b87cb 100644 (file) --- a/ src/Builder.hs +++ b/ src/Builder.hs @@ -1,114 +1,187 @@ -{-# LANGUAGE DeriveGeneric #-} +{-# LANGUAGE DeriveGeneric , LambdaCase #-} module Builder ( - Builder (..), builderPath, getBuilderPath, specified, needBuilder + CcMode (..), GhcMode (..), Builder (..), builderPath, getBuilderPath, + builderEnvironment, specified, trackedArgument, needBuilder ) where -import Base +import Control.Monad.Trans.Reader +import Data.Char import GHC.Generics (Generic) -import Oracles + +import Base +import Context +import GHC +import Oracles.Config +import Oracles.LookupInPath +import Oracles.WindowsPath import Stage --- A Builder is an external command invoked in separate process using Shake.cmd +-- | A compiler can typically be used in one of three modes: +-- 1) Compiling sources into object files. +-- 2) Extracting source dependencies, e.g. by passing -M command line argument. +-- 3) Linking object files & static libraries into an executable. +-- We have CcMode for CC and GhcMode for GHC. + +-- TODO: Consider merging FindCDependencies and FindMissingInclude +data CcMode = CompileC | FindCDependencies | FindMissingInclude + deriving (Eq, Generic, Show) + +data GhcMode = CompileHs | FindHsDependencies | LinkHs + deriving (Eq, Generic, Show) + +-- TODO: Do we really need HsCpp builder? Can't we use Cc instead? +-- | A 'Builder' is an external command invoked in separate process using 'Shake.cmd' -- ---?) --- TODO: add Cpp builders --- TODO: rename Gcc to Cc? --- TODO: do we really need staged builders? +-- ?) 
 data Builder = Alex
              | Ar
-             | Gcc Stage
-             | GccM Stage
+             | DeriveConstants
+             | Cc CcMode Stage
+             | Configure FilePath
+             | GenApply
              | GenPrimopCode
-             | Ghc Stage
+             | Ghc GhcMode Stage
              | GhcCabal
-             | GhcCabalHsColour
-             | GhcLink Stage
-             | GhcM Stage
+             | GhcCabalHsColour -- synonym for 'GhcCabal hscolour'
              | GhcPkg Stage
-             | GhcSplit
              | Haddock
              | Happy
+             | Hpc
              | HsColour
              | HsCpp
              | Hsc2Hs
              | Ld
+             | Make FilePath
+             | Nm
+             | Objdump
+             | Patch
+             | Perl
+             | Ranlib
+             | Tar
              | Unlit
-             deriving (Show, Eq, Generic)
-
--- Configuration files refer to Builders as follows:
-builderKey :: Builder -> String
-builderKey builder = case builder of
-    Alex             -> "alex"
-    Ar               -> "ar"
-    Gcc Stage0       -> "system-gcc"
-    Gcc _            -> "gcc"
-    GccM stage       -> builderKey $ Gcc stage -- synonym for 'Gcc -MM'
-    GenPrimopCode    -> "genprimopcode"
-    Ghc Stage0       -> "system-ghc"
-    Ghc Stage1       -> "ghc-stage1"
-    Ghc Stage2       -> "ghc-stage2"
-    Ghc Stage3       -> "ghc-stage3"
-    GhcLink stage    -> builderKey $ Ghc stage -- using Ghc as linker
-    GhcM stage       -> builderKey $ Ghc stage -- synonym for 'Ghc -M'
-    GhcCabal         -> "ghc-cabal"
-    GhcCabalHsColour -> builderKey $ GhcCabal -- synonym for 'GhcCabal hscolour'
-    GhcPkg Stage0    -> "system-ghc-pkg"
-    GhcPkg _         -> "ghc-pkg"
-    GhcSplit         -> "ghc-split"
-    Happy            -> "happy"
-    Haddock          -> "haddock"
-    HsColour         -> "hscolour"
-    Hsc2Hs           -> "hsc2hs"
-    HsCpp            -> "hs-cpp"
-    Ld               -> "ld"
-    Unlit            -> "unlit"
+             deriving (Eq, Generic, Show)
+
+-- | Some builders are built by this very build system, in which case
+-- 'builderProvenance' returns the corresponding build 'Context' (which includes
+-- 'Stage' and GHC 'Package').
+builderProvenance :: Builder -> Maybe Context
+builderProvenance = \case
+    DeriveConstants  -> context Stage0 deriveConstants
+    GenApply         -> context Stage0 genapply
+    GenPrimopCode    -> context Stage0 genprimopcode
+    Ghc _ Stage0     -> Nothing
+    Ghc _ stage      -> context (pred stage) ghc
+    GhcCabal         -> context Stage0 ghcCabal
+    GhcCabalHsColour -> builderProvenance $ GhcCabal
+    GhcPkg stage     -> if stage > Stage0 then context Stage0 ghcPkg else Nothing
+    Haddock          -> context Stage2 haddock
+    Hpc              -> context Stage1 hpcBin
+    Hsc2Hs           -> context Stage0 hsc2hs
+    Unlit            -> context Stage0 unlit
+    _                -> Nothing
+  where
+    context s p = Just $ vanillaContext s p
+
+isInternal :: Builder -> Bool
+isInternal = isJust . builderProvenance
+
+-- TODO: Some builders are required only on certain platforms. For example,
+-- Objdump is only required on OpenBSD and AIX, as mentioned in #211. Add
+-- support for platform-specific optional builders as soon as we can reliably
+-- test this feature.
+isOptional :: Builder -> Bool
+isOptional = \case
+    HsColour -> True
+    Objdump  -> True
+    _        -> False
+
+-- | Determine the location of a 'Builder'.
 builderPath :: Builder -> Action FilePath
-builderPath builder = do
-    path <- askConfigWithDefault (builderKey builder) $
-        putError $ "\nCannot find path to '" ++ (builderKey builder)
-                 ++ "' in configuration files."
-    fixAbsolutePathOnWindows $ if null path then "" else path -<.> exe
+builderPath builder = case builderProvenance builder of
+    Just context
+        | Just path <- programPath context -> return path
+        | otherwise ->
+            -- TODO: Make builderPath total.
+            error $ "Cannot determine builderPath for " ++ show builder
+                  ++ " in context " ++ show context
+    Nothing -> case builder of
+        Alex          -> fromKey "alex"
+        Ar            -> fromKey "ar"
+        Cc _ Stage0   -> fromKey "system-cc"
+        Cc _ _        -> fromKey "cc"
+        -- We can't ask configure for the path to configure!
+        Configure _   -> return "bash configure"
+        Ghc _ Stage0  -> fromKey "system-ghc"
+        GhcPkg Stage0 -> fromKey "system-ghc-pkg"
+        Happy         -> fromKey "happy"
+        HsColour      -> fromKey "hscolour"
+        HsCpp         -> fromKey "hs-cpp"
+        Ld            -> fromKey "ld"
+        Make _        -> fromKey "make"
+        Nm            -> fromKey "nm"
+        Objdump       -> fromKey "objdump"
+        Patch         -> fromKey "patch"
+        Perl          -> fromKey "perl"
+        Ranlib        -> fromKey "ranlib"
+        Tar           -> fromKey "tar"
+        _ -> error $ "Cannot determine builderPath for " ++ show builder
+  where
+    fromKey key = do
+        let unpack = fromMaybe . error $ "Cannot find path to builder "
+                ++ quote key ++ " in system.config file. Did you skip configure?"
+        path <- unpack <$> askConfig key
+        if null path
+        then do
+            unless (isOptional builder) . error $ "Non optional builder "
+                ++ quote key ++ " is not specified in system.config file."
+            return "" -- TODO: Use a safe interface.
+        else fixAbsolutePathOnWindows =<< lookupInPath path

 getBuilderPath :: Builder -> ReaderT a Action FilePath
 getBuilderPath = lift . builderPath

+-- | Write a Builder's path into a given environment variable.
+builderEnvironment :: String -> Builder -> Action CmdOption
+builderEnvironment variable builder = do
+    needBuilder builder
+    path <- builderPath builder
+    return $ AddEnv variable path
+
+-- | Was the path to a given 'Builder' specified in configuration files?
 specified :: Builder -> Action Bool
 specified = fmap (not . null) . builderPath

--- Make sure a builder exists on the given path and rebuild it if out of date.
--- If laxDependencies is True then we do not rebuild GHC even if it is out of
--- date (can save a lot of build time when changing GHC).
-needBuilder :: Bool -> Builder -> Action ()
-needBuilder laxDependencies builder = do
-    path <- builderPath builder
-    if laxDependencies && allowOrderOnlyDependency builder
-    then orderOnly [path]
-    else need [path]
-  where
-    allowOrderOnlyDependency :: Builder -> Bool
-    allowOrderOnlyDependency b = case b of
-        Ghc _  -> True
-        GhcM _ -> True
-        _      -> False
-
--- On Windows: if the path starts with "/", prepend it with the correct path to
--- the root, e.g: "/usr/local/bin/ghc.exe" => "C:/msys/usr/local/bin/ghc.exe".
-fixAbsolutePathOnWindows :: FilePath -> Action FilePath
-fixAbsolutePathOnWindows path = do
-    windows <- windowsHost
-    -- Note, below is different from FilePath.isAbsolute:
-    if (windows && "/" `isPrefixOf` path)
-    then do
-        root <- windowsRoot
-        return . unifyPath $ root ++ drop 1 path
-    else
-        return path
-
--- Instances for storing in the Shake database
+-- | Some arguments do not affect build results and therefore do not need to be
+-- tracked by the build system. A notable example is "-jN" that controls Make's
+-- parallelism. Given a 'Builder' and an argument, this function should return
+-- 'True' only if the argument needs to be tracked.
+trackedArgument :: Builder -> String -> Bool
+trackedArgument (Make _) = not . threadArg
+trackedArgument _        = const True
+
+threadArg :: String -> Bool
+threadArg s = dropWhileEnd isDigit s `elem` ["-j", "MAKEFLAGS=-j", "THREADS="]
+
+-- | Make sure a Builder exists on the given path and rebuild it if out of date.
+needBuilder :: Builder -> Action ()
+needBuilder = \case
+    Configure dir -> need [dir -/- "configure"]
+    builder       -> when (isInternal builder) $ do
+        path <- builderPath builder
+        need [path]
+
+-- | Instances for storing in the Shake database.
+instance Binary   CcMode
+instance Hashable CcMode
+instance NFData   CcMode
+
+instance Binary   GhcMode
+instance Hashable GhcMode
+instance NFData   GhcMode
+
 instance Binary   Builder
 instance Hashable Builder
 instance NFData   Builder
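The new `trackedArgument`/`threadArg` pair in the diff is compact Haskell; a rough Python re-implementation (my own sketch for illustration, not part of Hadrian, which is written in Haskell) shows how "-jN"-style parallelism arguments are excluded from tracking:

```python
# Sketch of Hadrian's trackedArgument/threadArg logic, re-implemented in
# Python for illustration. The real build system code is the Haskell above.

def is_thread_arg(s: str) -> bool:
    # Mirror of: dropWhileEnd isDigit s `elem` ["-j", "MAKEFLAGS=-j", "THREADS="]
    # Strip trailing digits, then compare against the known prefixes.
    stripped = s.rstrip("0123456789")
    return stripped in ("-j", "MAKEFLAGS=-j", "THREADS=")

def tracked_argument(builder: str, arg: str) -> bool:
    # Only Make's parallelism flags are untracked; every other argument may
    # affect build results and therefore must be tracked by the build system.
    if builder == "Make":
        return not is_thread_arg(arg)
    return True
```

So `tracked_argument("Make", "-j8")` is false (changing the job count should not invalidate previous builds), while the same flag passed to any other builder remains tracked.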
http://git.haskell.org/hadrian.git/blobdiff/c937606629a97188500bac159d2c8882ccbac3e9..a86f2b1e97fb7fa0ef08327f083049a41b278513:/src/Builder.hs
Subject: Re: [boost] [metaparse] Review period starts May 25th and ends June 7th - ongoing
From: Abel Sinkovics (abel_at_[hidden])
Date: 2015-05-30 01:43:30

Hi Louis,

On 2015-05-29 19:00, Louis Dionne wrote:

Thank you for checking it.

> 1. What is the `mpl_` namespace? Is it documented in the tutorial for Metaparse?
>    If it is a shortcut for `boost::mpl`, why the trailing underscore?

It comes from boost::mpl and is not related to Metaparse:

> #include <boost/mpl/int.hpp>
> boost::mpl::int_<13>
mpl_::int_<13>

This could be mentioned the first time it pops up in the tutorial.

> 2. I personally am not using Metashell for the tutorial because it does not
>    properly support the '<' and '>' characters on OS X (I think that's a
>    shellinabox bug). As a result, I find it slightly difficult to follow
>    the code blocks in the tutorial because of the newline escapes (`\`)
>    and numbered names (`exp_parser16`) required for the code to be pasteable
>    in Metashell. I would personally prefer a more classical approach with
>    normal code blocks not expecting the user to follow along in Metashell,
>    but I realize this is a matter of personal taste.

I'm aware of a shellinabox bug related to Firefox. Chromium seems to handle it properly on Linux & Windows. Another option is installing Metashell locally (all you need are the Boost and the Metaparse headers on your header path).

Since the tutorial is about getting the right types as the result of the right metafunction calls (in this case the parsers), it is mostly about checking that you got the types you expected. You can use other, more classical approaches (e.g. displaying the types in error messages, asserting for type equality, pretty-printing the types, etc.), but that is more involved in each iteration. (They could probably be mentioned in the tutorial, though.)

Regards,
Ábel

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/05/222845.php
This section describes how to copy file-per-table tablespaces from one database server to another, otherwise known as the Transportable Tablespaces feature. For information about other InnoDB table copying methods, see Section 14.5.2, “Moving or Copying InnoDB Tables to Another Machine”.

There are many reasons why you might copy an InnoDB file-per-table tablespace to a different database server:

- To run reports without putting extra load on a production server.
- To set up identical data for a table on a new slave server.
- To restore a backed-up version of a table after a problem or mistake.
- As a faster way of moving data around than importing the results of a mysqldump command. The data is available immediately, rather than having to be re-inserted and the indexes rebuilt.
- To move a file-per-table tablespace to a server with a storage medium that better suits system requirements. For example, you may want to have busy tables on an SSD device, or large tables on a high-capacity HDD device.

Limitations:

- The tablespace copy procedure is only possible when innodb_file_per_table is set to ON, which is the default setting as of MySQL 5.6.6. Tables residing in the shared system tablespace cannot be quiesced.
- When a table is quiesced, only read-only transactions are allowed on the affected table.
- When importing a tablespace, the page size must match the page size of the importing instance.
- DISCARD TABLESPACE is not supported for partitioned tables, meaning that transportable tablespaces is also unsupported for them. If you run ALTER TABLE ... DISCARD TABLESPACE on a partitioned table, the following error is returned: ERROR 1031 (HY000): Table storage engine for 'part' doesn't have this option.
- DISCARD TABLESPACE is not supported for tablespaces with a parent-child (primary key-foreign key) relationship when foreign_key_checks is set to 1. Before discarding a tablespace for parent-child tables, set foreign_key_checks=0.
- ALTER TABLE ...
IMPORT TABLESPACE does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time.

ALTER TABLE ... IMPORT TABLESPACE does not require a .cfg metadata file to import a tablespace. However, metadata checks are not performed when importing without a .cfg file, and a warning similar to the following is issued:

Message: InnoDB: IO Read error: (2, No such file or directory) Error opening '.\test\t.cfg', will attempt to import without schema verification
1 row in set (0.00 sec)

The ability to import without a .cfg file may be more convenient when no schema mismatches are expected. Additionally, the ability to import without a .cfg file could be useful in crash recovery scenarios in which metadata cannot be collected from an .ibd file.

In MySQL 5.6 or later, importing a tablespace file from another server works if both servers have GA (General Availability) status and their versions are within the same series. Otherwise, the file must have been created on the server into which it is imported.

In replication scenarios, innodb_file_per_table must be set to ON on both the master and slave.

On Windows, InnoDB stores database, tablespace, and table names internally in lowercase. To avoid import problems on case-sensitive operating systems such as Linux and UNIX, create all databases, tablespaces, and tables using lowercase names. A convenient way to accomplish this is to add the following line to the [mysqld] section of your my.cnf or my.ini file before creating databases, tablespaces, or tables:

[mysqld]
lower_case_table_names=1

This procedure demonstrates how to copy a table stored in a file-per-table tablespace from a running MySQL server instance to another running instance. The same procedure, with minor adjustments, can be used to perform a full table restore on the same instance.
On the source server, create a table if one does not already exist:

mysql> use test;
mysql> CREATE TABLE t(c1 INT) engine=InnoDB;

On the destination server, create a table if one does not exist:

mysql> use test;
mysql> CREATE TABLE t(c1 INT) engine=InnoDB;

On the destination server, discard the existing tablespace. (Before a tablespace can be imported, InnoDB must discard the tablespace that is attached to the receiving table.)

mysql> ALTER TABLE t DISCARD TABLESPACE;

On the source server, run FLUSH TABLES ... FOR EXPORT to quiesce the table and create the .cfg metadata file:

mysql> use test;
mysql> FLUSH TABLES t FOR EXPORT;

The metadata (.cfg) file is created in the InnoDB data directory. FLUSH TABLES ... FOR EXPORT is available as of MySQL 5.6.6. The statement ensures that changes to the named tables have been flushed to disk so that binary table copies can be made while the server is running. When FLUSH TABLES ... FOR EXPORT is run, InnoDB produces a .cfg file in the same database directory as the table. The .cfg file contains metadata used for schema verification when importing the tablespace file.

Copy the .ibd file and .cfg metadata file from the source server to the destination server. For example:

shell> scp /path/to/datadir/test/t.{ibd,cfg} destination-server:/path/to/datadir/test

The .ibd file and .cfg file must be copied before releasing the shared locks, as described in the next step.

On the source server, use UNLOCK TABLES to release the locks acquired by FLUSH TABLES ... FOR EXPORT:

mysql> use test;
mysql> UNLOCK TABLES;

On the destination server, import the tablespace:

mysql> use test;
mysql> ALTER TABLE t IMPORT TABLESPACE;

The ALTER TABLE ... IMPORT TABLESPACE feature does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time.
In this case you would stop updating the tables, commit all transactions, acquire shared locks on the tables, and then perform the export operation.

The following information describes internals and error log messaging for the transportable tablespaces copy procedure.

Expected error log messages for the export operation:

2013-07-18 14:47:31 34471 [Note] InnoDB: Sync to disk of '"test"."t"' started.
2013-07-18 14:47:31 34471 [Note] InnoDB: Stopping purge
2013-07-18 14:47:31 34471 [Note] InnoDB: Writing table metadata to './test/t.cfg'

Expected error log messages for the import operation:

2013-07-18 15:01:40 34471 [Note] InnoDB: Deleting the meta-data file './test/t.cfg'

Copyright © 1997, 2015, Oracle and/or its affiliates. All rights reserved.

Note (you can read about it at) that for a long time already, since it was fixed in 5.6.8, the .cfg file is not necessary, at least for some (not clearly identified) cases. Indeed, recent MySQL 5.6.x versions will import just the .ibd file, assuming it is "clean" and the table really has the same structure.

Given a Server A and an InnoDB table (t500) symlinked thus:

CREATE TABLE `t500` (
  `id` int(11) NOT NULL,
  `c` char(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 DATA DIRECTORY='/var/lib/mysqlw/';

Server A mysql datadir=/var/lib/mysql/data

And you want to import this InnoDB table on server B, but under a different DATA DIRECTORY clause, so on server B you do:

CREATE TABLE `t500` (
  `id` int(11) NOT NULL,
  `c` char(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 DATA DIRECTORY='/var/lib/mysqla/';

Server B mysql datadir=/var/lib/mysql/data

The import of t500 from Server A to B works just fine.
However, trying to import this same table from server A with definition

CREATE TABLE `t500` (
  `id` int(11) NOT NULL,
  `c` char(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 DATA DIRECTORY='/var/lib/mysqlw/';

to server B with definition

CREATE TABLE `t500` (
  `id` int(11) NOT NULL,
  `c` char(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

will fail with the following error:

ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x6 and the meta-data file has 0x41)

...I guess an InnoDB table symlinked on the source should also be symlinked on import on the destination server! By the way, I am using the MySQL 5.6.10 community version.

You'll want to issue an ANALYZE TABLE after importing, in order to notify the data dictionary about the new indexes for that table. Otherwise, they won't be used and the table's cardinality will be reported as 0. At least as of 5.6.20.
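The statement sequence described above is easy to get out of order by hand (discard before copying, unlock only after copying). The following Python helper is my own illustration, not a MySQL tool: it only assembles the SQL strings for the two sides of the procedure, leaving execution and the file copy to the operator:

```python
# Illustrative helper that emits the transportable-tablespace statement
# sequence from this section. It only builds SQL strings; actually running
# them, and copying the .ibd/.cfg files between servers, is up to the
# operator. Remember: DISCARD on the destination happens before the copy,
# and UNLOCK TABLES on the source happens only after the copy.

def export_statements(schema: str, table: str) -> list[str]:
    # Source server: quiesce the table and produce the .cfg metadata file.
    return [
        f"USE {schema};",
        f"FLUSH TABLES {table} FOR EXPORT;",
        # ... copy the .ibd and .cfg files while the shared lock is held ...
        "UNLOCK TABLES;",
    ]

def import_statements(schema: str, table: str) -> list[str]:
    # Destination server: discard the existing tablespace, then (after the
    # files are in place) import the copied one.
    return [
        f"USE {schema};",
        f"ALTER TABLE {table} DISCARD TABLESPACE;",
        f"ALTER TABLE {table} IMPORT TABLESPACE;",
    ]
```

For the example in this section, `export_statements("test", "t")` reproduces the source-server statements shown above, with the `scp` of `t.ibd`/`t.cfg` happening between the FLUSH and the UNLOCK.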
http://dev.mysql.com/doc/refman/5.6/en/tablespace-copying.html
On Mon, 19 Feb 2001, Stefano Mazzocchi wrote:

> Last time I checked, the XInclude syntax changed from elements to
> attributes, following the xlink metaphor. And I believe this is a better
> way of doing things since xinclusion is a behavioral property (like
> link) not a structural one.
>
> Did the proposal go back to elements? I'll check it out myself, but in
> that case, I'll propose to drop support for XInclude altogether and
> implement our own namespace for that, also because there is a problem
> with XInclude:
>
> what happens if I *DON'T* want Cocoon to process my xinclude stuff?
>
> XInclude is mostly designed (like almost anything from the xml-linking
> group) for client side operation. What happens if mozilla starts
> supporting xinclude on the client side? how do I send something from
> Cocoon to mozilla escaping the xinclude stuff?

i think ideally, you'd lobby the w3c working group to add a new attribute to the xinclude:include element indicating where processing of the xinclude may or may not occur. realistically, i think we're better off writing our own namespace. as "nice" as it would be to conform to standards in all aspects of cocoon, when the standards are fluctuating, aren't implemented by anyone other than us, don't fully address our issues, and are fairly trivial anyway, i think we can feel good about breaking new ground instead.

- donald
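The idea floated here (a flag on the include element telling a server-side processor to leave it alone for the client) can be sketched with any XML toolkit. The `process="client"` attribute below is hypothetical, not part of the real XInclude specification, and the transformer is my own minimal sketch of the behavior being discussed:

```python
# Sketch of the proposal in this thread: perform an inclusion server-side
# unless a (hypothetical, non-standard) process="client" attribute says to
# leave the element for the client. The namespace URI is the real XInclude
# one; the attribute and the resolve callback are invented for illustration.
import xml.etree.ElementTree as ET

XI = "{http://www.w3.org/2001/XInclude}"

def process_includes(root, resolve):
    """Replace xi:include elements in place, unless marked process='client'."""
    # Snapshot the tree walk first so mutation doesn't disturb iteration.
    for parent in list(root.iter()):
        for i, child in enumerate(list(parent)):
            if child.tag == XI + "include":
                if child.get("process") == "client":
                    continue  # leave it for a client-side processor
                parent[i] = resolve(child.get("href"))
    return root
```

A usage example: feeding in a document with two includes, one marked for the client, expands only the unmarked one and passes the other through untouched.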
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3CPine.LNX.4.30.0102191318080.25832-100000@rdu162-231-084%3E
On 9 October 2014 13:36, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Oct 07, 2014 at 02:13:35PM +0200, Vincent Guittot wrote:
>> Monitor the usage level of each group of each sched_domain level. The usage is
>> the amount of cpu_capacity that is currently used on a CPU or group of CPUs.
>> We use the utilization_load_avg to evaluate the usage level of each group.
>>
>> The utilization_avg_contrib only takes into account the running time but not
>> the uArch so the utilization_load_avg is in the range [0..SCHED_LOAD_SCALE]
>> to reflect the running load on the CPU. We have to scale the utilization with
>> the capacity of the CPU to get the usage of the latter. The usage can then be
>> compared with the available capacity.
>
> You say cpu_capacity, but in actual fact you use capacity_orig and fail
> to justify/clarify this.

You're right, it's cpu_capacity_orig, not cpu_capacity.

cpu_capacity is the compute capacity available for CFS tasks once we have removed the capacity that is used by RT tasks.

We want to compare the utilization of the CPU (utilization_avg_contrib, which is in the range [0..SCHED_LOAD_SCALE]) with the available capacity (cpu_capacity, which is in the range [0..cpu_capacity_orig]).

A utilization_avg_contrib equal to SCHED_LOAD_SCALE means that the CPU is fully utilized, so all of cpu_capacity_orig is used. So we scale the utilization_avg_contrib from [0..SCHED_LOAD_SCALE] into cpu_usage in the range [0..cpu_capacity_orig].

>> The frequency scaling invariance is not taken into account in this patchset,
>> it will be solved in another patchset
>
> Maybe explain what the specific invariance issue is that is skipped over
> for now.

OK.
I can add a description of the fact that if the core runs slower, the tasks will use more running time of the CPU for the same job, so the usage of the CPU, which is based on the amount of time, will increase.

>> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
>> ---
>>  kernel/sched/fair.c | 13 +++++++++++++
>>  1 file changed, 13 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index d3e9067..7364ed4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4551,6 +4551,17 @@ static int select_idle_sibling(struct task_struct *p, int target)
>>  	return target;
>>  }
>>
>> +static int get_cpu_usage(int cpu)
>> +{
>> +	unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
>> +	unsigned long capacity = capacity_orig_of(cpu);
>> +
>> +	if (usage >= SCHED_LOAD_SCALE)
>> +		return capacity + 1;
>
> Like Morten I'm confused by that +1 thing.

OK. The goal was to point out the erroneous case where usage is out of the range, but if it generates confusion, it can be removed.

>> +
>> +	return (usage * capacity) >> SCHED_LOAD_SHIFT;
>> +}
>
> A comment with that function that it returns capacity units might
> clarify shift confusion Morten raised the other day.

OK. I will add a comment.
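The arithmetic under discussion, scaling utilization from the [0..SCHED_LOAD_SCALE] range into capacity units, is simple enough to model outside the kernel. This Python model is my own illustration of the `get_cpu_usage()` shown in the patch (the real code is the C above; 1024/shift-10 is the typical SCHED_LOAD_SCALE configuration):

```python
# Model of the get_cpu_usage() arithmetic debated in this thread (my own
# illustration; the authoritative version is the C in the patch).
SCHED_LOAD_SHIFT = 10
SCHED_LOAD_SCALE = 1 << SCHED_LOAD_SHIFT  # 1024 on typical configs

def get_cpu_usage(utilization, capacity_orig):
    # A fully busy CPU (utilization >= SCHED_LOAD_SCALE) reports more than
    # its whole original capacity; the "+ 1" flags the out-of-range case,
    # which is exactly the part Peter and Morten found confusing.
    if utilization >= SCHED_LOAD_SCALE:
        return capacity_orig + 1
    # Scale utilization [0..1024] into capacity units [0..capacity_orig].
    # The result is in capacity units, the point of Peter's last remark.
    return (utilization * capacity_orig) >> SCHED_LOAD_SHIFT
```

For example, half utilization (512) on a full-capacity CPU (1024) yields a usage of 512 capacity units, directly comparable with cpu_capacity.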
https://lkml.org/lkml/2014/10/9/248
My ColdFusion User Defined Function Library Structure

I was recently asked how I build my library of ColdFusion user defined functions (UDFs). I sometimes post code samples that will break if you copy them directly, as they contain references to parts of the library that you do not have.

I cache all my UDFs in the APPLICATION scope in a single ColdFusion Component (CFC). Then, for each page request, I get a reference to it and store it in the REQUEST scope:

REQUEST.UDFLib = APPLICATION.ServiceFactory.GetUDFLib();

I put it into the request scope more for personal reasons (as opposed to going to the APPLICATION scope each time). The ServiceFactory is just a cached object instance in my APPLICATION scope that houses an interface for getting and creating objects in my application. Without a ServiceFactory, you could easily do this like:

<cfset APPLICATION.UDFLib = CreateObject( "component", "UDFLib" ).Init() />

Now, my UDF library is one object, but it is actually composed of many different objects. I like to house each of my UDF library categories (i.e. Text, XML) in a different component. This makes adding and editing methods much easier and it forces me to think out my code more. Plus, it cuts down on naming conflicts and allows for smaller naming conventions. Here is the code for the main UDF library object:

<cfcomponent
	displayname="UDFLib"
	extends="AbstractBaseComponent"
	output="no"
	hint="Houses all the user defined functions for this application.">

	<cffunction
		name="Init"
		access="public"
		returntype="UDFLib"
		output="no"
		hint="Returns an initialized user defined library instance.">

		<cfscript>

			// Since this is a library and NOT a real entity bean, we are going to
			// store all library references in the THIS scope so that they are
			// easily accessible to anyone who has an instance reference.

			// System contains core system functions.
			THIS.System = CreateObject( "component", "UDFlib.SystemLib" ).Init( THIS );

			// AJAX contains AJAX utility functions.
			THIS.AJAX = CreateObject( "component", "UDFLib.AJAXLib" ).Init( THIS );

			// Array contains array manipulation functions.
			THIS.Array = CreateObject( "component", "UDFLib.ArrayLib" ).Init( THIS );

			// DateTime contains date and time manipulation functions.
			THIS.DateTime = CreateObject( "component", "UDFlib.DateTimeLib" ).Init( THIS );

			// List contains list manipulation functions.
			THIS.List = CreateObject( "component", "UDFlib.ListLib" ).Init( THIS );

			// IO contains input and output related functions.
			THIS.IO = CreateObject( "component", "UDFlib.IOLib" ).Init( THIS );

			// Query contains query manipulation functions.
			THIS.Query = CreateObject( "component", "UDFlib.QueryLib" ).Init( THIS );

			// Struct contains struct manipulation functions.
			THIS.Struct = CreateObject( "component", "UDFlib.StructLib" ).Init( THIS );

			// Text contains text manipulation functions.
			THIS.Text = CreateObject( "component", "UDFlib.TextLib" ).Init( THIS );

			// Validation contains data validation functions.
			THIS.Validation = CreateObject( "component", "UDFlib.ValidationLib" ).Init( THIS );

			// Xml contains xml manipulation functions.
			THIS.Xml = CreateObject( "component", "UDFlib.XmlLib" ).Init( THIS );

			// The custom library is stuff that is decidedly NOT part of the main function
			// library as it has been created specifically for the current application. This
			// library would not be used for a different application.
			THIS.Custom = CreateObject( "component", "UDFlib.CustomLib" ).Init( THIS );

			// Return THIS object.
			return( THIS );

		</cfscript>

	</cffunction>

</cfcomponent>

As you can see, each type of library method is stored in its own component in the THIS scope of the library. Therefore, when I need to reference a method, I cannot just call the UDF library object; I have to call a sub-component of it:

<cfset qNew = UDFLib.Query.Append( qOne, qTwo ) />

I like this not only because it makes updating the library easier; it also makes the code a bit more self-explanatory.
Seeing what "library" you are calling will immediately tell you more about what the method does.

As you can see above, when I create my sub-libraries, I always pass in a THIS reference to the main UDF library object. This way, all the child libraries can have a reference to the parent library and therefore can call methods in sibling libraries. Each Init() method of a child library looks similar to this:

<cffunction
	name="Init"
	access="public"
	returntype="ArrayLib"
	output="no"
	hint="Returns an initialized library instance.">

	<!--- Define arguments. --->
	<cfargument
		name="Library"
		type="struct"
		required="yes"
		/>

	<cfscript>

		// The library variable is a reference to the main UDF
		// library. This is going to be used to reference other
		// parts of the library that are not in this specific
		// component.
		VARIABLES.Library = ARGUMENTS.Library;

		// Return THIS object.
		return( THIS );

	</cfscript>
</cffunction>

The child library stores the reference to the parent library. This way, if I am in one method, I can easily call methods from another library:

<cfset strText = VARIABLES.Library.Text.ToAsciiString( "-" ) />

So there you have it, my UDF library approach.

Reader Comments

I use a very similar structure for my libraries, with one small difference. Where you have your ServiceFactory, I have a PluginManager base class. All of the helper object CFCs then inherit from a Plugin base class, while the specific factories inherit from the PluginManager class. The PluginManager base class uses reflection to look at the inherited class' namespace and tries to load all CFCs present in a Plugins directory beneath it. Thus, no explicit loading. I just have to drop a new file into a Plugins directory and the PluginManager will load it. The plugins don't really have to inherit from the Plugin base class, but if I add the extra extends attribute in the <cfcomponent>, they get nifty things like knowing they are a plugin, figuring out who their parent Manager is, etc.
Thus, visually, I have a file structure like so:

\org
  \rickosborne
    PluginManager.cfc (base class)
    Plugin.cfc (base class)
    FooManager.cfc (extends PluginManager)
    SockManager.cfc (extends PluginManager)
    \foo
      bar.cfc (extends Plugin)
      quux.cfc (maybe extends Plugin)
    \sock
      red.cfc (extends Plugin)
      blue.cfc (extends Plugin)

Then, in my Application.cfc, I only have to do one explicit object creation for each factory:

<cfset FooManager = CreateObject("component", "org.rickosborne.FooManager").init()>
<cfset SockMgr = CreateObject("component", "org.rickosborne.SockManager").init()>

And the rest is just automagic. The PluginManager base class does a <cfdirectory> listing of the \org\rickosborne\foo directory and tries to load each of the plugins. The main benefit in my case is that if I don't want someone to have a specific plugin, I just don't give it to them. No code changes necessary. Conversely, adding new plugins to a client's site generally requires no code, just dropping a new CFC into a folder.

Rick,

That is pretty cool. It goes a little over my head, but I think I understand it. I like the idea though. I am slowly trying to learn more about OOP and clever CFC ideas. How do you handle the idea of passing in arguments during object instantiation? Is that part of the reflection stuff to figure out what is being asked for? Very cool though.

You wrote here:

# // Array contains array manipulation functions.
# THIS.Array = CreateObject( "component", "UDFLib.ArrayLib" ).Init( THIS );

Does UDFLib in "UDFLib.ArrayLib" mean (map to) a real folder under the CFC mapping folder, or do you use it because you are creating from inside the UDFLib CFC?

Ameen

Ameen,

Sorry about that. It is just a directory structure. I keep all of my CFCs in a folder called "extensions". I don't like mappings and try not to use them. A lot of people will yell at me for that, but mappings, in my experience, only cause hardship.

So, as for directory structure, I have:

extensions
  UDFLib

So when I say that I am creating a component in path "UDFLib.Text", it merely means that it is the Text.cfc component within the directory UDFLib. I don't need any mappings for this since it is being called from a component in the "Extensions" directory. Hence, UDFLib.Text is relative to the calling directory. Hope that helps.

I got it. Thank you, Ben. I appreciate your work and explanation.

Any time :)

Ben asked:

"How do you handle the idea of passing in arguments during object instantiation? Is that part of the reflection stuff to figure out what is being asked for?"

Generally, I don't have to worry about it. Most of these plugin-type libraries, for me anyway, fall into one of two categories:

1. Filter (transform x into y, such as a crypto hash)
2. Action (take thing x and mail it to me, logging it to file y)

In both cases, I then have 1 of 4 approaches:

1. I don't really need any initialization data, as the very existence of the plugin means it is doing something specifically different than all of the other filters (SHA vs MD5 vs ROT-13, etc.)

2. The plugin is a Black Box, and thus persists nothing, thus not really needing any initialization data, and everything must be passed into it for each function (a Crypto manager would give you back a SHA black box, but you would then have to call a SetKey function to do any initialization)

3. If it really needs some kind of initialization data, the plugin can ask its Manager for it when it is init()ed. Thus, if all of the plugins should share one datasource, the Manager would know about it.

4. That last point can be really abstracted to the case where I have a "shim" class that extends Plugin (maybe CryptoPlugin) and then the other classes inherit from it (SHACryptoPlugin).
Thus, the shim could be the one that asks the Manager for any sort of initialization data, and we're back to the plugins being very dumb/Black Box and getting their initialization data through inheritance instead of explicitly. (Certainly not my favorite option, but I've been forced to do it at least once in the name of Refactoring Mercilessly.)

I know it's a quirky design decision, but I have been very much into Black Box objects lately. It means a lot more function arguments (you are explicitly passing the datasource/DAO each time), but it has also led to a lot more code reuse for me. It's certainly not for everyone, though. I'm sure there are others that would look at my code and say "wtf? why not just have one object per datasource?", but for the types of applications I'm working on, it works out very well. (Most of my apps have to work against multiple datasources of multiple radically-different DBMS types, sometimes at once.)

I also like it because it leads to very simple-to-read and simple-to-upgrade code, as you're never trying to figure out where-t-f the frickin' datasource is coming from -- it's right there in the argument list. (Or Reactor object vs datasource or whatever.) The application I threw out when I got here was insidious because it looked like the previous developer tried to do OOP, but then would get frustrated 4 layers deep and break the object model horribly by hard-coding a datasource or table name or something. And it was Fusebox. Really nasty Fusebox. It made debugging a living hell. Hence, my retaliation with uber-Black-Boxes.

Rick,

Awesome explanation. I think I am sort of understanding what you are talking about. I too like the black box idea. I am doing my very best not to break encapsulation rules. That is why I always pass my data source around and, in the above example, why I pass in references to the main library to all child libraries, so that all calls are made through whatever was passed and no guesswork has to be done.
I am slowly learning more OOP, but it is not an easy journey. I learn very well by doing, but I find it hard to find a really good, but small, example of great OOP design. But as time goes on, more stuff is starting to make sense.

Okay, I reread that comment and realized it was hopelessly vague. Let me give you a specific example. As I've mentioned on my blog, one of my apps is an online store. My company sells product through several different brands, each with their own web site and design, but all the same product pool. Most of this is easily accomplished through a simple CSS swap (yay CSS!), but there is still some text branding, and you need a way to add in any custom CSS or JavaScript for each site. Completely distinct from this is the problem of logging into the site(s), which can happen from multiple service providers. I went into this a little in this blog entry:

The directory structure (vastly simplified) and object model look like this:

org/
. rickosborne/
. . SiteManager.cfc
. . Site/
. . . site.cfc (generic baseclass/shim/fallback)
. . . rickosborne.cfc (actually extends site.cfc, not plugin!)
. . . rixsoft.cfc
. . . corri.cfc
. . IdentityManager.cfc
. . Identity/
. . . noauth.cfc (generic baseclass/shim/fallback)
. . . session.cfc
. . . client.cfc
. . . cookie.cfc
. . . url.cfc
. . PageManager.cfc
. . Page/
. . . static.cfc
. . . store.cfc
. . . news.cfc
. . . LoginManager.cfc
. . . . local.cfc (generic baseclass/shim/fallback)
. . . . inames.cfc
. . . . openid.cfc
. . . . sxip.cfc
. . . . google.cfc
. . Response.cfc

The Application.cfc initializes the 3 managers, which automagically initialize all of their plugins.
In the onRequestStart, I have the following code:

<cfset Response=CreateObject("component","org.rickosborne.response").init()>
<cfset Response.Site=SiteManager.CurrentSite()>
<cfset Response.Identity=IdentityManager.CurrentIdentity()>
<cfset Response.Page=PageManager.CurrentPage()>
<cfset Response.Render()>

The CurrentSite() function in the SiteManager just loops through each of its plugins and asks isCurrentSite(), which is actually only ever defined in the site.cfc base class, along with a HostNames array. Each of the derived classes then only adds their host names to their local copy of HostNames during init(), and with no code in the inherited classes, I automagically get the right one anyway. (I'm thinking about converting this to XML, but I'm on the fence about it.) There's absolutely zero duplicated code. The generic Site object is the only one to override this method, and always returns True, but then does not perform any branding later on (or returns an "Unknown Site" message). Sites are view-level objects.

The IdentityManager is model-level code that is half an abstraction for Session and Client variables and half implementation of the various login methods. Do they have a Session that is authenticated? Maybe it's in Client variables instead? Okay, try looking for Cookies. What about a URL nonce? (The view-level parts with the actual login forms are actually plugins handled by a LoginManager, which is itself also a Page plugin.)

Again, the same thing happens for the PageManager. The kicker is that CurrentPage isn't actually rendering the page, it's just returning the object that can render the page. Since the Page rendering may be different depending on the Identity and the Site, you'd have a chicken-and-egg problem unless you had all of your objects in place and rendered them all at once. The generic Page object always returns a really simple object that ignores the Identity and Site and just says "No such page".
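The manager/plugin lookup Rick describes can be sketched outside CFML as well. Here is a rough Python analogue of the SiteManager.CurrentSite() loop — an illustration of the pattern only, not the author's code, and all class and host names are hypothetical:

```python
class Site:
    """Generic base class / fallback: the matching logic lives only here."""
    host_names = []

    def is_current_site(self, host):
        # Derived classes only supply data; they inherit this behavior.
        return host in self.host_names

class RickOsborneSite(Site):
    host_names = ["rickosborne.org", "www.rickosborne.org"]

class RixsoftSite(Site):
    host_names = ["rixsoft.com"]

class SiteManager:
    def __init__(self, plugins):
        self.plugins = plugins

    def current_site(self, host):
        # Ask each plugin in turn; fall back to the generic Site.
        for plugin in self.plugins:
            if plugin.is_current_site(host):
                return plugin
        return Site()

manager = SiteManager([RickOsborneSite(), RixsoftSite()])
print(type(manager.current_site("rixsoft.com")).__name__)  # RixsoftSite
```

Adding a new brand then amounts to dropping in one more data-only subclass, which mirrors the "zero duplicated code" point above.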
Pages are view-level objects, but can request model-level objects from the Response object (such as a Store or Cart object, which the Response object asks the Application for). The controller part is of course the Response object, which doesn't do much more than figure out the current context (Where/Site, Who/Identity, What/Page) and get them to talk to each other long enough to render the page.

Whew. But, hopefully you can see that having everything as dynamically-loaded plugins is extremely useful. If I want to add a new sitewide branding design, I create a new blank Site CFC, set the few tiny host names and branding specifics, drop it in the folder, then re-init() the SiteManager. Same goes if I want to add a new authentication scheme or page type (Blog, News, Calendar, whatever).

This comes in even more handy for incremental development. I can easily create a Store_v2 object that only answers true to CurrentPage() if it knows it is me, thus I'm the only one that sees it. I can then develop v2 side-by-side with v1 and then remove the v1 code when v2 is ready. No muss, no fuss, no worrying about "oh crap! v2 requires FooBar and I forgot to put it on the production server!".

Rick, I think where you are is basically where I aim to get. Methodology or not, I am talking about higher-level design. I really like the plug-in idea and the ease with which things are swapped out. I am gonna let all that stew in my head for a while. Thanks for posting such in-depth explanations of your stuff. It's super helpful.

Have I missed something? Wouldn't it be <cfset qNew = Request.UDFLib.Query.Append( qOne, qTwo ) />

Brendan, You are absolutely correct. My mistake. Since the UDFLib instance is cached in the REQUEST scope, I would have to use REQUEST in the code. That was a typo. Thanks.

Hi Ben, I was thinking that one of your reasons for not using the application scope was not typing out application.UDFLib all the time, then I thought you would have to type out request.UDFLib anyway.
Suppose the variables scope isn't accessible from some of your code? Or are there other reasons why the variables scope isn't useful? I don't want to get confused between the CFC Variables scope and the Page Variables scope. At this point, I like the way REQUEST looks (yeah, I am that shallow). I don't see VARIABLES as adding anything over the REQUEST scope; it seems adequate for a number of reasons. I don't fully understand the CF memory model either.

Your post above is very helpful. As it's two years old, I'm wondering if you have an update of more recent wisdom. If so, would you share it? Many thanks

@Don, I still use this basic idea. I wish there was a way to make globally accessible UDFs act like built-in functions, but I have not found a way; plus, for name-conflict reasons, it's probably not the best goal :) I now have a UDF object (like above) that I just pass around as needed.

Library, or a "String"/String.cfc Library. I know I should just do what's easiest for me to understand. I'm just curious about your opinion on this.

I don't believe there's a way to calculate an answer with any specificity. As I posted above (quite some time back), the CF memory model is a bit of a mystery to me. FWIW, my function libraries total about 125k on disk and work very well, even on massively shared hosts. - Don

@Don, my question doesn't really have anything to do with memory or architecture... I'm more thinking about what functions to put in what libraries.... When I have a method that converts one type of object (let's say a "list") into another type of object (let's say an "array"), I was unsure if that was a List method or an Array method. But, now, I've got it down - basing it on the INPUT type.

You would likely put them in your CustomLib.cfc. But, that could get big and out of control, so would you create a CheckoutLib.cfc and a MyAccountLib.cfc to store methods specific to those apps?

Works like a charm even though it seems like rather a strange workaround. (Dec.
8, 2009)

Thanks for all of these great tricks and treats! Not something I had even thought about trying before... holy crap... it's like Christmas in here today... (Though, I wish I would have searched on this 3 years ago... I could have historically made really good use of these!)
https://www.bennadel.com/blog/257-my-coldfusion-user-defined-function-library-structure.htm
CC-MAIN-2020-40
refinedweb
3,275
66.23
I've tested with Python 2.7, under Visual Studio Code.

Convert csv to tsv:

import csv

# Raw strings keep the backslashes in the Windows paths from being
# read as escape sequences; the Python 2 csv module wants binary mode.
with open(r'Directory\filename.csv', 'rb') as csvin, open(r'Directory\filename.txt', 'wb') as tsvout:
    csvin = csv.reader(csvin)
    tsvout = csv.writer(tsvout, delimiter='\t')
    for row in csvin:
        tsvout.writerow(row)

Convert csv to ttl:

#!/usr/bin/env python
# import the CSV module for dealing with CSV files
import csv

# create a 'reader' variable, which allows us to play with the contents of the CSV file.
# in order to do that, we create the ifile variable, open the CSV file into that,
# then pass its contents into the reader variable.
ifile = open(r'Directory\filename.csv', 'rb')
reader = csv.reader(ifile)

# create a new variable called 'outfile' (could be any name), which we'll use
# to create a new file that we'll pass our TTL into.
outfile = open(r'Directory\filename.ttl', 'a')

# get python to loop through each row in the CSV, and ignore the first row.
rownum = 0
for row in reader:
    if rownum == 0:
        # if it's the first row, then ignore it, move on to the next one.
        pass
    else:
        # if it's not the first row, place the contents of the row into the 'c'
        # variable, then create a 'd' variable with the stuff we want in the file.
        c = row
        # NOTE: the start of the next line was garbled in the original post;
        # the subject term (here c[0]) is a reconstruction.
        d = '<' + c[0] + '> a pol:Council ;\n core:preferredLabel "' + c[2] + '" ;\n core:sameAs <' + c[3] + '> .\n\n'
        outfile.write(d)  # now write the d variable into the file
    rownum += 1  # advance the row number so we can loop through again with the next row

# finish off by closing the two files we created
outfile.close()
ifile.close()
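The scripts above target Python 2.7. Under Python 3, the same CSV-to-TSV conversion might be written as follows (a minimal sketch; the file paths are placeholders):

```python
import csv

def csv_to_tsv(src_path, dest_path):
    # Python 3: open text files with newline='' as the csv docs recommend,
    # so the csv module fully controls line endings.
    with open(src_path, newline='') as csvin, \
         open(dest_path, 'w', newline='') as tsvout:
        writer = csv.writer(tsvout, delimiter='\t')
        for row in csv.reader(csvin):
            writer.writerow(row)
```

The main differences from the Python 2 version are text-mode files with `newline=''` instead of binary mode.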
https://www.computation.space/2020/06/csv-to-tsv-ttl.html?showComment=1592372055627
Kruskal’s algorithm is used to find the minimal spanning tree for a network with a set of weighted links. This might be a telecoms network, or the layout for planning pipes and cables, or one of many other applications. It is a greedy algorithm that finds the set of links for which the total weight is a minimum, subject to the constraint that the resulting network is a tree containing no cycles. In other words, the objective is to produce a spanning tree of minimal weight.

In this implementation, the Boost Graph Library is used to take an undirected graph represented by a node or edge list, along with the weights of each link in the graph, and return the edge list that comprises the minimal spanning tree. If it is just a simple C++ implementation of Kruskal’s algorithm you are after, without GUI and Boost libraries, here is another post with some downloadable code.

The example implementation that I used in my own version is given below:

#include "stdafx.h"
#include "Algorithm.h"
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/kruskal_min_spanning_tree.hpp>

using namespace boost;
using namespace std;

//
// Kruskal's algorithm for finding minimum spanning tree
//
void Algorithm::Kruskal( Network &network, vector< Link > &mst )
{
    typedef adjacency_list < vecS, vecS, undirectedS, no_property, property < edge_weight_t, int > > Graph;
    typedef graph_traits < Graph >::edge_descriptor Edge;
    typedef graph_traits < Graph >::vertex_descriptor Vertex;
    typedef std::pair<int, int> E;

    const int num_nodes = network.GetNodeCount();
    const int num_links = network.GetLinkCount();

    Graph g( num_nodes );
    property_map< Graph, edge_weight_t >::type weightmap = get( edge_weight, g );

    for ( size_t j = 0; j < num_links; ++j )
    {
        Edge e;
        bool inserted;
        tie( e, inserted ) = add_edge( network.GetLinkStartNode( j ), network.GetLinkEndNode( j ), g );
        weightmap[ e ] = (size_t) network.GetLinkWeight( j );
    }

    property_map < Graph, edge_weight_t >::type weight = get( edge_weight, g );
    std::vector< Edge > spanning_tree;
    kruskal_minimum_spanning_tree( g, std::back_inserter( spanning_tree ) );

    std::cout << "Print the edges in the MST:" << std::endl;
    for ( std::vector< Edge >::iterator ei = spanning_tree.begin(); ei != spanning_tree.end(); ++ei )
    {
        Link l( (int) source(*ei, g), (int) target(*ei, g), (int) weight[*ei] );
        mst.push_back( l );
    }
}

This implementation also provides a graphical representation of the nodes and their interconnecting links using standard MFC calls to draw the network links and nodes. Nodes are added by a left mouse click, and their integer identifiers are allocated in ascending numerical order.

Set the ‘weights’ of the interconnecting links by right-clicking the link and choosing ‘Link Properties’ from the drop-down menu. The default value for each link added is 1.0 units. The user also has the option to delete selected links via the same drop-down menu:

Once the link connection weights have been set, select ‘Kruskal’ from the main menu to give a graphical representation of the minimal spanning tree:

Sample Visual Studio 2003 code here:

Please note: for this to build properly and avoid linker errors, you will need to have the Boost libraries installed and set up for your Visual Studio project. Here’s how.

Using the software in Visual Studio 2010 / Other Issues

1. After you have downloaded and unzipped KruskalAlg.zip and tried to build it in VS 2010, the compiler whinges about “This file requires _WIN32_WINNT to be #defined at least to 0x0403”. Go into the stdafx.h file and change:

#define _WIN32_WINNT 0x0400

to

#define _WIN32_WINNT 0x0500

2. Check you have downloaded and installed Boost, and check your project dependency settings. In Project Configuration Properties, select the C/C++ -> General tab and check the setting for “Additional Include Directories”:

A Visual Studio 2010 version of the same is downloadable from here.

hey bro! didn’t know u moved to wordpress! “Graph1Dlg.h” file is missing in your KruskalAlg.zip file!
i can’t get Graph1.cpp to run because of the missing header. can u please check that? thanks! i am learning about boost libraries to write a program about spanning tree detection problem. 🙂
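For readers who just want the algorithm without Boost or MFC, the core of Kruskal's is compact enough to sketch in a few lines. Here is a language-neutral illustration in Python using a union-find structure (this is not the article's downloadable C++ code):

```python
def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v) tuples; returns the MST edges."""
    parent = list(range(num_nodes))

    def find(x):
        # Union-find lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    # Greedily consider edges in order of increasing weight.
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(4, 0, 1), (1, 0, 2), (2, 1, 2), (5, 1, 3), (8, 2, 3)]
print(sum(w for w, _, _ in kruskal(4, edges)))  # 8
```

The greedy step and the cycle check are exactly what `kruskal_minimum_spanning_tree` performs internally on the Boost adjacency list above.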
http://www.technical-recipes.com/2011/minimal-spanning-tree-kruskal-algorithm-in-c/
RDPMan is a simple utility to manage your remote shell connections. RDPMan Ver.2 supports the following:

To create a connection within RDPMan, simply drag and drop a saved RDP or VNC file into the main window. The connection will be added to the main window. Remember, once you have added the configuration for RDP, the VNC config for that connection object has still to be entered. While you will still be able to connect with VNC, your optimal output will not be reflected until you drop a VNC file onto your created object. You can also simply type in an internet address and click the Connect button, which will add your new connection to the main window and connect using RDP. In future versions, RDPMan will have a default connection and allow changing of the default connection option. Right now, RDPMan connects by default as RDP.

A compiled Exe file is available at:.

RDPMan was built using VB.NET. The guts of the application are in the connection.vb file, in which the connection class is built. The class contains properties for all the possible .rdp and .vnc file options. The connection class can be created without any options, like:

dim c as new connection

or created via an already created Registry key as:

dim c as new connection(subkeyname as string)

I have put together notes and whatnot for the main connection class. The presentation layer is pretty standard stuff that is probably covered much better than I could ever hope to; however, if you see anything in the downloadable source from above, please email me and I will do my best to provide insight.

Onto the main functions... Within the connection class, the following methods for filling connection objects exist:

FillFromVNC works the same way as the RDP version of this method, except it is used to read the VNC file format instead of RDP.

Public Sub FillFromVNC(ByVal f As FileInfo)
    'Read all the lines from the VNC File
    'and put them into a string array.
    Dim a As String() = File.ReadAllLines(f.FullName)
    'Run a loop on the string array that we just created
    Dim lc As Integer = 0
    Do Until lc = a.Length
        Dim str As String = a(lc)
        'Since the VNC file contains header sections as in [Options],
        'the If str.Contains("=") is needed to make the following
        'str.IndexOf("=") work correctly for reading in the options.
        If str.Contains("=") Then
            'A Select statement is run on the lines to get each
            'option and put the data into our collection object.
            Select Case str.Remove(str.IndexOf("="))
                Case "Host"
                    Me.FullAddress = str.Substring(str.LastIndexOf("=") + 1)
                Case "UseLocalCursor"
                    Me.UseLocalCursor = str.Substring(str.LastIndexOf("=") + 1)
                Case "UseDesktopResize"
                    Me.UseDesktopResize = str.Substring(str.LastIndexOf("=") + 1)
                Case "FullScreen"
                    Me.FullScreen = str.Substring(str.LastIndexOf("=") + 1)
                Case "FullColour"
                    Me.FullColour = str.Substring(str.LastIndexOf("=") + 1)
                '...

FillFromRDP takes a fileinfo object (dim f as new fileinfo(p as path)) and attempts to fill the connection object with the options presented in an RDP file:

Public Sub FillFromRDP(ByVal f As FileInfo)
    'FillFromRDP works very much the same as the VNC function of
    'the same name, with a couple of small differences. Since the
    'RDP file does not contain header sections, the line qualifier
    'statement If str.Contains("=") from the VNC function is not
    'needed, and the characters separating the parameters are
    'different; in this case, the : character is used.
    Dim a As String() = File.ReadAllLines(f.FullName)
    Dim lc As Integer = 0
    Do Until lc = a.Length
        Dim str As String = a(lc)
        Select Case str.Remove(str.IndexOf(":"))
            Case "screen mode id"
                Me.ScreenMode = str.Substring(str.LastIndexOf(":") + 1)
            Case "desktopwidth"
                Me.DesktopWidth = str.Substring(str.LastIndexOf(":") + 1)
            Case "desktopheight"
                Me.DesktopHeight = str.Substring(str.LastIndexOf(":") + 1)
            Case "session bpp"
                Me.SessionBPP = str.Substring(str.LastIndexOf(":") + 1)
            Case "winposstr"
                Me.WinPosStr = str.Substring(str.LastIndexOf(":") + 1)
            '...

SaveToReg saves the connection information and options into the application's Registry under HKLM\Software\RDPMan.

Public Sub SaveToReg()
    'Create a registry object from the registry class in
    'registry.vb to access the registry
    Dim reg As New Reg
    Dim key As String
    'Check to see if the connection object's guid = nothing.
    'If it is, create a new guid. This is done so that each
    'connection object gets its own unique registry subkey.
    If Me.Guid Is Nothing Then
        Me.Guid = System.Guid.NewGuid.ToString
        key = "Software\RDPMan\" & Me.Guid & "\"
        reg.CreateSubKey(reg.HKLM, Name)
    Else
        key = "Software\RDPMan\" & Me.Guid & "\"
    End If
    'Write values to the registry
    reg.WriteValue(reg.HKLM, key, "ObjName", Me.Name)
    reg.WriteValue(reg.HKLM, key, "UseLocalCursor", Me.UseLocalCursor)
    reg.WriteValue(reg.HKLM, key, "UseDesktopResize", Me.UseDesktopResize)
    reg.WriteValue(reg.HKLM, key, "FullScreen", Me.FullScreen)
    reg.WriteValue(reg.HKLM, key, "FullColour", Me.FullColour)
    '...

DeleteFromReg allows us to delete a connection object from the Registry:

Public Sub DeleteFromReg()
    Dim reg As New Reg
    'Since the registry keys for each Connection object
    'are created based on a GUID created in the SaveToReg
    'function, the DeleteSubKeyTree from the registry.vb
    'file included with this app can be run on the Me.Guid
    'property.
    reg.AppReg.DeleteSubKeyTree(Me.Guid)
End Sub

CreateRDPFile drops out an RDP file for the program's connectRDP function. This class has been made a public class, as one would conceivably run CreateRDPFile() in a backup or export routine.

Public Function CreateRDPFile() As FileInfo
    'This section of code creates an RDP File based on the
    '.RDP File Format that the RDP client drops out when
    'it is asked to save a connection. Note that this format
    'is different from the VNC file format.
    'RDP Options are entered in the .RDP file
    'like the following:
    'option:variable type:parameter
    Dim lines As String() = { _
        "screen mode id:i:" & Me.ScreenMode, _
        "desktopwidth:i:" & Me.DesktopWidth, _
        "desktopheight:i:" & Me.DesktopHeight _
    }
    'I have removed many of the options to make this readable.
    'Create a new .rdp file in the startup path of the application.
    Dim path As String = Application.StartupPath & "\~" & Me.Name & ".rdp"
    Try
        File.WriteAllLines(path, lines)
    Catch ex As Exception
        MsgBox("Cannot write RDP File")
    End Try
    Dim f As FileInfo = New FileInfo(path)
    Return f
End Function

Same as CreateRDPFile:

Public Function CreateVNCFile() As FileInfo
    'This section of code creates a VNC File based on the
    '.VNC File Format that the VNC Viewer drops out when
    'it is asked to save a connection. Note that this format
    'is different from the RDP file format.
    'VNC Options are entered into the .vnc file like the following:
    'option=parameter
    'Two header sections exist within the files, [Connection] and
    '[Options]; RDPMan mimics these lines in this method.
    Dim lines As String() = { _
        "[Connection]", _
        "Host=" & Me.FullAddress, _
        "[Options]", _
        "UseLocalCursor=" & Me.UseLocalCursor, _
        "UseDesktopResize=" & Me.UseDesktopResize, _
        "FullScreen=" & Me.FullScreen _
    }
    'I have removed some of the options to make this a bit shorter.
    'Create a new .vnc file in the startup path of the application.
    Dim path As String = Application.StartupPath & "\~" & Me.Name & ".vnc"
    Try
        File.WriteAllLines(path, lines)
    Catch ex As Exception
        MsgBox("Cannot write to VNC File")
    End Try
    Dim f As FileInfo = New FileInfo(path)
    Return f
End Function

ConnectRDP uses the CreateRDPFile method to drop out an RDP file in the application directory and then use the file to connect to a Remote Desktop. The ConnectRDP function returns the FileInfo object used, so that the presentation layer can delete the file on application close. There is probably a better way to do this if someone wants to provide a tip.

Public Function ConnectRDP() As FileInfo
    'Runs the CreateRDPFile method and attempts to run
    'the RDP file. This run is using the command
    'System.Diagnostics.Process.Start on the filename.
    'Since the mstsc.exe application itself is not mentioned
    'here, the file will be run based on the .rdp file association.
    Dim f As FileInfo = Me.CreateRDPFile
    Try
        Process.Start(f.FullName)
    Catch ex As Exception
        Throw
    End Try
    Return f
End Function

ConnectVNC is the same as ConnectRDP.

ConnectTelnet connects to the default Telnet port 23.

Public Sub ConnectTelnet()
    'The IntelliSense for the Process.Start command
    'notes that the second argument is something like
    'the username that one wants the application to run as,
    'but I found out from someone's article that the
    'function takes application parameters as the second
    'parameter as well in an overload.
    Try
        Process.Start("telnet", Me.FullAddress)
    Catch ex As Exception
        Throw
    End Try
End Sub
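Since the .rdp format handled above is just one "option:type:value" setting per line, the parsing idea is easy to sketch in another language. A rough Python equivalent of the FillFromRDP loop (illustrative only, not part of RDPMan):

```python
def parse_rdp(text):
    """Parse 'option:type:value' lines into a dict of option -> value."""
    settings = {}
    for line in text.splitlines():
        # Each setting looks like "screen mode id:i:2"; split on the
        # first two colons only, so values containing ':' survive intact.
        if line.count(":") >= 2:
            name, _type_code, value = line.split(":", 2)
            settings[name] = value
    return settings

sample = "screen mode id:i:2\ndesktopwidth:i:1280\ndesktopheight:i:1024"
print(parse_rdp(sample)["desktopwidth"])  # 1280
```

Splitting on the first two colons mirrors the article's use of IndexOf/LastIndexOf to separate the option name and type code from the parameter.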
http://www.codeproject.com/Articles/15772/RDP-Manager?fid=346023&df=90&mpp=10&sort=Position&spc=None&tid=1702258
Log message: ruby-nokogiri: update to 1.13.4.

Upstream changes:

1.13.4 / 2022-04-11

Security
* Address CVE-2022-24836, a regular expression denial-of-service vulnerability. See GHSA-crjr-9rc5-ghw8 for more information.
* [CRuby] Vendored zlib is updated to address CVE-2018-25032. See GHSA-v6gp-9mmm-c6p5 for more information.
* [JRuby] Vendored Xerces-J (xerces:xercesImpl) is updated to address CVE-2022-23437. See GHSA-xxx9-3xcr-gjj3 for more information.
* [JRuby] Vendored nekohtml (org.cyberneko.html) is updated to address CVE-2022-24839. See GHSA-gx8x-g87m-h5q6 for more information.

Dependencies
* [CRuby] Vendored zlib is updated from 1.2.11 to 1.2.12. (See LICENSE-DEPENDENCIES.md for details on which packages redistribute this library.)
* [JRuby] Vendored Xerces-J (xerces:xercesImpl) is updated from 2.12.0 to 2.12.2.
* [JRuby] Vendored nekohtml (org.cyberneko.html) is updated from a fork of 1.9.21 to 1.9.22.noko2. This fork is now publicly developed at https://github.com/sparklemotion/nekohtml

Log message: ruby-nokogiri: update to 1.13.3.

Upstream changes:

1.13.3 / 2022-02-21

Fixed
* [CRuby] Revert a HTML4 parser bug in libxml 2.9.13 (introduced in Nokogiri v1.13.2). The bug causes libxml2's HTML4 parser to fail to recover when encountering a bare < character in some contexts. This version of Nokogiri restores the earlier behavior, which is to recover from the parse error and treat the < as normal character data (which will be serialized as &lt; in a text node). The bug (and the fix) is only relevant when the RECOVER parse option is set, as it is by default. [#2461]

1.13.2 / 2022-02-21

Security
* [CRuby] Vendored libxml2 is updated from 2.9.12 to 2.9.13. This update addresses CVE-2022-23308.
* [CRuby] Vendored libxslt is updated from 1.1.34 to 1.1.35. This update addresses CVE-2021-30560.

Please see GHSA-fq42-c5rg-92c2 for more information about these CVEs.

Dependencies
* [CRuby] Vendored libxml2 is updated from 2.9.12 to 2.9.13.
Full changelog is available at libxml2-2.9.13.news
* [CRuby] Vendored libxslt is updated from 1.1.34 to 1.1.35. Full changelog is available at libxslt-1.1.35.news

Log message: ruby-nokogiri: update to 1.13.1.

Upstream changes:

1.13.1 / 2022-01-13

Fixed
* Fix Nokogiri::XSLT.quote_params regression in v1.13.0 that raised an exception when non-string stylesheet parameters were passed. Non-string parameters (e.g., integers and symbols) are now explicitly supported, and both keys and values will be stringified with #to_s. [#2418]
* Fix HTML5 CSS selector query regression in v1.13.0 that raised a Nokogiri::XML::XPath::SyntaxError when parsing XPath attributes mixed into the CSS query. Although this mash-up of XPath and CSS syntax previously worked unintentionally, it is now an officially supported feature and is documented as such. [#2419]

Log message: ruby-nokogiri: update to 1.13.0.

Upstream changes:

1.13.0 / 2022-01-06

Notes

Ruby

This release introduces native gem support for Ruby 3.1. Please note that Windows users should use the x64-mingw-ucrt platform gem for Ruby 3.1, and x64-mingw32 for Ruby 2.6-3.0 (see RubyInstaller 3.1.0 release notes).

This release ends support for:
* Ruby 2.5, for which official support ended 2021-03-31.
* JRuby 9.2, which is a Ruby 2.5-compatible release.

Faster, more reliable installation: Native Gem for ARM64 Linux

This version of Nokogiri ships experimental native gem support for the aarch64-linux platform, which should support AWS Graviton and other ARM Linux platforms. We don't yet have CI running for this platform, and so we're interested in hearing back from y'all whether this is working, and what problems you're seeing. Please send us feedback here: Feedback: Have you used the aarch64-linux native gem?

Publishing

This version of Nokogiri opts in to the "MFA required to publish" setting on Rubygems.org. This and all future Nokogiri gem files must be published to Rubygems by an account with multi-factor authentication enabled.
This should provide some additional protection against supply-chain attacks. A related discussion about Trust exists at #2357, in which I invite you to participate if you have feelings or opinions on this topic.

Dependencies
* [CRuby] Vendored libiconv is updated from 1.15 to 1.16. (Note that libiconv is only redistributed in the native windows and native darwin gems, see LICENSE-DEPENDENCIES.md for more information.) [#2206]
* [CRuby] Upgrade mini_portile2 dependency from ~> 2.6.1 to ~> 2.7.0. ("ruby" platform gem only.)

Improved
* {XML,HTML4}::DocumentFragment constructors all now take an optional parse options parameter or block (similar to Document constructors). [#1692] (Thanks, @JackMc!)
* Nokogiri::CSS.xpath_for allows an XPathVisitor to be injected, for finer-grained control over how CSS queries are translated into XPath.
* [CRuby] XML::Reader#encoding will return the encoding detected by the parser when it's not passed to the constructor. [#980]
* [CRuby] Handle abruptly-closed HTML comments as recommended by WHATWG. (Thanks to tehryanx for reporting!)
* [CRuby] Node#line is no longer capped at 65535. libxml v2.9.0 and later support a new parse option, exposed as Nokogiri::XML::ParseOptions::PARSE_BIG_LINES, which is turned on by default in ParseOptions::DEFAULT_{XML,XSLT,HTML,SCHEMA}. (Note that JRuby already supported large line numbers.) [#1764, #1493, #1617, #1505, #1003, #533]
* [CRuby] If a cycle is introduced when reparenting a node (i.e., the node becomes its own ancestor), a RuntimeError is raised. libxml2 does no checking for this, which means cycles would otherwise result in infinite loops on subsequent operations. (Note that JRuby already did this.) [#1912]
* [CRuby] Source builds will download zlib and libiconv via HTTPS. ("ruby" platform gem only.) [#2391] (Thanks, @jmartin-r7!)
* [JRuby] Node#line behavior has been modified to return the line number of the node in the final DOM structure.
This behavior is different from CRuby, which returns the node's position in the input string. Ideally the two implementations would be the same, but at least it is now officially documented and tested. The real-world impact of this change is that the value returned in JRuby is greater by 1 to account for the XML prolog in the output. [#2380] (Thanks, @dabdine!)

Fixed
* CSS queries on HTML5 documents now correctly match foreign elements (SVG, MathML) when namespaces are not specified in the query. [#2376]
* XML::Builder blocks restore context properly when exceptions are raised. [#2372] (Thanks, @ric2b and @rinthedev!)
* The Nokogiri::CSS::Parser cache now uses the XPathVisitor configuration as part of the cache key, preventing incorrect cache results from being returned when multiple XPathVisitor options are being used.
* Error recovery from in-context parsing (e.g., Node#parse) now always uses the correct DocumentFragment class. Previously Nokogiri::HTML4::DocumentFragment was always used, even for XML documents. [#1158]
* DocumentFragment#> now works properly, matching a CSS selector against only the fragment roots. [#1857]
* XML::DocumentFragment#errors now correctly contains any parsing errors encountered. Previously this was always empty. (Note that HTML::DocumentFragment#errors already did this.)
* [CRuby] Fix memory leak in Document#canonicalize when inclusive namespaces are passed in. [#2345]
* [CRuby] Fix memory leak in Document#canonicalize when an argument type error is raised. [#2345]
* [CRuby] Fix memory leak in EncodingHandler where iconv handlers were not being cleaned up. [#2345]
* [CRuby] Fix memory leak in XPath custom handlers where string arguments were not being cleaned up. [#2345]
* [CRuby] Fix memory leak in Reader#base_uri where the string returned by libxml2 was not freed. [#2347]
* [JRuby] Deleting a Namespace from a NodeSet no longer modifies the href to be the default namespace URL.
* [JRuby] Fix XHTML formatting of closing tags for non-container elements. [#2355]

Deprecated
* Passing a Nokogiri::XML::Node as the second parameter to Node.new is deprecated and will generate a warning. This parameter should be a kind of Nokogiri::XML::Document. This will become an error in a future version of Nokogiri. [#975]
* Nokogiri::CSS::Parser, Nokogiri::CSS::Tokenizer, and Nokogiri::CSS::Node are now internal-only APIs that are no longer documented, and should not be considered stable. With the introduction of XPathVisitor injection into Nokogiri::CSS.xpath_for, there should be no reason to rely on these internal APIs.
* CSS-to-XPath utility classes Nokogiri::CSS::XPathVisitorAlwaysUseBuiltins and XPathVisitorOptimallyUseBuiltins are deprecated. Prefer Nokogiri::CSS::XPathVisitor with appropriate constructor arguments. These classes will be removed in a future version of Nokogiri.

Log message: textproc/ruby-nokogiri: reduce dependency

Depends on devel/ruby-racc only on ruby26, since Ruby 2.7 and later contains racc as a bundled gem.

Bump PKGREVISION.

Log message: revbump for icu and libffi
A significant new part of Perl 6’s object model and type system is the addition of roles. Part of their origin is an implementation in Smalltalk (there called traits). They also solve some systemic problems of other OO systems. Why are they useful and how do they work? What is a Role? A role is a named collection of behavior — a set of methods identified by a unique name. This resembles a class or a type, in that referring to the role refers to the combined set of behaviors, but it is more general than a class and more concrete than a type. Put another way, a role is an assertion about a set of capabilities. For example, a Doglike role identifies the important behavior that any doglike entity will possess: perhaps a tail attribute and the methods wag() and bark(). Even this much alone is a great advantage of using roles. Presuming you have a method that needs something Doglike, you can ask “Does the class or object I receive behave in a Doglike fashion?” and take the appropriate action. Bad Approaches to The Problem You might already have encountered this problem. A common pattern is to create an abstract Dog object and require that any doglike object inherit from that Dog object. This allows you to check that the object or class you receive inherits from Dog. This “solves” the problem in a bad way in all but very simple systems. Problems with Inheritance In a single-inheritance system, where a class can inherit from one and only one ancestor, the abstract base class strategy takes away your option of inheriting from any other ancestor, even if another would make more sense. In a multiple-inheritance system, if you need to mark a particular class as Dog-like you now have the potential for weird method dispatch resolution errors. Can you guarantee that you’ll only ever get the right methods at the right times, or that no one will ever add another method to an ancestor class and override the right method?
Suppose Dog itself extends Mammal and someone adds a method to that with the same name as a method in a class appearing after Dog in your class’s list of ancestors. Changes are fragile and the effects can appear far, far away. Even further, there’s a design smell with this approach: it forces a particular class design strategy upon other classes. You know that you have the behavior of a Dog because of inheritance, but there are many other ways to get Dog-like behavior. You can reimplement the methods yourself. You can use a delegation or proxying relationship to contain one or many Dog objects and wrap all accesses to their methods. You can use a mock-Dog pattern for testing. Requiring that all doglike entities actually specifically inherit from Dog–when all your code really cares about is that the objects or classes behave in a Doglike way–worries too much about how the classes and objects behave, not what they do. Inheritance is one way to ensure that a class or object implements a known interface appropriately, but the important part of polymorphism is that you don’t know how an implementation works, only that it does the right thing. Forcing the use of inheritance outside of your classes is like expecting every iterator to maintain an internal stack of items to return or every webserver to be Apache httpd, rather than being able to call next_element() or speak HTTP. (Perl 5 allows you to overload isa() to get past poorly-designed systems that violate the encapsulation of classes and objects they merely use, but there are plenty of error patterns around that method too.) Problems with Duck Typing Other systems wisely eschew checking the structure and implementation of objects and classes–specifically, how they provide an interface–but substitute checks for the presence of specifically-named methods. This is duck typing (or very loose typing). Duck typing argues that anything that can bark() is obviously a Dog.
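The duck-typing premise is easy to sketch in Ruby, a language the discussion below mentions. This is an illustrative sketch, not code from the article; the Dog and RobotDog classes are hypothetical stand-ins. The check asks only whether an object responds to a message, not what the object is:

```ruby
# Duck typing: accept anything that responds to :bark,
# regardless of its class or ancestry.
class Dog
  def bark
    "Woof!"
  end
end

class RobotDog
  # No relationship to Dog, but it quacks -- er, barks -- the same way.
  def bark
    "Beep-woof!"
  end
end

def make_noise(animal)
  raise ArgumentError, "not doglike enough" unless animal.respond_to?(:bark)
  animal.bark
end

puts make_noise(Dog.new)      # => Woof!
puts make_noise(RobotDog.new) # => Beep-woof!
```

Both calls succeed because the check is purely about the presence of a bark method, which is exactly where the trouble described next begins.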
That works sometimes, but imagine that your system includes, besides the Dog class, a Tree class. Tree objects have the bark attribute available through the bark() accessor. You can call the method bark() on a Tree, just as you can on a Dog, but the two methods, however similar in name, have completely different semantics. They’re false cognates. Objects of either class are not substitutable. Duck typing ignores this. In practice, this may not often be a problem. Duck typing advocates argue, correctly, that effective testing strategies catch such errors in the rare cases when they occur. However, giving up the association between class name and method name is a loss of expressivity. There are already perfectly-serviceable namespaces for both the Dog’s bark() and the Tree’s bark(). Why should a language lose that information if it’s already in the system? Why should an implementation ignore that information, if it’s easy to use? Problems with Java Interfaces The designers of Java apparently identified this problem, but invented new problems when trying to solve it. Java interfaces are abstract, named collections of method signatures (and little else). They allow you to identify a collection of behavior with a name and request that a class or object implement that behavior. The problems come mostly from the poor design and implementation. Classes and interfaces occupy separate namespaces in Java, to some degree. The syntax for querying that an object or class extends another class is different from the syntax for querying that an object or class implements an interface. This approach also requires that you know too much about how an object or class performs specific behaviors. This is most painful when dealing with code you cannot change. 
If the code explicitly requires that a passed-in object or class inherit from another class–that is, the function signature specifically names a class or error-checking code at the start of the function explicitly checks that the argument is or inherits from a specific class–you cannot pass in a class or object that uses another object strategy. You’re stuck. Likewise, if the immutable code requires that your class or object implement an interface, that’s your only option. You’re slightly better off that way, but it’s not perfect either — and it’s rarely the default case, as it takes much more work to define an interface and use it than to define a class. If the library requires a specific class, not an interface, and someone has declared that class final, you’re in real trouble. This is perhaps the worst possible case with all of Java’s language features. This anti-social behavior is evident even in parts of the Java standard library. Another sin of Java’s interfaces is that they disallow code reuse. When the option is between allowing object implementation strategies other than inheritance but duplicating code and forcing the use of inheritance in related code but not duplicating code, people tend to take the somewhat-obvious latter choice. Then again, Java has always been verbose, and the current state of the art for Java programming appears to be to use ever-more-powerful IDEs to generate boilerplate code, so one argument is to use better tools to work around a language’s design deficiencies. What Roles Actually Do Think of what code does in terms of roles — collections of supported behavior. A role is an interface, in the behavior sense. It’s a guarantee that anything for which the question “Are you Doglike?” is true will behave in a Dog-like way.
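Ruby modules make a rough stand-in for roles, close enough to sketch the “Are you Doglike?” question. All names here are illustrative: a Tree can carry a bark method without ever claiming to be Doglike, and the role-style check respects that distinction where the duck-typing check cannot:

```ruby
# A module used as a role: a named bundle of behavior.
module Doglike
  def bark
    "Woof!"
  end

  def wag
    "wags tail"
  end
end

class Dog
  include Doglike  # Dog explicitly performs the Doglike role.
end

class Tree
  # Same method name, completely different meaning: a false cognate.
  def bark
    "rough, covered in lichen"
  end
end

# Duck typing cannot tell these apart...
puts Dog.new.respond_to?(:bark)  # => true
puts Tree.new.respond_to?(:bark) # => true

# ...but asking "Are you Doglike?" can.
puts Dog.new.is_a?(Doglike)  # => true
puts Tree.new.is_a?(Doglike) # => false
```

The is_a? check here is the moral equivalent of the does() query discussed below: it asks about a named collection of behavior rather than about a method name or a position in an inheritance hierarchy.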
More concretely, it’s the difference between saying “This method only ever takes String objects and prints them to a log file” and saying “This method only ever takes Stringifiable things, stringifies them, and prints them to a log file.” The former cares how it works. The latter cares that it works. Perl makes this easier with automatic coercion when you use an object in a string context, but you have to remember to overload the stringification operator. Languages without automatic coercion make this much more difficult. Roles can also provide behavior. This is an option, not a requirement. A Serializable role can provide default implementations that know how to inspect hash-based objects in Perl 5, while allowing for other objects or roles to perform the Serializable role for array-based objects, for example. To the outside world, it only matters that objects of both roles are Serializable, not that one uses a hash and one uses an array. This is the second great philosophical realization about roles: not all useful behavior fits into inheritance hierarchies. Code reuse is more important than inheritance, so there should be ways to reuse code in situations where inheritance is not useful or helpful or necessary. As well, decomposing collections of behavior into roles can help to improve design in such a way that a class can compose only the roles it needs directly, rather than accidentally inheriting several unrelated methods. Role Implementation Notes There are two important parts of working with roles. The first is marking a class or object as performing a role. The second is querying that a class or object performs a role. One possibility for marking a class or object as performing a role is allowing the compiler to detect this automatically, based on duck typing likelihoods. 
This suffers from the same false cognate problem as normal duck typing; it’s unlikely that static analysis could identify all of the potential roles in a system as well as identify good names for them. (Names are for humans, not computers.) A likelier solution is to require specific declaration of roles and annotation of classes and objects that perform those roles. This is an implementation detail of the class, so it’s okay to use different syntax for “this class extends this ancestor” and “this class implements a role”. Maintaining the distinction for the querying side may help people design better code by being more thoughtful about roles. In addition, there may be cases where explicit queries about inheritance relationships may be useful (perhaps in reflection or editor-support analyses). Using a separate meta-method, such as does(), seems effective. As for the actual implementation, marking a class as performing a role adds the behavior of that role to the class immediately, at compile-time. For each method the role provides, that method becomes an available method on the class with the role’s implementation, unless the class already provides that method. Obviously a class can provide its own implementations for every method of a role and still implement that role — the same goes for delegation or aggregation relationships. The role resolver merely has to know, when it runs, that the class unambiguously performs the behavior of the role. Roles may have dependencies as well. Resolving them happens at compile time. A role may require that the class implementing that role also implement other roles. A Sortable role may require the implementation of a Comparable role, for example, to be sure that any existing compare() method does the right thing. Checking only for the existence of a compare() method may suffer from the false cognate problem. Of course, the ambiguous bark() problem needs a solution as well. 
The role resolver must detect conflicts in role method names and require disambiguation. When you try to define a DogwoodTree that does both Doglike and Treelike, you must disambiguate explicitly, whether composing in one method or the other or providing your own bark() method that redispatches to the appropriate parent depending on context. Advanced Roles For optimal benefit in the system, any type checking (such as in method signatures) should query for roles first, then inheritance. A type system that cares more about capabilities than structural identity removes an entire world of pain–specifically the case when you need to work with a library you cannot modify that did not explicitly disallow the use of roles. Now you have to use more syntax and do more work to prevent people from using roles. Classes Imply Roles If the does() question falls back to the isa() question (that is, if a type check first queries that the entity performs a role and then that it or its class inherits from an ancestor), there’s an implicit relationship between roles and classes. In other words, every class explicitly defined in the system should implicitly declare a role of the same name. More likely, classes and roles occupy the same namespace. A class is a specialization of a role that is instantiable. Both classes and roles provide names for and define collections of behavior. This also implies that it should be possible to declare that one class performs the role of another class. A CustomerProxy class may explicitly say that it does Customer. Regardless of the object strategy the code uses, the type system should consider that it behaves as a Customer and is usable any place that requests a Customer without modifying that code. In a language with class-implies-role and a role-based type system, this requires almost no syntax:

class SomeClass {
    method some_method { ... }
}

class AnotherClass does SomeClass {
    method some_method { ... }
}

The unmodifiable library problem moves further away. Runtime Application In a robust object system with a capable meta-model (basically a formalized system for customizing the behavior of classes and objects), it’s even possible to apply a role to an object at runtime. Why would you do this? The obvious answers are decorating an object with logging or debugging code, but consider all of the uses of the Decorator pattern. I recently built a small game using this pattern. Each enemy element had two parts: a shape and color, and movement behavior. The visual portion used a specific role and the movement portion used another role. The game used the Factory pattern to create enemies and applied a random visual and movement role to each. Thus I only had to create an abstract role for each role type and several concrete roles for each type of movement or visuals. Adding the roles to the objects allowed for several combinations of behavior — n times m combinations, where I only had to write n plus m roles. In systems that do not allow runtime generation of new classes, it’s still possible to get some of the benefit of runtime role application with code generation and more use of the Factory pattern. Parameterization As long as a role provides the appropriate methods at role composition time, does it really matter how the role provides those methods? In a sufficiently dynamic language, it could generate them based on the phase of the moon (probably not useful apart from a Lycanthropic role), the program’s configuration file (much more useful), or even arguments passed to the role. If you’re going to factor out common code into a role, why not go one step further and allow the parameterization of that code? When you apply a role to a class or an object, you can add arguments to the role to gain even more specific behavior. For example, if you have a Loggable role, why not pass a filehandle or the name of a log file to the role at composition time?
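This kind of parameterization can be sketched even in Ruby, where Module.new can build a role's methods around arguments supplied at composition time. The Loggable helper below is hypothetical, not a real library API; it simply closes over the log destination it is handed:

```ruby
require 'stringio'

# A parameterized "Loggable" role: the log destination is captured
# by the generated module's methods at composition time.
def Loggable(io)
  Module.new do
    define_method(:log) do |message|
      io.puts("[#{self.class}] #{message}")
    end
  end
end

buffer = StringIO.new

class Service; end

# Compose the role into the class, passing the argument now.
Service.include(Loggable(buffer))

Service.new.log("started")
puts buffer.string # => [Service] started
```

Each call to Loggable builds a fresh module, so two classes can compose the "same" role with different log destinations without interfering with one another.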
If you have a role that performs internationalization and localization, why not pass the name of the string library or the language to use? There are plenty of possibilities and dozens of potential patterns awaiting discovery and categorization. Conclusion Despite the heavy insistence of too many OO tutorials, inheritance isn’t the fundamental principle of object orientation. Polymorphism is. Supporting roles at the language level itself provides yet another way of promoting polymorphism and providing design tools to build better systems. (Yet polymorphism based strongly on class identity loses most of its power; it’s allomorphism that matters — nominal typing over structural typing.) Thinking in terms of capabilities and named collections of behavior can help you divide your systems into smaller, self-contained entities. These don’t have to be classes themselves, nor do they necessarily have to have ancestor or descendant relationships with other entities in your system. Decreased coupling, increased cohesion, improved code reuse, and more ubiquitous and automatic polymorphism are fine design goals. Roles encourage them. An interesting discussion, to which I have questions rather than comments: is there any particular literature on which this entry is based? To what extent are roles implemented in Perl6 (I read that Perl6 is "on the way")? Is there any syntax specification for it (which I haven't found in a few minutes; only the following page: containing something on roles). Thanks. You know, I'd argue that 'polymorphism' (especially the sort practiced by static, pseudo OO languages) has become such a debased term that it's better to think of the real essence of object orientation as late binding - the target object decides, at runtime, how to deal with a message. Which puts Multimethods in a slightly weird place, but they're slightly weird anyway, so that's okay.
Piers, I thought of explaining it in terms of genericity, but C++ has abused that so much (along with the notion of "compile time") that polymorphism seemed a better word. Hi chromatic Are roles the same as mixins supported by Ruby? If not, what's the difference? So we have the term role, used to constrain the behaviour of objects. Since Terry Halpin (recently at Microsoft, now returned to Academia) created Object Role Modeling, is this terminology consistent with that? chromatic-- Thanks for writing this. It's definitely a useful description of the functionality of roles. After reading, I do have a couple of questions that I hope you can help me with. First, you say: Would you be willing to expand on the relationship between types and roles? Is the functionality of types a subset of the functionality of roles, or are the two really synonyms? Second, Synopsis 12 gives an example of using an anonymous subtype in a signature: sub check_even (Num $even where { $^n % 2 == 0 }) {...} In your view, is this the same as declaring that $even does the Num role, declaring an anonymous role whose only parameter is constrained to be even, and then asserting that $even does that anonymous role? Is this an implementation issue or is dispatching types to roles central to the functionality you want roles to provide? Thanks for any feedback, /au For an entry point to literature on roles, you may have a look at Prose, Ruby mixins are similar to roles in that they add methods to classes (or instances, but then there's an anonymous parent class added) but different in that they do not participate in the type system. That is, to my knowledge, there's no built-in system in Ruby by which you can specify that a class or object must have mixed in a particular named bundle of methods. There's duck typing, but it has the false cognate problem. aufrank, the connection between types and roles is that all types are roles. That is, to declare a type, you declare a role (or a class, or...).
A role is a named collection of behavior about which you can reason certain things. That's the same thing as a type, at least if you buy into nominal typing. That's my reasoning anyway. As to your second question, I say that the signature declares an anonymous role that $even must perform. It's the job of the method dispatcher to check this. (In this case, it may be wise to check for Num-ness -- sorry -- before checking the subtype constraint.) I may eventually regret suggesting that the words "role" and "type" are somewhat synonymous here. What method were you using to check them? I tested the instanceof operator and it works on classes I've extended and interfaces I've implemented. More on this in a moment. Recently, I was looking at Swing to build a simple GUI app in Java. One of the things recommended in Swing tutorials is the use of javax.swing.SwingUtilities.invokeLater() to asynchronously execute things on the GUI thread by adding them to a queue. invokeLater has a signature of public static void invokeLater(Runnable doRun) Runnable is an interface, but you can't tell that by looking at invokeLater's method declaration. To test instanceof's behavior, I wrote the following two objects. One of the objects implements Runnable and the other has a main thread to create an IRun object to test it using instanceof. (The following text should contain tabs, but in the preview, something was still stripping them out, even inside preformatted text tags.) As you can see, doRun instanceof Runnable returns the boolean value true, which is implicitly converted to a string by System.out.println(). Powerlord, I just checked with Blackdown Java 1.3.1 (the latest version that I can convince to run on Linux/PPC, unfortunately), and you're right. My concern in the article was that you cannot use an interface name as a type in the signature of a method, but I just tested that and it worked correctly.
I still dislike the separation implied by implements and the abstractness of interfaces, but my other criticism was clearly wrong.
Update: I’ve announced react-pacomo, a solution for some of the problems with CSS outlined here, without the downsides of Inline Style. So one of the hottest topics in the React world lately is Inline Style, i.e. setting styles with an element’s style property instead of CSS. This new and shiny way of doing things promises to make your life easier. It eliminates the bugs caused by global styles, it allows you to package styles with your components, and it unifies the entire development process under the single language of JavaScript, fuck yeah. And now that all the cool kids are using it, it’s time to jump on the bandwagon too! But don’t just take my word for it! See for yourself with this handy dandy list of all the problems which you could have fixed with plain old CSS if you hadn’t drunk the Kool-Aid, and the new problems you’ll now have to deal with too. Problems which Inline Style didn’t have to solve Everything in CSS is a global Let’s get this one out of the way first, because it is always the first thing that comes up in any list of reasons not to use CSS. Yes, everything in CSS is a global:

.active {
  background: red;
}

/* ... lines and lines of unrelated code ... */

.active {
  /* oops */
  background: blue;
}

So namespace your classes and deal with it:

.app-NavItem-active {
  background: red;
}

/* ... lines and lines of unrelated code ... */

.app-ContactForm-active {
  background: blue;
}

While this may look long-winded, if you’re using SCSS or LESS then the parent selector with a dash (&-) will make you wonder what you were ever worried about:

.app-ContactForm {
  &-active {
    background: blue;
  }

  &-saving, &-fetching {
    /* style goes here */
  }
}

And on the React side, there are tools to generate these scoped class names for you. Or if you’re stuck outside the kingdom of React, CSS modules will accomplish basically the same thing.
But then again, sometimes you want global styles, like this:

@import url('');

Or this:

html, body {
  position: relative;
  height: 100%;
  min-height: 100%;
  font-family: Roboto;
}

And good luck implementing these with Inline Style. Cascading gets in the way There are a number of meanings of “cascading”, so let’s pick one: styles sometimes inherit their value from parent styles. For example, defining font-family: "Comic Sans MS" on body will cause most everything on your page to be cheerful and happy. Some people don’t like this, because it means that your component may end up with style which you didn’t specifically give to it. But style will still cascade, even when defined with Inline Style. So why is Inline Style a benefit? Well, Inline Style lets you write JavaScript helper functions to ensure that your components overwrite any inherited styles with their own styles. But seriously, let’s think about this for a moment: all you’ve really done is re-implemented cascading through JavaScript helper functions. Though at least it’s written with JavaScript now, amiright? So let’s move onto the next meaning of cascading: if you create a selector for ul li, then your styles will be applied to ul li ul li and ul li ul li ul li too. Selectors aren’t very specific. Easy fix: don’t use HTML elements in selectors. Instead, give your elements namespaced CSS classes which double as a way of documenting what they actually do. Your code will be clearer and more precise as a result. Inline Style can be included directly from your component modules This is actually a huge benefit. Nobody likes maintaining entirely separate project trees just for their CSS, or managing CSS files which live in completely separate locations to their corresponding JSX. But now that we all have access to wonderful tools like Webpack, this benefit isn’t limited to Inline Style anymore.
In fact, I place my LESS files right next to my JSX (or JS with Angular), and then include them like this: import './DocumentForm.less' But James – you tell me – those LESS files still depend on other LESS files which are separate to your components. And it isn’t like you can use JavaScript-loaded styles in production! Half true. The LESS files do depend on other LESS files – but only ones included like this: @import (reference) '~cloth-util-less/index.less'; That little (reference) ensures that no output is generated by the @import; it is just a way of importing variables and mixins. This allows each component’s CSS to be completely independent of the other components. And with Webpack’s ExtractTextPlugin, the generated CSS is all extracted to a real CSS file. Which can be cached separately! Have fun caching your inline styles separately. Inline Style lets you use JavaScript, so you won’t have to learn CSS Except that you will. Seriously, what are you people smoking?! See those property names in your Inline Styles? They’re CSS. Ok, maybe you won’t have to learn media queries and pseudo selectors. But instead, you’re going to have to learn the equivalents for whatever framework you choose. Or maybe you won’t – because if you don’t use an Inline Style framework, you’ll spend so much time re-implementing media queries and pseudo selectors that you’ll know how they work back to front. Inline Style gives you more power This is about the only valid point that people make. You do get more power with Inline Styling than you would with CSS, or even with CSS compilers like LESS, SCSS or PostCSS. But do you really need it? No website is complete without a terrible car analogy, and this might be my only chance to ever write one, so here goes: CSS is a moped. It sucks, but it is what you start out with. It is dangerous, slow, and it quickly teaches you why you need more power. LESS/SCSS is a Japanese car. 
It is reliable, it gets you where you need to go, it won’t need fixing any time soon, and it’ll still hit the speed limit if you want it to. Inline Style is this: Problems you would have avoided if you just used boring old CSS You can’t use any existing tooling Sure, CSS is a bit shit. But that is why people have made a number of amazingly useful tools which aren’t: Maybe you don’t use any of these tools. Which would surprise me, because it would be silly to jump on this bandwagon without having at least tried the industry standard solutions first. But let’s say you haven’t, and then a brilliant new tool arrives which targets CSS. Too bad. But tools aren’t all you’ll be missing. You can’t use any existing CSS So you know all those styles you’ve written over the years? You can kiss them goodbye. But maybe you don’t write many styles, so it’s all good. Except, you know all those styles that other people have written over the years? You may as well forget they ever existed, because they’ll be incompatible with yours. But no worries, it isn’t like you have to put all your style in Inline Style. You can just gradually move your style over, and pull in parts of the occasional CSS library when it makes sense. Except that you can’t, because: Inline Style is infectious So you know how you can define CSS styles with a number of priorities? Element selectors have the lowest priority — you can override them with CSS classes. Classes can be overridden with IDs, which can in turn be overridden with !important. And you know what overrides everything? Inline Style. Now you may be thinking, “That’s great, because now I never have to worry about anything overriding my styles”. Which is true, but you wouldn’t have to worry about this anyway if you just namespaced your classes properly. Notice a pattern here? But you will have to worry if you ever need to apply third-party styles, because Inline Style will kill them dead. 
And if you want to add some CSS to a component which defines its own Inline Styles? You’re shit out of luck. And if you want to override anything without passing in options as more Inline Styles? You just can’t. And this might not be a problem for you. But unless your project is just a toy, you’re not the only one who counts. Designers speak CSS If you work on a team of anything other than the most talented people, you’ve probably already experienced this. In fact, you’re probably reading this rant for entertainment, not education. I hope you enjoyed the show, and don’t forget that sharing cat videos and JavaScript rants is caring! But let’s say you’re a lone front-end developer experienced in both CSS and JavaScript. You’re building something which you’d like to eventually pay the bills, and scaling sounds like a problem you’d like to have. Part of scaling is that your code will at some point be handled by other people. And even if you don’t find anything in this article scary, everyone else certainly will. But CSS still has problems too! You’re absolutely correct. The thing is, there are better ways of solving these problems than using Inline Style. I’ve already alluded to a few of these solutions:

- Using LESS to avoid repetitive style definitions
- Using React to transform your CSS classes into namespaced CSS classes
- Using Webpack to modularise your CSS

Not sure how to accomplish these? You could learn them the hard way: with Google and trial and error. Or, you could learn them the easy way: Join my Newsletter, then read the guides I’ll be sending out over the next couple weeks on namespacing and modularising CSS.
- Introducing react-pacomo: Automatic namespacing for className
- Webpack Made Simple: Build ES6 & LESS with autorefresh in 26 lines
- Choosing a JavaScript Build Tool – Babel, Browserify, Webpack, Grunt and Gulp
- Learn Raw React

I started to use inline styles only for some layout properties (positioning and sizing), aside my classical SCSS styles that define the graphical charter and the other styles. CSS specificity is really useful to define color schemes, alternative states, modifiers etc… but from my experience, it is terrible with layout. When dealing with layout, you often want to set several properties at the same time, like position-top-left-margin or display-width-height. If one of these is overridden, things quickly become complex. When working with a component approach, where the HTML/JS and scoped CSS are bound together, using inline styles for layout gave me good results until now. This is also an approach used by some JS UI libraries that work with layout, like Masonry or jQuery.equalize. I agree 70% of the time. There are times when you have some fancy component where it totally makes sense to use the inline-styling. An example would be a calendar component. I want to be able to build 1 calendar component and have it ready to be used in all my projects, without fiddling with a million different css files. I think styling components from javascript is the future, because it makes everything easier in the same way JSX or “inline markup” makes things easier. You can use inline styles and if you need the features of stylesheets, use You can even define your “jss” in “css” by using es6 template strings (which even enables you to use variables) and converting the “es6 template string css” into “jss” on the fly using Honestly, all the disadvantages you mention will go away and the benefits stay. I agree, that maybe it’s not ready for prime-time, because it needs some more “user friendliness” and usage examples.
CSS is doing a lot of things for you that you are probably taking for granted. If you want the benefits of both, but the drawbacks of neither, I strongly encourage you to check out CSS Modules.

CSS Modules are better than CSS, but they don’t fix CSS for real. CSS Modules cannot be customized by passing in arguments, so a “button component” cannot receive its “box-shadow” as an argument. CSS Modules also cannot adapt styling based on real-time user interaction. Both features you get with JavaScript, and that’s why JSS and co. are so awesome. In the past, data binding or “reactivity” was not important, and now it’s state of the art. My bet is that the same will be true for styling in the near future.

Except that CSS is addressing these lacks through WEB STANDARDS right now. See these standards. Not only do CSS variables (aka custom properties) bring variables a la preprocessors to CSS, variable values have the added benefit of being both scoped and live variables, meaning CSS can adapt styling based on real-time user interaction. This has already shipped in Chrome.

Variables are nice for simple theming, but more complex needs are common. There are properties within themes which are inter-reliant. Or maybe a user wants to add a box shadow the default theme didn’t have, and therefore a component didn’t provide a variable. (Plus, if components provided variables for EVERY theming possibility, it’d be reinventing CSS properties via variables: not possible, and a mess.) A good solution, simulatable with Sass mixins, is a “block variable”: a variable that can contain any rule ad hoc. This proposal will bring this powerful and elegant feature to CSS, which, like the CSS variables above, will be dynamic and can respond to real-time user interaction and to scope. …to name two.

No solution that requires an investment in a commercially supported framework will ever be a better option than an open web platform standard. The article covers a brief history supporting this assertion.
I guarantee to you: inline CSS is a fad. React will eventually go the way of jQuery. Web standards last and are a better economic choice for businesses. Separation of concerns, enough said.

True 🙂 Separation of concerns is very important; I just don’t get what it has to do with the above article.

I used to think that way until I really gave React a chance, but now I’m convinced everything should be grouped with the component that is using it. Separation of concerns only matters if your code isn’t very modular, but I’ve been very happy not having to open up one giant rat’s nest of a CSS file or a million SASS partials. Just give the React way a real shot and you’ll see why everyone is switching.

+1 🙂 or any other JavaScriptish solution. You get all the power of preprocessors with a powerful, familiar language (JavaScript). You get real-time updates based on user interaction. Styling calculations and other things can be put into npm modules and easily shared and re-used.

“…why everyone AT FACEBOOK is switching.” Corrected for you.

Not arguing. You make great points, but this helps solve one of the issues you mentioned… I agree with SERAPATH’s comment that “the disadvantages you mention will go away and the benefits stay.” We keep marching forward, solving (and creating) problems along the way.

So we started rebuilding v2 of our app with React/Redux, and one of the devs was set on only inline styles. At first I was against it, but then I started to really appreciate it. To counter some points you made: writing our own media queries in JS took one day, and I made a sweet flexbox layout helper system in an hour, which is the most painless experience I’ve had doing layouts. You do raise some valid points, but why in the world does it have to be all CSS or all JS? We use each where it makes sense.
We have inline styles on almost all components and pass props when needed, but we do have two components where it just made more sense to import a SASS file (plus we run with one global CSS file for resets and basic styles). So why does it have to be one way or the other? I’m in love with the combination right now; it’s the most fun I’ve had working in a while.

A few people have mentioned that they’ve been using both together, and the pattern of inline style for layout, CSS for theming comes up a bit. It certainly does sound like there may be some upside to this approach which I’ve glossed over. Possibly because layout can be thought of independently of theming?

This could potentially work, but at that point you already have separate CSS, so why split things up like that? When it doesn’t look quite right you now have to look in two places to find the code to fix!

Agreed. If you find yourself wanting to add some CSS to a DOM element, don’t. Namespace your CSS if there’s just too much to deal with. If you must, write your styles to a `style` tag in the head.

Hi guys, I have a problem. Can you help me? How can I put many namespaced CSS modules into one `style` tag in the `head`?

Just started spec’ing out a component/decorator to deal with this… 🙂 Comments welcome.

The only issue here is React Native. You need inline styles for that; there is no other way as far as I’m aware.

Yep. That said, outside of layout (which inline style seems to do a good job of), it might make sense to make separate presentation components for both native and web – they’re different, after all. I’m going to be looking into the React Native conundrum a little more. Stay tuned for an update.

Is there anything on the React Native front?

I agree with the comments from Sylvainpv and Anton Volt: CSS is great for styling like fonts and colors but a huge pain for layouts.
On the other hand, I have experienced React components which used inline styles in a restrictive way: hard-coded fonts and font sizes without a way to override them to match your own application’s fonts. All in all, I think inline styles are very valid for layouts, and not so handy for theming.

I’m coming around to the idea that maybe inline style isn’t so bad for layout. You’re spot on, though, with your observation of components which hard-code things like fonts and colors. They’re a perfect use case for CSS using pacomo, and it certainly doesn’t make sense to use inline style with them.

I’ve been interviewing JavaScript application developers for the last four years, and the majority of them appear to be incapable of writing viable CSS at all. Forget them having the sophistication to actually architect CSS intelligently, making appropriate choices about which rules to consider essential layout and which rules to consider optional theme. How do they get products to market? In most cases the answer appears to be Bootstrap or some other framework/modules of its ilk. Good luck using a pre-written CSS grid system for layout with inline CSS. Oh, but we will eventually finish reinventing the wheel for inline CSS, and then everyone will see what a great idea inline CSS really is! Never mind that whole “reinventing the wheel” requirement.

Now, sure, I don’t personally need a CSS framework and can both write and architect good, change-tolerant CSS. I could probably use inline CSS in a very effective way. But I see downsides, too, and am aware of better standardized solutions coming down the pipe for most of my CSS conundrums. Not to mention there’s a typical learning curve fraught with unknown unknowns (distributed across multiple people and new hires) when making such a fundamental change. Do the potential benefits outweigh this? Not for most applications. I’m comfortable saying that most people evangelizing inline CSS are the last people I’d trust for CSS advice.
If you only have a hammer, every problem looks like a nail, or so I hear. There’s not enough nuance in any fandom, even in tech industry cargo cults. However, I’m essentially biased against developers embracing React and its JavaScript-obsessed, HTML-and-CSS-phobic mindset in numerous fundamental ways.

@USCAREME This is the only person here that makes sense. Everyone else sounds like noobs, JS developers who don’t know CSS, or some combination. Inline styles are horrible. Why would you want that? You can just as easily have locally scoped CSS when using external stylesheets, by using CSS namespaces. The guy who said using CSS namespaces is bad is wrong. How else are you supposed to prevent CSS collisions? Everyone has an opinion. The React way of doing HTML and CSS is wrong and will not last the test of time. The Angular 2 way is also wrong. The best way is traditional external CSS with properly defined classes and HTML structure. The goal is to make the markup as minimally dependent on the CSS structure as possible, and to make the CSS structure as minimally dependent on the markup structure. In other words, the goal is to try to be flexible so changes are easier later.

But let’s face it: writing CSS, SCSS, or LESS is not easy. It’s not for JavaScript developers, C# devs, or any object-oriented programmer. CSS is for those people who can say “I’m a front-end developer” and understand CSS at its core. It is possible to be a frontend and backend developer, but only if you truly understand all the languages you are using. Most JS devs are way out of their league with CSS. I started with HTML, then CSS, then JS, and this has allowed me a great understanding of the CSS/HTML marriage. Please, JS devs, go learn CSS, and then a preprocessor of your choice. Stop ruining the CSS workflow with your laziness and all this inline bogus CSS. I have to work hard with Angular 2 just to get external CSS working, but luckily, it works, thanks to webpack and loaders.
CSS4 is due any month now, and will be MUCH better than every previous version.

I think there is a small mistake in the article, where it says that inline styles override everything. Actually, they don’t override !important.

I agree with one concept – do not write inline styles. The rest of the article, however, has made my girlfriend’s blood boil. Scss generates CSS. It isn’t better. Hand-written CSS is faster to write than waiting two seconds for the whole compile every change you make. It’s popular, but so is Angular. Using it doesn’t automatically make your code architecture more maintainable, scalable, or performant, or speed up development. From experience it actually leads to slower development. Also, namespacing classes is stupid, because if you change the structure of your HTML, then your class .parent-child-subchild becomes wrong.

Thanks for your articles, James. Especially “Learn Raw React — no JSX, no Flux, no ES6, no Webpack…” For me, with JSX, React was very convoluted and hard to wrap my head around. I recently became very interested in web apps and so re-entered the web development world after an eight (or so) year hiatus. Back then I was developing static websites on the LAMP/PHP stack. I used JavaScript mostly for form validation only. I don’t think CSS preprocessors existed then. Shifting to the modern web paradigm with JavaScript has been an arduous journey for me. However, I have found inline styles to be a joy to work with, inasmuch as they afford me more maintainability and control without needing to learn “yet another technology” like SASS or LESS, and so far in my very limited experience with React I don’t see the need to. Then again, I don’t really know any better. Currently I am using CSS only for layout and for a small amount of global styling. Time will tell whether or not this approach bites me in the ass.
You can use existing CSS with any preprocessor like SASS, Less or Stylus using the respective webpack loader and chaining the output to radium-loader for Radium. Check this repo out:

Your main point seems to be that inline style is bad because you can reach the same results with CSS plus a lot of tooling and convention. But why would I put up with all this extra complexity? What BIG benefit is CSS offering over inline that makes this worthwhile?

Because if your component gets integrated on another site (a third-party one), your styles will overwrite theirs. That was the main reason inline styles were a bad idea in the first place. The only exception I understand is positioning, but if you’ve worked with CSS you know that many things in the page styles (a different font, a different layout approach) can break that. So you’ll have to be very careful if you want to deliver your positioning rules inline.

You know, that’s kind of horseshit. This is why there are developers worth their salt who can do simple things like Object.assign({}, style, this.props.style) when they’ve created their components, if they’d like them to be re-usable and fully styleable 😛

I can’t even believe this is a thing. It’s like the Brexit debate. Since when did even the idea of inline styling get considered as an option? I suspect it was someone who hasn’t had to deal with performance, or had to share their code, and has only ever worked on siloed components, or ever had to work much with browser quirks or hacks, or ever had to outsource work to differently skilled developers due to time pressures – i.e. non-JavaScripters who are excellent at CSS. Inline styling is for immature developers, period.

Closed minds and dogmas are for immature developers who are annoying to work with, period. I am wondering how exactly libraries like Material-UI exist, which run directly contrary to what you are saying and have a thriving ecosystem?
Your open mind must make you a treat to work with 😛

There is precisely one meaning of cascading. What you have described as the first one is actually known as inheritance.

But it’s very common for developers to mistakenly use cascading to mean inheritance, so the author is ensuring that readers are using the same definition he intends. As is appropriate to do on any foreseeable point of confusion. Hope you enjoyed that dopamine surge, though.
http://jamesknelson.com/why-you-shouldnt-style-with-javascript/
2007-02-08: Tor 0.1.2.7-alpha is out

From LinuxReviews

"This is the seventh development snapshot for the 0.1.2.x series. It makes rate limiting much more comfortable for servers, along with a huge pile of other bugfixes."

Changes in version 0.1.2.7-alpha - 2007-02-06

- Major bugfixes (rate limiting):
  - Servers decline directory requests much more aggressively when they're low on bandwidth. Otherwise they end up queueing more and more directory responses, which can't be good for latency.
  - But never refuse directory requests from local addresses.
  - Fix a memory leak when sending a 503 response for a networkstatus request.
  - Be willing to read or write on local connections (e.g. controller connections) even when the global rate limiting buckets are empty.
  - If our system clock jumps back in time, don't publish a negative uptime in the descriptor. Also, don't let the global rate limiting buckets go absurdly negative.
  - Flush local controller connection buffers periodically as we're writing to them, so we avoid queueing 4+ megabytes of data before trying to flush.
- Major bugfixes (other):
  - Previously, we would cache up to 16 old networkstatus documents indefinitely, if they came from nontrusted authorities. Now we discard them if they are more than 10 days old.
  - Fix a crash bug in the presence of DNS hijacking (reported by Andrew Del Vecchio).
  - Detect and reject malformed DNS responses containing circular pointer loops.
  - If exits are rare enough that we're not marking exits as guards, ignore exit bandwidth when we're deciding the required bandwidth to become a guard.
  - When we're handling a directory connection tunneled over Tor, don't fill up internal memory buffers with all the data we want to tunnel; instead, only add it if the OR connection that will eventually receive it has some room for it. (This can lead to slowdowns in tunneled dir connections; a better solution will have to wait for 0.2.0.)
- Minor bugfixes (dns):
  - Add some defensive programming to eventdns.c in an attempt to catch possible memory-stomping bugs.
  - Detect and reject DNS replies containing IPv4 or IPv6 records with an incorrect number of bytes. (Previously, we would ignore the extra bytes.)
  - Fix as-yet-unused reverse IPv6 lookup code so it sends nybbles in the correct order, and doesn't crash.
  - Free memory held in recently-completed DNS lookup attempts on exit. This was not a memory leak, but may have been hiding memory leaks.
  - Handle TTL values correctly on reverse DNS lookups.
  - Treat failure to parse resolv.conf as an error.
- Minor bugfixes (other):
  - Fix crash with "tor --list-fingerprint" (reported by seeess).
  - When computing clock skew from directory HTTP headers, consider what time it was when we finished asking for the directory, not what time it is now.
  - Stop using C functions that OpenBSD's linker doesn't like.
  - Don't launch requests for descriptors unless we have networkstatuses from at least half of the authorities. This delays the first download slightly under pathological circumstances, but can prevent us from downloading a bunch of descriptors we don't need.
  - Do not log IPs with TLS failures for incoming TLS connections. (Fixes bug 382.)
  - If the user asks to use invalid exit nodes, be willing to use unstable ones.
  - Stop using the reserved ac_cv namespace in our configure script.
  - Call stat() slightly less often; use fstat() when possible.
  - Refactor the way we handle pending circuits when an OR connection completes or fails, in an attempt to fix a rare crash bug.
  - Only rewrite a conn's address based on X-Forwarded-For: headers if it's a parseable public IP address; and stop adding extra quotes to the resulting address.
- Major features:
  - Weight directory requests by advertised bandwidth. Now we can let servers enable write limiting but still allow most clients to succeed at their directory requests.
    (We still ignore weights when choosing a directory authority; I hope this is a feature.)
- Minor features:
  - Create a new file ReleaseNotes which was the old ChangeLog. The new ChangeLog file now includes the summaries for all development versions too.
  - Check for addresses with invalid characters at the exit as well as at the client, and warn less verbosely when they fail. You can override this by setting ServerDNSAllowNonRFC953Addresses to 1.
  - Adapt a patch from goodell to let the contrib/exitlist script take arguments rather than require direct editing.
  - Inform the server operator when we decide not to advertise a DirPort due to AccountingMax enabled or a low BandwidthRate. It was confusing Zax, so now we're hopefully more helpful.
  - Bring us one step closer to being able to establish an encrypted directory tunnel without knowing a descriptor first. Still not ready yet. As part of the change, now assume we can use a create_fast cell if we don't know anything about a router.
  - Allow exit nodes to use nameservers running on ports other than 53.
  - Servers now cache reverse DNS replies.
  - Add an --ignore-missing-torrc command-line option so that we can get the "use sensible defaults if the configuration file doesn't exist" behavior even when specifying a torrc location on the command line.
- Minor features (controller):
  - Track reasons for OR connection failure; make these reasons available via the controller interface. (Patch from Mike Perry.)
  - Add a SOCKS_BAD_HOSTNAME client status event so controllers can learn when clients are sending malformed hostnames to Tor.
  - Clean up documentation for controller status events.
  - Add a REMAP status to stream events to note that a stream's address has changed because of a cached address or a MapAddress directive.

published 2007-02-06 - last edited 2019-06
https://linuxreviews.org/2007-02-08:_Tor_0.1.2.7-alpha_is_out
Talk:Complete Roguelike Tutorial, using python+libtcod, part 6

From RogueBasin

Guys, I have a little piece of advice for the AI section. When we are setting the properties of our object, we are creating wrong movement rules (which allow monsters to run in 4 directions instead of 8). The player will break the 4-dir chains later, but for now I propose a little piece of code which helps the reader make a good chasing algorithm. As you can see below, I am using a crude algorithm instead of the "round" operation, as that one doesn't allow us to make a proper 8-direction integer. Instead of the move_towards section we should include this in our code:

def move_towards(self, target_x, target_y):
    #vector from this object to the target, and distance
    dx = target_x - self.x
    dy = target_y - self.y
    distance = math.sqrt(dx ** 2 + dy ** 2)

    #this clamps the result of the division to -1, 0, or 1, then takes its integer
    dx = dx / distance
    if dx > 0: dx = 1
    if dx < 0: dx = -1
    dy = dy / distance
    if dy > 0: dy = 1
    if dy < 0: dy = -1
    dx = int(dx)
    dy = int(dy)
    self.move(dx, dy)

--High priest of Ru (talk) 14:48, 21 May 2018 (CEST)
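The clamping above amounts to taking the sign of each component of the vector toward the target, which yields a diagonal (8-direction) step whenever both components are nonzero. A minimal standalone sketch of the same idea, with function names of my own choosing rather than the tutorial's:

```python
def sign(v):
    # Returns -1, 0, or 1 depending on the sign of v.
    return (v > 0) - (v < 0)

def step_towards(x, y, target_x, target_y):
    # One chase step: move one tile along each axis toward the target,
    # allowing a diagonal step when both deltas are nonzero.
    return sign(target_x - x), sign(target_y - y)

if __name__ == "__main__":
    print(step_towards(0, 0, 5, -3))   # (1, -1): a diagonal step
    print(step_towards(4, 4, 4, 9))    # (0, 1): straight along the y axis
```

With plain rounding of the normalized vector, a monster at (0, 0) chasing a player at (5, -3) would often move only along one axis at a time; taking the per-axis sign gives the diagonal step directly.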
http://www.roguebasin.com/index.php?title=Talk:Complete_Roguelike_Tutorial,_using_python%2Blibtcod,_part_6
Linear regression is a common machine learning technique that predicts a real-valued output using a weighted linear combination of one or more input values. For instance, the sale price of a house can often be estimated using a linear combination of features such as area, number of bedrooms, number of floors, date of construction etc. Mathematically, it can be expressed using the following equation:

house_price = w1 * area + w2 * n_bedrooms + w3 * n_floors + ... + w_n * age_in_years + b

The “learning” part of linear regression is to figure out a set of weights w1, w2, w3, ... w_n, b that leads to good predictions. This is done by looking at lots of examples one by one (or in batches) and adjusting the weights slightly each time to make better predictions, using an optimization technique called Gradient Descent.

Let’s create some sample data with one feature x (e.g. floor area) and one dependent variable y (e.g. house price). We’ll assume that y is a linear function of x, with some noise added to account for features we haven’t considered here. Here’s how we generate the data points, or samples:

m, c = 2, 3
noise = np.random.randn(250) / 4
x = np.random.rand(250)
y = x * m + c + noise

And here’s what it looks like visually:

Now we can define and instantiate a linear regression model in PyTorch:

class LinearRegressionModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LinearRegressionModel(1, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

For the full code, including imports etc., see this link. Here’s how the model weights look right now:

As you can see, it’s quite far from the desired result.
Now we can define a utility function to run a training epoch:

def run_epoch(epoch):
    # Convert from numpy arrays to torch tensors
    inputs = Variable(torch.from_numpy(x.reshape(-1, 1).astype('float32')))
    labels = Variable(torch.from_numpy(y.reshape(-1, 1).astype('float32')))

    # Clear the gradients w.r.t. parameters
    optimizer.zero_grad()

    # Forward pass to get the outputs
    outputs = model(inputs)

    # Calculate loss
    loss = criterion(outputs, labels)

    # Get gradients w.r.t. parameters
    loss.backward()

    # Update parameters
    optimizer.step()

    return loss

Next, we can train the model and update the state of an animated graph at the end of each epoch:

%matplotlib notebook
fig, (ax1) = plt.subplots(1, figsize=(12, 6))
ax1.scatter(x, y, s=8)

w1, b1 = get_param_values()
x1 = np.array([0., 1.])
y1 = x1 * w1 + b1
fit, = ax1.plot(x1, y1, 'r', label='Predicted')
ax1.plot(x1, x1 * m + c, 'g', label='Best Fit')
ax1.legend()
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Linear Regression')

def init():
    ax1.set_ylim(0, 6)
    return fit,

def animate(i):
    loss = run_epoch(i)
    [w, b] = model.parameters()
    w1, b1 = w.data[0][0], b.data[0]
    y1 = x1 * w1 + b1
    fit.set_ydata(y1)
    return fit,

epochs = np.arange(1, 250)
ani = FuncAnimation(fig, animate, epochs, init_func=init, interval=100, blit=True, repeat=False)
plt.show()

(Note that with blit=True, animate must return the artists it modified, so it ends with return fit,.)

This will result in the following animated graph:

That’s it! It takes about 200 epochs for the model to come quite close to the best fit line. The complete code for this post can be found in this Jupyter notebook:
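To make the "adjusting the weights slightly each time" step concrete without relying on PyTorch's autograd, here is a plain-Python sketch of batch gradient descent for the same one-feature model. The noise is omitted so the fit converges exactly; the function name and learning-rate choice are mine, not from the article:

```python
def fit_line(xs, ys, lr=0.5, epochs=2000):
    # Batch gradient descent for y = w * x + b, minimizing mean squared error.
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * xi + b - yi) * xi for xi, yi in zip(xs, ys)) / n
        grad_b = sum(2 * (w * xi + b - yi) for xi, yi in zip(xs, ys)) / n
        # Step a small amount against the gradient.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    xs = [i / 10 for i in range(10)]   # ten points in [0, 1)
    ys = [2 * xi + 3 for xi in xs]     # noiseless y = 2x + 3
    w, b = fit_line(xs, ys)
    print(round(w, 2), round(b, 2))    # converges to roughly 2.0 and 3.0
```

This is the same loop `run_epoch` performs, except that `loss.backward()` computes `grad_w` and `grad_b` for us and `optimizer.step()` applies the update.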
https://coinerblog.com/visualizing-linear-regression-with-pytorch-hacker-noon/
23 January 2012 22:26 [Source: ICIS news]

HOUSTON (ICIS)--BASF has declared force majeure on ethylene oxide (EO) produced at its plant in Geismar, Louisiana, market sources said on Monday.

BASF itself did not identify the plant in a letter it sent to customers, and it did not immediately respond to a question regarding the location of the plant.

In the letter, BASF said it declared force majeure because of an extensive mechanical failure of a critical piece of equipment. As a result, BASF declared force majeure on EO and on its EO-containing Pluracol products, including conventional and graft products, the company said. The force majeure will continue through February 2012.

BASF will continue its efforts to seek EO from alternate sources, and it will allocate available supplies of Pluracol in a fair and equitable manner, the company said. However, BASF will not be able to produce Pluracol 4156, and material will not be available.

BASF operates the 220,000 tonne/year EO plant in Geismar.

Additional reporting by Leela Landress
http://www.icis.com/Articles/2012/01/23/9526216/basf-declares-force-majeure-on-eo-at-us-plant-sources.html
There are three questions as possible duplicates (but too specific):

from concurrent.futures import *

def f(v):
    return lambda: v * v

if __name__ == '__main__':
    with ThreadPoolExecutor(1) as e:  # works with ThreadPoolExecutor
        l = list(e.map(f, [1, 2, 3, 4]))
        print([g() for g in l])  # [1, 4, 9, 16]

What do you want the other processes to do with the references? What's the problem with just using a proxy?

Most of the time it's not really desirable to pass the reference of an existing object to another process. Instead, you create the class you want to share between processes:

class MySharedClass:
    # stuff...

Then you make a proxy manager like this:

import multiprocessing.managers as m

class MyManager(m.BaseManager):
    pass  # Pass is really enough. Nothing needs to be done here.

Then you register your class on that manager, like this:

MyManager.register("MySharedClass", MySharedClass)

Then, once the manager is instantiated and started with manager.start(), you can create shared instances of your class with manager.MySharedClass. This should work for all needs. The returned proxy works exactly like the original objects, except for some exceptions described in the documentation.
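Putting the pieces above together, here is one possible end-to-end sketch of the proxy approach. The Counter class and the demo function are illustrative stand-ins for MySharedClass, not from any particular library:

```python
from multiprocessing.managers import BaseManager

class Counter:
    # A stand-in for the class whose instances we want to share.
    def __init__(self):
        self.value = 0

    def add(self, n):
        self.value += n

    def get(self):
        return self.value

class MyManager(BaseManager):
    pass  # Nothing needed here; registration does the work.

MyManager.register("Counter", Counter)

def demo():
    manager = MyManager()
    manager.start()               # spawns the manager's server process
    counter = manager.Counter()   # a proxy to an instance living in that process
    counter.add(40)               # method calls are forwarded to the real object
    counter.add(2)
    result = counter.get()
    manager.shutdown()
    return result

if __name__ == "__main__":
    print(demo())  # 42
```

Any process holding the proxy (for example, a worker that received it as an argument) manipulates the same underlying instance, which is what makes this preferable to trying to pickle the object itself.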
https://codedump.io/share/mYkUuxnNCm9R/1/multiprocessing-share-unserializable-objects-between-processes
Currently registering a DTD in the system is cumbersome and error-prone; you have to follow some rather complex instructions in EntityCatalog and create a layer entry. This is a natural candidate for an annotation to hide the details. Making such an annotation should be simple, though I am not sure where to put it. Ideally the annotation, and its processor, would be collocated with the runtime implementation. EntityCatalog is in openide.util and defines the placement, but cannot actually implement it because this uses layers. FileEntityResolver in openide.loaders currently implements it, but this seems wrong since the impl does not need anything else in Datasystems. openide.filesystems is the "lowest" possible place the impl could go, though sticking this annotation in the org.openide.filesystems package seems odd.

Created attachment 103416 [details]
Proposed patch, minus apichanges

Please review.

Y01 Stuffing this into openide.filesystems seems wrong to me. I'd not be afraid to start a new module org.netbeans.api.dtd or even reuse the existing org.netbeans.api.xml (and move it to platform). Such a module can easily have a compile-time dependency on filesystems and host any XML-related annotations while having a low closure of runtime dependencies. Existing utilities like org.openide.xml could be slowly migrated in there. Potentially the api.xml current content (which probably requires a dependency on openide.nodes) could be moved away and the org.netbeans.api.xml code name used solely for purposes of new, modern APIs for dealing with XML.

(In reply to comment #3)
> Existing utilities like org.openide.xml could be slowly migrated in there.
I don't see any way of migrating org.openide.xml upwards. Even if e.g. XMLUtil were copied into some new module, the old version could not delegate to the new version, so what would be the point?
(Module auto deps could be used to redirect existing clients, but this would still be an incompatible change at the source level, which is a major disturbance for such a widely used module as openide.util. Anyway, XMLUtil is used in a number of places in openide.filesystems.)

If openide.filesystems is unacceptable (even under a different package name), then I would rather put the annotation where the runtime currently resides, in openide.loaders. It would be possible to move this "down" into openide.filesystems or some new module TBD later.

(In reply to comment #4)
> I don't see any way of migrating org.openide.xml upwards. Even if e.g. XMLUtil
> were copied into some new module, the old version could not delegate to the
> new version, so what would be the point?
The point is that people would start to use the new API, migrating away from the original (misplaced) one. Creating the new API methods is the visible part of the change. How we make the old XMLUtil and the new API behave the same is an implementation issue.

> but this would
> still be an incompatible change at the source level,
Option 1: XMLUtil finds its own implementation in lookup (provided by the xml module).
Option 2: We duplicate the code. Share the tests to ensure the code behaves the same.
Option 3: The new API delegates to the old XMLUtil. In future we switch to option 1 or option 2.

> which is a major
> disturbance for such a widely used module as openide.util. Anyway XMLUtil is
> used in a number of places in openide.filesystems.)
That is indeed annoying. Option 3 would however work OK for this case.

OK. I just liked the code name base. Call the new module org.netbeans.api.xmlutil if you want.

> If openide.filesystems is unacceptable (even under a different package name),
> then I would rather put the annotation where the runtime currently resides, in
> openide.filesystems or some new module TBD later.
OK. Different package name (using the new org.netbeans.api style) is imho a TCR.
A new module like the above-mentioned api.xmlutil is a TCA.

> openide.loaders. It would be possible to move this "down" into
No! Not encouraging new dependencies on openide.loaders is a TCR, imho.

(In reply to comment #5)
> Option3: New API delegates to old XMLUtils. In future we switch to option 1 or
> option 2.
This seems most attractive in general, but the devil is in the details.

For the case of XMLUtil, the problem is that there are a number of usages of XMLUtil in openide.filesystems and core.startup. Either we:
3a. Make openide.filesystems and core.startup depend on the new module, which must then be in core/*.jar. This may not even work, since the new module must depend on openide.filesystems to get LayerGeneratingProcessor.
3b. Copy parts of XMLUtil into a private class in openide.filesystems, and copy other parts into a private class in core.startup. (Some aspects, esp. namespace support, can probably be omitted.)

For the case of EntityCatalog, the problem is that this is defined as an SPI, i.e. you could in principle register a subtype into global lookup. Either we:
3c. Make the new EntityCatalog be a subtype of the old one, which prevents eventual abandonment of the old one and looks ugly since it will inherit from a deprecated type.
3d. Search lookup for instances of either kind of EntityCatalog, which will not work for the case of an instance of the new EC when the old EC.getDefault() is called.

Compared to these obstacles, plus the overhead of a new module and the labor involved in getting hundreds of client classes to switch to a new import, the current patch seems much more straightforward.

(In reply to comment #6)
> 3b. Copy parts of XMLUtil into a private class
To clarify: this would be needed when finally deprecating XMLUtil, not when introducing a delegating copy (can use @SuppressWarnings("deprecation") in the meantime).

> For the case of EntityCatalog, the problem is that this is defined as an SPI
or 3e.
Do not support custom instances of the new EC in the global lookup at all; continue to support instances of the old EC until this is deleted. (The support for layer registration can be hardcoded, and the other instance in current global lookup is from openide.loaders and only deals with long-deprecated APIs.)

Since the original patch has been sidetracked by a request to move even the unrelated XMLUtil into a new module, which is a much more disruptive change than just replacing a layer registration with an annotation, I will try a different approach: @EntityCatalog.Registration in openide.util, with processor and runtime impl there as well, using e.g.

---%<---
.../org-openide-loaders.jar!/META-INF/entities
-//NetBeans//Entity Mapping Registration 1.0//EN org/netbeans/modules/openide/loaders/EntityCatalog.dtd
-//NetBeans IDE//DTD xmlinfo//EN org/netbeans/modules/openide/loaders/xmlinfo.dtd
---%<---

and a new public method Set<String> publicIDs() in EntityCatalog. Should be straightforward and permit the annotation to live in the most natural place. The current xml/entities/ registration can be deprecated and the runtime impl can continue to live in openide.loaders.

Don't give up! I did not want to block the original attempt, just make it better. I understand the problems deprecating XMLUtil, so I am going to give up on that. Still I believe a new module is better than openide.filesystems or openide.loaders. I prefer use of LayerGeneratingProcessor over other configuration files. So just create xml.catalog (I know it exists) or an xml.catalog.api module and put your @EntityCatalogRegistration and its processor there. Make this module depend on openide.filesystems and let everyone who wants to register the catalog use this new module. My primary vision is to move (most of) the XML related stuff into autoload modules and eliminate that from the fixed ones.
After some offline discussion: xml.catalog.api is reasonable but only if a cleaned-up version of what is now in the SPI of xml.catalog gets moved there, so it has some interesting content beyond what EntityCatalog does now. We probably want three annotations (plus plural variants):

1. @Catalog(id="projects", displayName="#...", shortDescription="#...", icon="...") to create a read-only XML catalog (displayed in Tools > DTDs and XML Schemas). For example, project.ant would do this instead of the current ProjectXMLCatalogReader.

2. @Entity(publicID="...", entity="something.dtd") for registering DTDs for internal use (such as for settings), similar to the former proposal. Optional attr catalog="maven" to create an alternate entity catalog useful for user documents.

3. @Schema(uri="...", schema="something.xsd", catalog="projects") for registering schemas by namespace URI to a catalog of your choice. For example, various project type modules would use this to register schemas into project.ant's catalog.

EntityCatalog.getDefault could probably stay where it is (the new module would register the preferred EntityCatalog impl), and XMLUtil need not be touched, but there could be a new API such as:

public abstract class Catalog {
    public static final String ID_INTERNAL; // used for EC.default
    public static @CheckForNull Catalog forID(String id);
    public static Set<Catalog> allCatalogs(); // in Lookup
    public abstract String getID();
    public abstract String getDisplayName();
    public abstract String getShortDescription();
    public abstract Icon getIcon();
    public abstract EntityResolver createEntityResolver(); // cf. XMLUtil.parse
    public abstract Schema createSchema(); // cf. XMLUtil.validate
}

where the impl of createSchema in the standard catalog impl (created using the annotations above) would either eagerly combine all schemas as ProjectXMLCatalogReader does today, or use LSResourceResolver to load a schema on demand if that proves to work.
(MavenCatalog also uses some trick with a "SCHEMA:" prefix that needs to be investigated.)

Some sort of compatibility layer for users of xml.catalog's SPI would be desirable; TBD what we can do.

Created attachment 105213 [details]
Slightly updated patch, in case this is still useful

*** Bug 35880 has been marked as a duplicate of this bug. ***

Too much for 7.2.

Any interest in pushing this forward?
https://netbeans.org/bugzilla/show_bug.cgi?id=192595
NAME
       cpio.h - cpio archive values

SYNOPSIS
       #include <cpio.h>

DESCRIPTION
       Values needed by the c_mode field of the cpio archive format are
       described as follows:

       Name       Description                                Value (Octal)
       C_IRUSR    Read by owner.                             0000400
       C_IWUSR    Write by owner.                            0000200
       C_IXUSR    Execute by owner.                          0000100
       C_IRGRP    Read by group.                             0000040
       C_IWGRP    Write by group.                            0000020
       C_IXGRP    Execute by group.                          0000010
       C_IROTH    Read by others.                            0000004
       C_IWOTH    Write by others.                           0000002
       C_IXOTH    Execute by others.                         0000001
       C_ISUID    Set user ID.                               0004000
       C_ISGID    Set group ID.                              0002000
       C_ISVTX    On directories, restricted deletion flag.  0001000
       C_ISDIR    Directory.                                 0040000
       C_ISFIFO   FIFO.                                      0010000
       C_ISREG    Regular file.                              0100000
       C_ISBLK    Block special.                             0060000
       C_ISCHR    Character special.                         0020000
       C_ISCTG    Reserved.                                  0110000
       C_ISLNK    Symbolic link.                             0120000
       C_ISSOCK   Socket.                                    0140000

       The header shall define the symbolic constant:

       MAGIC      "070707"
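To make the table concrete, here is a small Python sketch that decodes a c_mode value using the octal constants above. The 0o170000 file-type mask is an assumption carried over from the analogous S_IFMT mask in stat; this is an illustration of the bit layout, not a use of the real <cpio.h> header.

```python
# Octal file-type values from the cpio.h table above.
C_ISDIR  = 0o040000  # directory
C_ISFIFO = 0o010000  # FIFO
C_ISREG  = 0o100000  # regular file
C_ISBLK  = 0o060000  # block special
C_ISCHR  = 0o020000  # character special
C_ISLNK  = 0o120000  # symbolic link
C_ISSOCK = 0o140000  # socket

# Permission bits, also from the table.
C_IRUSR = 0o000400   # read by owner
C_IWUSR = 0o000200   # write by owner

# Assumed file-type mask (analogous to stat's S_IFMT).
TYPE_MASK = 0o170000

def file_type(c_mode):
    """Return a name for the file-type bits of a cpio c_mode value."""
    names = {
        C_ISDIR: "directory", C_ISFIFO: "fifo", C_ISREG: "regular",
        C_ISBLK: "block", C_ISCHR: "character", C_ISLNK: "symlink",
        C_ISSOCK: "socket",
    }
    return names.get(c_mode & TYPE_MASK, "unknown")

# A mode of 0o100644 combines C_ISREG with rw-r--r-- permission bits:
mode = 0o100644
print(file_type(mode))          # → regular
print(bool(mode & C_IRUSR))     # → True (owner can read)
```

The same masking approach works for any of the type values in the table; only C_ISCTG (reserved) is left out of the mapping here.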
http://manpages.ubuntu.com/manpages/precise/en/man7/cpio.h.7posix.html
Doing calculations and showing the answer in Swift is really easy. This example will do just that and print the answer to the terminal. But you could change the code to show the problem and the answer in a UILabel. Here's the code:

import UIKit
import Foundation

let firstNumber = 56
let secondNumber = 23
let answer = (firstNumber + secondNumber)

print(firstNumber, "+", secondNumber, "=", answer)

As always, import UIKit and then the Foundation framework so the code will work correctly.

1. Assign two variables two different numbers to give the computer something to calculate.
2. Assign a variable to show the answer to the calculation.
3. Show the problem and the answer in the terminal – 56 + 23 = 79.
4. That's it.

Note: The ( ) around firstNumber and secondNumber means that the computer will always calculate the variables in between the ( ) first, so you can perform any other calculations after that. Like so:

let answer = (firstNumber + secondNumber) / 30
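The note about parentheses applies in most languages with the usual operator precedence, not just Swift. A quick Python check of the same idea (the numbers are carried over from the example above; the variable names are just renamed to Python style):

```python
first_number = 56
second_number = 23

# Parentheses force the addition to happen before the division.
with_parens = (first_number + second_number) / 30      # computes 79 / 30
without_parens = first_number + second_number / 30     # computes 56 + (23 / 30)

print(with_parens)
print(without_parens)
```

Without the parentheses, division binds tighter than addition, so the two expressions give different results.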
https://www.compuscoop.com/?p=23390
Need help installing packages with pip? See the pip install tutorial.

Columns and rows, that's all there is to it! From here, we can utilize Pandas to perform operations on our data sets at lightning speeds. Pandas is also compatible with many of the other data analysis libraries, like Scikit-Learn for machine learning, Matplotlib for graphing, NumPy (since it uses NumPy), and more. It's incredibly powerful and valuable to know.

If you're someone who finds themselves using Excel, or general spreadsheets, for various computational tasks, where they might take a minute, or an hour, to run, Pandas is going to change your life. I've even seen versions of machine learning like K-Means clustering being done on Excel. That's really cool, but my Python is going to do that for you way faster, which will also allow you to be a bit more stringent on parameters, have larger datasets, and just plain get more done.

Another bit of good news? You can easily load in, and output out in, the xls or xlsx format quickly, so even if your boss wants to view things the old way, they can. Pandas is also compatible with text files, csv, hdf files, xml, html, and more with its incredibly powerful IO.

If you're just now joining us with Python, you should be able to follow along without already having mastered Python, and this could even be your intro to Python in general. Most importantly, if you have questions, ask them! If you seek out answers for each of the areas of confusion, and do this for everything, eventually you will have a full picture. Most of your questions will be Google-able as well. Don't be afraid to Google your questions; it won't laugh at you, I promise. I still Google a lot of my goals to see if someone has some example code doing what I want to do, so don't feel like a noob just because you do it.
If I have not sold you yet on Pandas, the elevator pitch is: lightning-fast data analysis on spreadsheet-like data, with an extremely robust input/output mechanism for handling multiple data types and even converting between data types.

Assuming you've got Python installed, go to your terminal or cmd.exe and type:

pip install pandas

Did you get a "pip is not a recognized command" or something similar? No problem, this means pip is not on your PATH. Pip is a program, but your machine doesn't just simply know where it is unless it is on your PATH. You can look up how to add something to your PATH if you like, but you can always just explicitly give the path to the program you want to execute. On Windows, for example, Python's pip is located in C:/Python34/Scripts/pip. Python34 means Python 3.4. If you have Python 3.6, then you would use Python36, and so on. Thus, if regular pip install pandas didn't work, then you can do:

C:/Python34/Scripts/pip install pandas

On that note, another major point of contention for people is the editor they choose. The editor really does not matter in the grand scheme of things. You should try multiple editors, and go with the one that suits you best: whatever you feel comfortable with, and are productive with, is what matters most in the end. Some employers are also going to force you to use editor X, Y, or Z in the end as well, so you probably shouldn't become dependent on editor features. With that, I prefer the simple IDLE, so that's what I will code in. Again though, you can code in Wing, emacs, Nano, Vim, PyCharm, IPython, whatever you want. To open IDLE, just go to Start, search for IDLE, and choose that. From there, File > New, and boom, you have a text editor with highlighting and a few other little things. We'll cover some of these minor things as we go.

Now, with whatever editor you are using, open it up, and let's write some quick code to check out a dataframe.
Generally, a DataFrame is closest to the Dictionary Python data structure. If you are not familiar with Dictionaries, there's a tutorial for that. I'll annotate things like that in the video, as well as having links to them in the description and in the text-based versions of the tutorials on PythonProgramming.net.

First, let's make some simple imports:

import pandas as pd
import datetime
import pandas_datareader.data as web

Here, we import pandas as pd. This is just a common standard used when importing the Pandas module. Next, we import datetime, which we'll use in a moment to tell Pandas some dates that we want to pull data between. Finally, we import pandas_datareader.data as web, because we're going to use this to pull data from the internet.

Next up:

start = datetime.datetime(2010, 1, 1)
end = datetime.datetime.now()

Here, we create start and end variables that are datetime objects, pulling data from Jan 1st 2010 to today. Now, we can create a dataframe like so:

df = web.DataReader("XOM", "morningstar", start, end)

This pulls data for Exxon from the Morningstar API (which we've had to change since the video, since both Yahoo and Google have stopped their APIs), storing the data to our df variable. Naming your dataframe df is not required, but again, is a pretty popular standard for working with Pandas. It just helps people immediately identify the working dataframe without needing to trace the code back.

So this gives us a dataframe; how do we see it? Well, we can just print it, like:

print(df)

So that's a lot of space. The middle of the dataset is ignored, but that's still a lot of output. Instead, most people will just do:

print(df.head())

Output (column headers shown; the rows are omitted here):

               Close  High  Low  Open  Volume
Symbol Date
XOM    ...

Morningstar's API returns slightly more complex formatted results.
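To make the "a DataFrame is closest to a Dictionary" comparison concrete, here is a tiny stdlib-only sketch of the idea: column names as keys, equal-length lists as columns, with a simplified head function standing in for df.head(). No pandas involved; the column names and numbers are made up for illustration.

```python
# A "frame" as a dict of equal-length column lists.
frame = {
    "Open":  [68.7, 69.0, 68.5],
    "High":  [69.3, 69.5, 68.9],
    "Close": [69.1, 68.6, 68.8],
}

def head(frame, n=5):
    """Return the first n 'rows' of each column, like df.head()."""
    return {col: values[:n] for col, values in frame.items()}

print(head(frame, 2))
```

This is only the mental model; a real DataFrame adds an index, alignment, vectorized operations, and much more on top of it.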
We can clean this up to be just rows and columns, like a spreadsheet might be, with:

df.reset_index(inplace=True)
df.set_index("Date", inplace=True)
df = df.drop("Symbol", axis=1)
print(df.head())

      Close  High  Low  Open  Volume
Date

This prints the first 5 rows of the dataframe, and is useful for debugging and just generally seeing what your dataframe looks like. As you perform analysis and such, this will be useful to see if what you intended actually happened or not. We'll dive more into this later on, however.

We could stop here with the intro, but one more thing: data visualization. Like I said earlier, Pandas works great with other modules, Matplotlib being one of them. Let's see!

Open your terminal or cmd.exe, and do pip install matplotlib. You should already have got it, I am pretty sure, with your pandas installation, but we want to make sure. Now, at the top of your script with the other imports, add:

import matplotlib.pyplot as plt
from matplotlib import style
style.use('fivethirtyeight')

Pyplot is the basic matplotlib graphing module. Style helps us quickly make our graphs look good, and style.use lets us choose a style. Interested in learning more about Matplotlib? Check out the in-depth Matplotlib tutorial series!

Next, below our print(df.head()), we can do something like:

df['High'].plot()
plt.legend()
plt.show()

Full code for that:

import datetime
import pandas_datareader.data as web
import matplotlib.pyplot as plt
from matplotlib import style
style.use('fivethirtyeight')

start = datetime.datetime(2010, 1, 1)
end = datetime.datetime.now()

df = web.DataReader("XOM", "morningstar", start, end)
df.reset_index(inplace=True)
df.set_index("Date", inplace=True)
df = df.drop("Symbol", axis=1)
print(df.head())

df['High'].plot()
plt.legend()
plt.show()

Pretty cool! That's a quick introduction to Pandas, but nowhere near what is available. In this series, we're going to be covering more of the basics of pandas, then move on to navigating and working with dataframes.
From there, we'll touch a bit more on visualization, input and output with many data formats, basic and intermediate data analysis and operations, merging and combining dataframes, resampling, and much more with a lot of realistic examples. If you're lost, confused, or need some clarity, don't hesitate to ask questions on the respective videos.
https://pythonprogramming.net/data-analysis-python-pandas-tutorial-introduction/
This forum is now read-only. Please use our new forums at discuss.codecademy.com.

Conditionals & Control Flow Forum

The Big If

Hi everyone! First, my apologies for the confusion surrounding the exercise The Big If. The below should help clear things up:

- Python doesn't like mixed spaces and tabs when you indent a line. For that reason, it's best to use four spaces (not the tab key!) when indenting. This forum answer should help you if you think indentation is the problem.
- The exercise is looking for a properly indented if/elif/else with at least one comparison (<=, >=, <, >, !=, or ==) and one boolean operator (and, or, or not). When indenting, make sure to indent the same amount on the line after each colon!
- Whatever your if/elif/else evaluates to should be True.

Here's an example of an if/elif/else that does all of the above, only I've left out the boolean operator requirement (since otherwise you could just cut/paste this answer!):

if 1 < 2:        # This is true...
    return True  # so True will be returned!
elif 1 > 2:
    return False
else:
    return False

For the boolean part, you can just use something like if True and True (which is True) or if not True (which is False) to meet all the requirements of the lesson. I hope this helps, and apologies again for the confusion! A.

17 Comments

no it is not

it is work tnx

Another problem with good coding going bad

umm well i have pretty much the exact same thing but it is throwing mine out for no bool operators when i do i am confused at why and i was wondering if someone might have an answer

I was getting so angry so i copy and pasted this and it still did not work. Turns out I needed to refresh my browser. So I used this and then went back and did my own to make sure I knew what I was doing....so if you get stuck and you swear your code is right then just refresh the page and try it again. (I'm using Firefox)

Thank you so much! you so save the day!

this is really helpful !thx!!!
I COPY THIS AND GET SOOO MUCH GOOD NICE THINGS omg ive been struggling for ages thank you so much love you long time youve completed my life Thank you sir :0 danny no one cares i am having problems with indentation i didnt put in. eg it will tell me the indent on the elif statemnet is wrong even though it was there already. indent was my problem 2 ;solved!guys never forget to code in the right line dosent work lol refreshing worked! Thank you I was totally lost 1 Comment Same Here. Poor examples and explination! @Davide: Rather than cutting/pasting, try typing the code out with exactly the same formatting as shown. Cutting/pasting may change the indentation of the lines, which will cause Python to raise an error. 20 Comments Its about time we get feedback from the people how wrote these lousy codes. Don't stress. Count yourself lucky to be using this free service maintained mostly by kind people volunteering their time. It says they are hiring, so they can go blow me for all I care. Hey I tried that, still not working! Could you tell me how to fix it? They may be hiring, but this is still a free service... OMFG that was annoying. Thank God I found this post. Four spaces...not a tab. I think that should be somewhere in the tutorial...no? Hey everybody! I just spent three hours trying everything to make this work! For me it worked like this: if 1 < 2 #four spaces in front of "if"!!! return True #six spaces in front of "return"!!! --> It was just about the right number of spaces in front of the "if" and "return"! Hi Eric, why Python has such a problem between spaces and tab? Interesting... I could swear that in the tutorial it said tab, not four spaces. Now it's the other way around? What? I dont-? WTH? I think my spacing is fine here. I've got all the requirements. 
My error message just reads "Oops try again/true" def theflyingcircus(): answer == "Yes" if answer == "Yes": return True elif answer == "Yes" or "No": return False else: return True +1 this new editor doesn't help. prefered the old one HEY GUYS ITS captainsparkelz No. Your not captainsparkelz Thanks Eric. The bit of info that got mine to work is " When indenting, make sure to indent the same amount on the line after each colon! " don't mean to intrudes, but to be more specific, one of each should be selected ( == or !=, < or >, => or <=.) throughout if/elif/else statements in order to satisfy this exercise. I think incorporating the 'print' function would be helpful req. This assignment doesn't accept any valid solution for me. Quite frustrating. I am unable to continue... Eric: So glad you explained this. I was really scratching my head when my script worked in "real" python and not in the exercise. You explained why... (: haha,I misunderstanding that the four spaces are in front of the "if" at first, so~ :( But now, it's ok! I'm so happy! Thank you! FWIW, my problem was with return keyword. I only saw it in the early examples and it was never explained the way that the My first (failed) code here was: def the_flying_circus(): if 226 == 226: print "226 is teh ossum." elif True and False: print "is not true" else: print "and nope" After eventually looking at the correct answers here and just trial and erroring, I finally added the return commands and made it work: def the_flying_circus(): if 226 == 226: return True print "226 is teh ossum." elif True and False: return False print "is not true" else: return False print "and nope" What's odd about this lesson is that until now we've had Python do the work for us with regard to math and logic; suddenly here we're 'returning' the answer to...I don't know who the answer returns to. Good luck repairing this exercise! Love the course anyway. 5 Comments Same issue here. The "return" must be explained earlier. Ditto! 
I actually tried using return even though it bothered me it hadn't been explained - was still getting an error but suppose it's the indentation thing explains it. Thanks. GOD BLESS YOU! I've tried everyone elses code and it wont work, but yours did! Yes!! Thank you!! Thank you very much!!!It is really helpful!!! a = "lets do it" def theflyingcircus(): if a == "lets do it": return True elif True and False: return False else: return False 11 Comments This works with copy and paste! Thanks! Thanks ! It really works :D thanks is work wow! finally! save me thanks you it works thanks omg thanks a ton! Thank you Thanks.... Thank you!!! Omg! Thank You This was definately a good fix the real answer: def theflyingcircus(): # Start coding here! answer = "chloe" if answer == "frog" or answer=="I am an idiot": return False elif answer == "chloe" and 1==1: return True else: return False 8 Comments thank you. you can have my bucket def theflyingcircus(): if 1==1 or 2==1+1: return True elif 5==3+2: return True else: return True def theflyingcircus(): answer = "Chloe" if answer == "Chloe" or answer=="I am an idiot": return True elif answer == "frog" and 1==1: return False else: return True I was wondering why this code was wrong. The console says that there is invalid syntax in line 3 and the "supportive" box is telling me to check my colons. I checked it and it still gives me the same response. Help. monkey = "The monkey fell" def theflyingcircus(): if monkey == "The monkey landed" or "The monkey stood up" not "The monkey made it into the pool": return False elif: monkey == "The monkey flew" and "The monkey flew across": return False else: monkey == "The monkey fell": return True I tried all of the above code to answer this problem and this is the first solution that returned a correct answer IndentationError: unindent does not match any outer indentation level wat does this mean???????? tks thanks man..!!stucked there but your help .. 
Here we go, i can do it, in that way: def theflyingcircus(): # Written By Renan Zapelini a = 100 b = 100 c = 200 if a == b: return True elif a != b and a > c: return False else: return False 4 Comments i loved your code ... Thx. :) Interesting! You the good fellow:) awsome simple and cool!!!!! Here is the answer: def the_flying_circus(): if 3>2: return True elif 1>2 and 2>1: return False else: return True 1 Comment Thanks. Mine was too complicated. the real answer!!! number = 2 def theflyingcircus(): if number < 3 and number > 1: return True elif number == 2: return True else: print " Number is egal to 2" return True 1 Comment no it doesnt!!!!!!!11 Eric, If we wipe the code out and type it again, can we use tabs only? Is the reason it's best to use 4 spaces idiosyncratic to this editor or just the lesson? This is an argument that apparently has no end, so low or high scope answers are equally good. Thanks for the lessons! Hello Eric, Thanks a lot for the exercises. Very clever of you with the Big if solution. After some errors, I finally got it right. So kids, not just copy and paste!!! 1 Comment do you know who your talking to? -_- Thank you,I’ve solved this problem too. But then what should I do when I want to change another line to write new codes such as return or elif. By the way,I'm Chinese,it's really difficult for me to figure out what this course is talking about.And I think it's unnecessary to learn Python,maybe someone can translate it to Chinese. 1 Comment they have created a learning structure for this "general purpose" programming language. it will make more sense as you complete more and more units. Hopefully this will help for some of you having a problem: My Code is below. I was getting the same line error's seen in this string, and I found out that it was a spacing error. Move your code 4 spaces on your 1st if line, as you will see in my code. Then move your return line 4 spaces on the second line. 
The code below is not keeping my spacing used in the exercise, so please use the 4 spaces for each and this should work for you! I hope this helps you!! def theflyingcircus(): # Start coding here! if 8 < 9: print "I get printed!" return True elif 8 > 9 or 7 > 8: print "I don't get printed." else: print "I also don't get printed!" 1 Comment @Falcon212 Please use markdown to make your code be formatted as code. this will make it easier for others. Thanks. okay sorry for that just use this code def theflyingcircus(): if 1 < 666: print "Thanks PureAC" return True elif 893 > 932 or 32 > 32: print "asiaskdnasl." else: print "asd" # Start coding here! and please tab it correctly if your still having problems feel free to ask me at Tyrepickett@gmail.com this works def theflyingcircus(): if 1==1 and 5==2*2+1: return True elif 5==5: return True else: return True 1 Comment you need to add spaces before certain lines def the_flying_circus(pythons): if pythons > 0 < 100: return True elif pythons > 100: return not True else: return False After some trouble with an unexplained syntax error on line 2, (I assumed it was an identation problem), I got this code to pass on the second attempt so use the spacebar instead of relying on enter/tab, since that's what did the trick for me. Make sure that theflyingcircus() returns True a = "lets do it" def theflyingcircus(): if a == "lets do it": # Start coding here! return True # Don't forget to indent # the code inside this block! elif True and False: return False # Keep going here. else: return False# You'll want to add the else statement, too! -- Don't forget spaces In case this helps someone else: I found that the thing I was missing was to add "return True" or "return False" after my Boolean statements, instead of just "print True". This syntax was used but not explained in the other lessons, which is why I didn't use it on my first attempts..! 
def the_flying_circus():
    if 3>2:
        return True

I've tried to copy and paste your code

if 1 < 2: # This is true...
    return True # so True will be returned!
elif 1 > 2:
    return False
else:
    return False

and I always get this error:

Traceback (most recent call last):
File "runner.py", line 105, in compilecode
File "python", line 4
elif 1 > 2:
^
SyntaxError: invalid syntax

How can I fix it? Thanks in advance. davide

9 Comments

- You need the function, 2. you need to use at least one comparison & boolean

mine is saying to check the indentation

The whole point is you DONT COPY AND PASTE IDIOTS. READ THE WHOLE THING BEFORE YOU COMMENT STUPID THINGS. PROGRAMMERS ARE NOT STUPID. THEY WILL CLEAVE YOU FROM THE HERD.

def theflyingcircus():
    if 3>2:
        return True
    elif 1>2 and 2>1:
        return False
    else:
        return True

put your cursor at the beginning of a line that starts 'elif' or 'else' (it should be indented). hit the Backspace key. if the cursor goes all the way to the beginning of the line, it was a Tab character. replace it with 4 (four) spaces (i.e., hit Space 4 times).

I agree with you jonny, but try not to be so explosive. Geez.

I'm not sure how to copy and paste an idiot... You cull from a herd. You cleave a molecular bond, or a ham.

@Davide Linosa Next time please post your questions in the Q&A section. this can be found in the editor at the footer of the page (with the new editor). or if you are using the old editor, it will be at the top.

Hi, Here is the code I'm trying but I can't seem to get it right:

def theflyingcircus():
    jeff = "animator"
    if len(jeff) == 2 + 3:
        return True
    elif len(jeff) + 2 == 100**0.5 and 2 == 5 - 3:
        return True
    else:
        return False

Thanks in advance!

6 Comments

dude, it doesn't work....

Really? 3+2 is 5.. there's 4 letters in Jeff.
Your elif reads as 4+2 = 6 == 50 and 2 ==2 5 == 50 and 2 == 2 and needs both to be true 6 == 50 #False So your code reads as: 4 == 5 #False 6 == 50 and 2 == 2 # False #False Easy fix to this: if len(jeff) == 2+2 jeff is the variable name but the actual content of it is the string "animator" which contains 8 letters. It works out. whats wrong My answer: def the_flying_circus(): a = 100 b = 200 c = 300 of a or b == c: return True elif a or b <= c: return False else: return False Nothing much, but thought I'd give my solution to the problem as well as others have =) 3 Comments that doenst work !!!!!!!!!!!!!!!!!!!! It should work, try using proper indentation. It doesn't work because of the "of" typo in the conditional. 3 Comments Spam is not appreciated here. This is for asking questions and learning. ITS NOT SPAM U NOOB Spam, as in using the stupid word "Noob" and improper spelling and capitalization is honestly not welcomed. I originally used comparisons but got nothing but errors. This is the code I used and it worked. Is this wrong? answer = "'Tis but a scratch!" def blackknight1(): if answer == "'Tis but a scratch!": return True else: "Tis not a scratch!" return False def blackknight2(): if answer == "Go away, or I shall taunt you a second time!": return True else: "Goes away" return False You're input is greatly appreciated. Thanks in advance!! 1 Comment @Lodog Next time please post your questions in the Q&A section. this can be found in the editor at the footer of the page (with the new editor). or if you are using the old editor, it will be at the top. Hi, What is the best way of "saying" this...? if 3>2: return True elif 1>2 and 2>1: return False else: return True WHat is the intuition behind returning "True" on the else part? I get the If and Elif part but not the 3rd part. Thx. 2 Comments @cna892 Next time please post your questions in the Q&A section. this can be found in the editor at the footer of the page (with the new editor). 
or if you are using the old editor, it will be at the top. nothing is working for me is it necessary that we need to use print over there?
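Pulling the moderator's requirements together, here is one version that satisfies everything the exercise asks for: four-space indentation, at least one comparison, at least one boolean operator, and a result of True. (The function name appears without underscores in several posts above, apparently because the forum's formatting stripped them; the exercise's name is the_flying_circus, and this is just one of many acceptable answers.)

```python
def the_flying_circus():
    if 2 > 1 and 3 <= 4:   # comparison + boolean operator, and this is True
        return True
    elif not 1 < 2:        # never reached, since 1 < 2 is True
        return False
    else:
        return False

print(the_flying_circus())  # → True
```

Note that the branch conditions, not the return statements, are what must use the comparison and boolean operators; the whole call just has to end up returning True.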
https://www.codecademy.com/forum_questions/510160926dc6acb5ee005f6a
Type error in Service Routine - this.wiederkehr

I randomly get the following error:

Unhandled exception in callback handler
TypeError: 'bytearray' object is not callable

or

Unhandled exception in callback handler
TypeError: 'slice' object is not callable

This happens in a callback which is triggered by a pin interrupt, defined like so:

int_pin.callback(trigger=Pin.IRQ_RISING, handler=self.handler)

The callback function is a very simple method of an object:

def handler(self, arg):
    self.todo = True

Actually, there is no such type as slice or bytearray involved in the callback, which makes me think it's a bug... Sometimes the callback works fine for several thousands of times, sometimes for just a few hundred. After that exception the callback seems to never be called again, even though it should have been called again.

I have a main loop which looks as follows and shall handle the todos:

while True:
    irq = machine.disable_irq()
    if this.todo:
        this.todo = False
        do_this()
    if that.todo:
        that.todo = False
        do_that()
    machine.enable_irq(irq)
    time.sleep_us(1)

Any ideas?

Oh yea, forgot to mention: (sysname='LoPy', nodename='LoPy', release='1.6.12.b1', version='v1.8.6-593-g8e4ed0fa on 2017-04-12', machine='LoPy with ESP32', lorawan='1.0.0')
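The flag-polling pattern in the post can be exercised without MicroPython hardware. The sketch below replaces the pin IRQ with a plain function call just to show the handler/main-loop handshake; Handler, do_this, and the done list are stand-ins invented for this illustration, and the machine.disable_irq/enable_irq masking is omitted because it only exists on the board.

```python
class Handler:
    """Stand-in for the poster's object with a 'todo' flag."""
    def __init__(self):
        self.todo = False

    def handler(self, arg=None):
        # Keep ISR work minimal: just set a flag for the main loop.
        self.todo = True

done = []

def do_this():
    # Placeholder for the real work deferred out of the interrupt context.
    done.append("this")

this = Handler()
this.handler(None)   # simulate the pin interrupt firing once

# One pass of the main loop (on hardware this would sit inside
# a while True with the IRQs masked around the flag check):
if this.todo:
    this.todo = False
    do_this()
```

The point of the pattern is that the interrupt handler only flips a flag, and the slow work happens later in the main loop where allocation is safe.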
https://forum.pycom.io/topic/1071/type-error-in-service-routine
On Monday, 11 April 2005, 15:59, Christoph Bauer wrote:

> Ok, my second haskell program ;-):
>
> module Init where
>
> import Maybe
>
> left :: a -> Maybe [a] -> Maybe [a]
> left x None = (Just [])
         ^^^^^^^^ Nothing, as below :-)
> left x (Just l) = (Just (x:l))
>
> init :: [a] -> [a]
> init xs = fromJust . foldr left Nothing xs
>
> Sure, there is a better solution...

I don't think so. As far as I see, it's impossible to do it with just

init xs = foldr fun val xs,

(unless we use some dirty trick) because we must have

fun x (init ys) = x:init ys

for any nonempty list ys, and

init [] = val

forces val to be 'error "init of empty List"' or something of the sort, and this has to be evaluated when we reach the end of the list, for we would need

fun _ (error "blah") = []

for nonempty lists.

Dirty trick: unsafePerformIO

import System.IO.Unsafe

fun :: a -> [a] -> [a]
fun x xs = unsafePerformIO ((xs `seq` (return (x:xs))) `catch` (\ _ -> return []))

init3 :: [a] -> [a]
init3 = foldr fun (unsafePerformIO (ioError (userError "init of []")))

*Init> init3 []
*** Exception: user error (init of [])
*Init> init3 [1]
[]
*Init> init3 [1 .. 10]
[1,2,3,4,5,6,7,8,9]

Though this works, it is utterly horrible and despicable. DON'T DO THAT!!!!!

> Best Regards,
> Christoph Bauer

Cheers,
Daniel
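The Maybe-sentinel version of init from the quoted post can be mimicked in Python, using None for Nothing: the fold sees None exactly once, at the last element, and drops it. This is only a translation sketch (a strict reduce over the reversed list, so it says nothing about Haskell's laziness), but the step function mirrors left and the final None check mirrors fromJust.

```python
from functools import reduce

def my_init(xs):
    """Drop the last element, folding from the right with a None sentinel."""
    def step(acc, x):
        # acc is None exactly when x is the last element: drop it.
        return [] if acc is None else [x] + acc
    # reversed(xs) + reduce simulates foldr: the last element is seen first.
    result = reduce(step, reversed(xs), None)
    if result is None:
        # Corresponds to fromJust hitting Nothing: init of an empty list.
        raise ValueError("init of empty list")
    return result
```

my_init([1, 2, 3]) gives [1, 2], my_init([1]) gives [], and my_init([]) raises, matching the behavior the thread wants from init.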
http://www.haskell.org/pipermail/haskell-cafe/2005-April/009569.html
CC-MAIN-2014-42
refinedweb
220
76.25
# How to quickly check out interesting warnings given by the PVS-Studio analyzer for C and C++ code?

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/df3/c29/8f2/df3c298f21cb71d0c7c699b7e27269d3.png)

Once in a while, programmers who start getting acquainted with the PVS-Studio code analyzer ask me: «Is there a list of warnings that accurately indicate errors?» There is no such list, because uninteresting (false) warnings in one project turn out to be very important and useful in another one. However, one can definitely start digging into the analyzer from the most exciting warnings. Let's take a closer look at this topic.

The trouble is that, as a rule, a programmer drowns in the huge number of warnings he gets on the first runs. Naturally, he wants to start by reviewing the most interesting warnings in order to understand whether it is worth his time to sort all this out. Good, so here are three simple steps that will let him check out the most exciting warnings.

Step 1
------

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/648/436/d98/648436d98521380495f1c9f8ee384646.png)

Disable all types of warnings except the general ones (GA). A common mistake is to enable all types of warnings. Inexperienced users think that the more they enable, the better. That's not the case. There are diagnostic sets, such as 64-bit checks and MISRA rules, that should only be used when one clearly knows what they are and how to work with them. For example, by enabling MISRA diagnostics for an ordinary application program, you will drown in tens or hundreds of thousands of warnings such as:

* [V2506](https://www.viva64.com/en/w/v2506/). MISRA. A function should have a single point of exit at the end.
* [V2507](https://www.viva64.com/en/w/v2507/). MISRA. The body of a loop\conditional statement should be enclosed in braces.
* [V2523](https://www.viva64.com/en/w/v2523/). MISRA. All integer constants of unsigned type should have 'u' or 'U' suffix.
Most MISRA warnings indicate not errors but code smells. Naturally, a programmer begins to ask questions: how do you find something interesting in the pile of all these warnings? Which numbers should he watch? These are the wrong questions. You just need to disable the MISRA set. MISRA is a standard for writing quality code for embedded devices. The point of the standard is to make the code extremely simple and understandable. Don't try to apply it where it's inappropriate.

Note. Yes, MISRA has rules designed to identify real bugs. Example: [V2538](https://www.viva64.com/en/w/v2538/). The value of an uninitialized variable should not be used. But don't be afraid to disable the MISRA standard: you're not going to lose anything. The real errors will still be found as part of the general diagnostics (GA). For example, an uninitialized variable will be found by the [V614](https://www.viva64.com/en/w/v614/) diagnostic.

Step 2
------

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/f30/9c1/d2f/f309c1d2fed7a2736170dad987cd4460.png)

Any static analyzer issues false positives on the first runs and requires some configuring. Nothing can be done about it, but it's not as scary as it may seem. Even a simple quick setup allows you to remove most false positives and start viewing a quite relevant report. I won't say more about it here, as I have written about it many times, for example in this article: "[Characteristics of PVS-Studio Analyzer by the Example of EFL Core Libraries, 10-15% of False Positives](https://www.viva64.com/en/b/0523/)".

Spend a little time disabling obviously irrelevant warnings and fighting false positives related to macros. Generally speaking, macros are the main source of false positives, since a warning appears in every place where a poorly implemented macro is used. To suppress warnings in macros, you can write comments of a special type next to their declaration.
The comment format is covered in more detail in the [documentation](https://www.viva64.com/en/m/0017/). Yes, the initial setup will take a little time, but it will drastically improve the perception of the report by eliminating the distracting noise. Take some time to do it. If there are any difficulties or questions, we are always ready to help and show you how to configure the analyzer in the best way. Feel free to [write](https://www.viva64.com/en/about-feedback/) and ask us questions.

Step 3
------

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/f87/0ec/c49/f870ecc49989743ae8caea3c745e602e.png)

Start viewing warnings from Level 1. Only after that move on to Levels 2 and 3. Warning levels are nothing more than a measure of a warning's certainty: warnings of Level 1 are more likely to indicate an actual error than warnings of Level 2. You could say that choosing «watch Level 1» is like pressing a «watch the most interesting errors» button. The classification of PVS-Studio warnings by levels is described in more detail in the article "[The way static analyzers fight against false positives, and why they do it](https://www.viva64.com/en/b/0488/)".

So why isn't there a list?
--------------------------

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/df2/95f/70b/df295f70b19562c12916ec578022829f.png)

However, the idea of having a list of the most useful warnings may still seem reasonable. Let me show you with a practical example that the usefulness of a diagnostic is relative and depends on the project. Let's consider the [V550](https://www.viva64.com/en/w/v550/) warning. This warning detects a potential error related to using the == or != operators to compare floating-point numbers. Most of the developers I've talked to think that this diagnostic is useless and disable it, because all the warnings it triggers in their project are false. That's why this diagnostic has a low level of certainty and is assigned to Level 3.
Indeed, in most applications, float/double types are used in very simple algorithms. Often a comparison with a constant is used solely to check whether a certain value is still set to its default, or whether it has changed. In this case, an exact check is quite appropriate. I'll explain it with pseudo-code.

```
float value = 1.0f;
if (IsUserInputNewValue())
  value = GetUserValue();
if (value == 1.0f)
  DefaultBehavior();
else
  Foo(value);
```

Here the comparison *(value == 1.0f)* is correct and safe. Does this mean that the V550 diagnostic is uninteresting? No. It all depends on the project. Let me quote a snippet from the article "[How We Tried Static Analysis on Our X-Ray Endovascular Surgery Training Simulator Project](https://www.viva64.com/en/b/0331/)", written by one of our users.

So, what does our static analyzer pay attention to here?

V550 An odd precise comparison: t != 0. It's probably better to use a comparison with defined precision: fabs(A - B) > Epsilon. objectextractpart.cpp 3401

```
D3DXVECTOR3 N = VectorMultiplication(
  VectorMultiplication(V-VP, VN), VN);
float t = Qsqrt(Scalar(N, N));
if (t!=0)
{
  N/=t;
  V = V - N * DistPointToSurface(V, VP, N);
}
```

Errors of this type occur quite often in this library. I can't say it came as a surprise to me. I had met incorrect handling of floating-point numbers in this project before. However, there were no resources to systematically verify the sources. As a result of the check, it became clear that the developer needed something to broaden his horizons in terms of working with floating-point numbers. He's been sent links to a couple of good articles. We'll see how things turn out.

It is difficult to say for sure whether this error causes real disruptions in the program. The current solution imposes a number of requirements on the original polygonal mesh of arteries that simulates the spread of the X-ray contrast agent. If the requirements are not met, the program may crash, or it visibly works incorrectly.
Some of these requirements are obtained analytically, and some empirically. It is possible that this empirical bunch of requirements keeps growing precisely because of incorrect handling of floating-point numbers. It should be noted that not all the found cases of precise floating-point comparison were errors.

As you can see, what is not interesting in some projects is of interest in others. This makes it impossible to create a list of the «most interesting» warnings.

Note. You can also adjust the level of warnings using the settings. For example, if you think that the V550 diagnostic deserves close attention, you can move it from Level 3 to Level 1. This type of setting is described in the [documentation](https://www.viva64.com/en/m/0040/) (see «How to Set Your Level for Specific Diagnostics»).

Conclusion
----------

![](https://habrastorage.org/r/w1560/getpro/habr/post_images/37d/d9e/ecd/37dd9eecd44e33d66c2993ad552e11f7.png)

Now you know how to start studying analyzer warnings by looking at the most interesting ones first. And don't forget to look into the documentation for a detailed description of each warning. Sometimes a warning that looks nondescript at first glance hides real hell behind it. Examples of such diagnostics: [V597](https://www.viva64.com/en/w/v597/), [V1026](https://www.viva64.com/en/w/v1026/). Thank you for your attention.
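The article's examples are C++, but the trap V550 warns about is language-independent wherever IEEE-754 floats are used. A quick Python demonstration of both the unsafe comparison of a computed value and the safe comparison of an assigned default:

```python
import math

# The pitfall: rounding makes exact comparison of *computed* floats unreliable.
a = 0.1 + 0.2
print(a == 0.3)               # -> False
print(math.isclose(a, 0.3))   # -> True (epsilon-style comparison)

# The safe case from the pseudo-code above: a value that was *assigned*,
# not computed, compares exactly against the same literal.
value = 1.0
print(value == 1.0)           # -> True
```

This is exactly why the diagnostic's usefulness depends on the project: code full of computed comparisons deserves it at Level 1, while default-value checks make it noise.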
https://habr.com/ru/post/457330/
null
null
1,557
57.47
[ ] ASF GitHub Bot commented on PHOENIX-3757: ----------------------------------------- Github? > System mutex table not being created in SYSTEM namespace when namespace mapping is enabled > ------------------------------------------------------------------------------------------ > > Key: PHOENIX-3757 > URL: > Project: Phoenix > Issue Type: Bug > Reporter: Josh Elser > Assignee: Karan Mehta > Priority: Critical > Labels: namespaces > Fix For: 4.13.0 > > Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch, PHOENIX-3757.003.patch > > > Noticed this issue while writing a test for PHOENIX-3756: > The SYSTEM.MUTEX table is always created in the default namespace, even when {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like the logic for the other system tables isn't applied to the mutex table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
http://mail-archives.us.apache.org/mod_mbox/phoenix-dev/201710.mbox/%3CJIRA.13060807.1490994869000.60967.1508962680397@Atlassian.JIRA%3E
CC-MAIN-2021-49
refinedweb
117
57.67
T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project. I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008. The Problem Error Compiling transformation: Metadata file 'dotless.Core' could not be found. On a side note: The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCSS yet, go check it out now!. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, its about the T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build? Oh, Microsoft Did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus its a breaking change. 
So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: - GAC your assemblies and use Namespace Reference or Fully Qualified Type Name - Use a hard-coded Fully Qualified UNC path - Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name. - Use or Define a Windows Environment Variable to build a Fully Qualified UNC path. - Use a Visual Studio Macro to build a Fully Qualified UNC path. Option #1 & 2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding aint that hard! The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style: <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #> Lets go Public! Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. 
Normally this is located at: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies Once you have copied your assembly there, you use the type name or namespace syntax again: <#@ assembly name="dotless.Core" #> Save the Environment! Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo-code, frameworks, and products where you don't have control over the local system. The syntax for including a environment variable in your assembly directive looks like the following, just as you would expect: <#@ assembly name="%mypath%\dotless.Core.dll" #> “mypath” is a Windows environment variable you setup that points to some fully qualified UNC path on your system. In the right situation this can be a great solution such as one where you use a msi installer for deployment, or where you have a pre-existing environment variable you can re-use. OMG Macros! Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying-up your dev environment or GAC with extra deployments. An example looks like this: <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #> In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use its dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, its still not compatible with Visual Studio 2008, so if you have a T4 Template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. 
I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008. Conclusion As you can see, we went from 3 options with Visual Studio 2008, to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new found power. Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010. Happy T4 templating, and “May the fourth be with you!”
http://weblogs.asp.net/lhunt/t4-template-error-assembly-directive-cannot-locate-referenced-assembly-in-visual-studio-2010-project
CC-MAIN-2015-18
refinedweb
1,133
60.95
JScript 8.0, the next generation of the Microsoft JScript language, is designed to be a fast and easy way to access the Microsoft .NET platform using the language of the Web. The primary role of JScript 8.0 is the construction of Web sites with ASP.NET and the customization of applications with Script for the .NET Framework. JScript 8.0 helps to maintain security by adding a restricted security context for the eval method. Several new features in JScript 8.0 take advantage of the CLS — a set of rules that standardizes such things as data types, how objects are exposed, and how objects interoperate. Any CLS-compliant language can use the classes, objects, and components that you create in JScript 8.0. And you, as a JScript developer, can access classes, components, and objects from other CLS-compliant programming languages without considering language-specific differences such as data types. Some of the CLS features that JScript 8.0 programs use are namespaces, attributes, by-reference parameters, and native arrays. Following are some of the new features in JScript .NET and JScript 8.0: The /platform option is used to specify the type of processor targeted by the output file: x86 for 32-bit Intel-compatible processors, Itanium for the Intel 64-bit processors, and x64 for AMD 64-bit processors. The default (anycpu) allows the output file to run on any platform. JScript .NET introduces a const statement that defines an identifier that represents a constant value. For more information, see JScript Variables and Constants. JScript .NET introduces the enum statement that allows you to construct enumerated data types. With an enumeration, you can specify helpful names for your data type values. For more information, see enum Statement.
http://msdn.microsoft.com/en-us/library/e2h4yzx6.aspx
crawl-002
refinedweb
291
58.79
In my previous post I talked about a vimwiki plugin for taking notes. I was still unsatisfied with the results, such as redefining markdown behavior and losing some of the essential shortcuts. Then I stumbled across the article «You (probably) don't need Vimwiki» by Joe Reynolds.

Replacing vimwiki

So to replace vimwiki we need to configure the following features:

- open/create a new file under the cursor.
- open notes from any location (with <Leader>ww)
- a shortcut for toggling checkboxes to make TODO lists
- align tables in markdown
- previewing html

Create new file under cursor.

The article that I mentioned above did say that you can replace vimwiki's Enter shortcut, which creates new markdown files and opens them, with gf (open file under cursor). Unfortunately, it works only when the file actually exists. We want to be able to quickly create and open a new note once we have typed its name. However, there is an alternative shortcut ge in the markdown plugin by plasticboy that allows you to open the file under cursor (if it is a markdown link) even if it doesn't exist... unless it's in a subfolder(s) that doesn't exist. Lucky for us, there is a plugin that creates all subfolders in the path of the file (mkdir -p for vim):

Mkdir

Maggie! Don't even ask. Just bring it. Come on. -- Hot Rod

Installation

Install using your preferred vim plugin management plugin.

Usage

:e this/does/not/exist/file.txt
:w

Smile when you are not presented with an error. Instead, notice that vim has automatically created the non-existent directory for you.

So if you add the following plugins to your .vimrc or init.vim:

Plug 'pbrisbin/vim-mkdir'
Plug 'plasticboy/vim-markdown', { 'for': 'markdown' }

provided that you're using vim-plug as a plugin manager, you should be able to type ge in an md file (on a markdown link) and vim will create the file under cursor, as well as all subfolders, and open this file for editing. You can find more information about the markdown plugin mappings in its README.
Open notes from any location

I store all my notes in the ~/Documents/notes folder (and on macOS all files are automatically synced with iCloud). Our shortcut should essentially open the ~/Documents/notes/index.md file:

" open ~/Documents/notes/index.md
nnoremap <Leader>ww :e ~/Documents/notes/index.md<cr>

Since we've configured the plugin that creates subfolders, it should automatically create the notes folder as well.

Toggling checkboxes to make TODO lists

For this I found a nice plugin:

jkramer / vim-checkbox
Vim plugin for toggling checkboxes.

Vim Checkbox

Description

Simple plugin that toggles text checkboxes in Vim. Works great if you're using a markdown file for notes and todo lists.

Installation

Just copy the script into your plugin folder, e.g. ~/.vim/plugin/. If you're using pathogen, just clone this repository in ~/.vim/bundle.

Usage

<leader>tt to toggle the (first) checkbox on the current line, if any. That means, [ ] will be replaced with [x] and [x] with [ ]. If you want more or different checkbox states, you can override the contents of g:checkbox_states with an array of characters, which the plugin will cycle through. The default is:

let g:checkbox_states = [' ', 'x']

When there's no checkbox on the current line, <leader>tt will insert one at the pattern defined in g:insert_checkbox. The new checkbox's state will be the first element of g:checkbox_states. The default for g:insert_checkbox is '\<', which…

The shortcut that interests us is <Leader>tt, which finds and toggles the checkbox on the cursor line. In visual mode it will toggle all checkboxes on the selected lines.

Align tables in markdown

The previously mentioned markdown plugin already contains this feature in the form of a command: :TableFormat. It will align the table under the cursor.
Previewing html

To preview markdown in a browser I'm using this amazing plugin:

iamcco / markdown-preview.nvim
markdown preview plugin for (neo)vim

✨ Markdown Preview for (Neo)vim ✨

❤️ Introduction

It only works on vim >= 8.1 and neovim. Preview markdown in your modern browser with synchronised scrolling and flexible configuration.

Main features:

- Cross platform (macos/linux/windows)
- Synchronised scrolling
- Fast asynchronous updates
- Katex for typesetting of math
- Plantuml
- Mermaid
- Chart.js
- sequence-diagrams
- flowchart
- dot
- Toc
- Emoji
- Task lists
- Local images
- Flexible configuration

Note: there is no need for the mathjax-support-for-mkdp plugin for typesetting of math.

install & usage

" If you don't have nodejs and yarn
" use pre build, add 'vim-plug' to the filetype list so vim-plug can update this plugin
" see:
Plug 'iamcco/markdown-preview.nvim', { 'do': { -> mkdp#util#install() }, 'for': ['markdown', 'vim-plug']}

" If you have nodejs and yarn
Plug 'iamcco/markdown-preview.nvim', { 'do':…

To start a preview you simply need to type :MarkdownPreview and it will open up a browser and sync all your modifications. To stop the preview you can type :MarkdownPreviewStop.

So in the end your .vimrc file should look something like this:

call plug#begin('~/.vim/plugged')

Plug 'pbrisbin/vim-mkdir'
Plug 'jkramer/vim-checkbox', { 'for': 'markdown' }
Plug 'plasticboy/vim-markdown', { 'for': 'markdown' }

if executable('npm')
  Plug 'iamcco/markdown-preview.nvim', { 'do': 'cd app & npm install' }
endif

" Initialize plugin system
call plug#end()

" open ~/Documents/notes/index.md
nnoremap <Leader>ww :e ~/Documents/notes/index.md<cr>

You can find these configs and more in my vim config repo:

Vim Settings
An article describing key features of this config.
Prerequisites

In order to get all features you might want to install the following packages:

Installation

On unix and windows (with bash, which can be installed with git):

curl -L | bash

macOS

In macOS terminal.app don't forget to check the «Use option as meta key» option:

And the «Esc+» option in iterm2:

Shortcuts

Some of the shortcuts (the Leader key is comma):

- Ctrl + s saves the current file
- Leader + s in both select and normal mode initiates search and replace
- Alt + Up/Down moves the line or selection above or below the current line (see upside-down for more info)
- Alt + Left/Right moves the character or selection to the left or to the right
- Leader + n toggles NERDTree
- Leader + m shows the current file in NERDTree
- when in select mode ', ", ( wraps the selection accordingly
- y…

You can also find me on twitter:
https://practicaldev-herokuapp-com.global.ssl.fastly.net/konstantin/taking-notes-in-vim-revisited-558k
CC-MAIN-2021-10
refinedweb
1,037
54.42
Talking | Yet Another Podcast

Call in comments: 1-347-YAP-CAST

John Papa compared KnockoutJS to XAML data binding, but he mentioned that his comparison wasn't quite accurate. The thing that XAML data binding is missing is dependency tracking. XAML uses change notification, which is completely manual. When FirstName changes, you have to write code to notify XAML that FullName has changed.

I've been doing dependency tracking for many years. I've recently started a project called KnockoutCS to bring KnockoutJS-style dependency tracking to XAML. Here's what I have working:

private void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    dynamic model = KO.Observable(new Model());
    DataContext = KO.ApplyBindings(model, new
    {
        FullName = KO.Computed(() => model.FirstName + " " + model.LastName)
    });
}

Where the model is simply:

public class Model
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Much easier than INotifyPropertyChanged.

Is there a reason that i cant download this podcast via zune from switzerland? cheers
http://jesseliberty.com/2012/02/20/yet-another-podcast-60knockout-js/
CC-MAIN-2017-04
refinedweb
156
50.12
A wildly popular operation you'll find in any (non-trivial) code base is to concatenate lists—but there are multiple methods to accomplish this. Master coders will always choose the right method for the right problem. This tutorial shows you the difference between three methods to concatenate lists:

- Concatenate two lists with the + operator. For example, the expression [1, 2, 3] + [4, 5] results in a new list [1, 2, 3, 4, 5]. More here.
- Concatenate two lists with the += operator. This operation is in place, which means that you don't create a new list; the result of the expression lst += [4, 5] is to add the elements on the right to the existing list object lst. More here.
- Concatenate two lists with the extend() method of Python lists. Like +=, this method modifies an existing list in place. So the result of lst.extend([4, 5]) adds the elements 4 and 5 to the list lst. More here.

To summarize: the difference between the + method and the += and extend() methods is that the former creates a new list and the latter modify an existing list object in place.

You can quickly compare those three methods in the following interactive code shell:

Puzzle: Can you already figure out the outputs of this code snippet? Fear not if you can't! I'll explain each method with a detailed example next.

Method 1: Add (+)

The standard way of adding two lists is to use the + operator like this:

# METHOD 1: ADD +
lst = ['Alice', 'Bob', 'Ann']
lst_new = lst + [42, 21]
print(lst)
print(lst_new)

While the + operator is the most readable one (especially for beginner coders), it's not the best choice in most scenarios. The reason is that it creates a new list each time you call it. This can become very slow, and I've seen many practical code snippets where the list data structure used with the + operator was the bottleneck of the whole algorithm. In the above code snippet, you create two list objects in memory—even though your goal is probably just to update the existing list ['Alice', 'Bob', 'Ann'].
This can be nicely demonstrated in the code visualization tool: just keep clicking "Next" until the second list appears in memory.

Method 2: INPLACE Add (+=)

The += operator is not well understood by the Python community. Many of my students (join us for free) believe the add operation lst += [3, 4] is just short for lst = lst + [3, 4]. This is wrong, and I'll demonstrate it in the following example:

# METHOD 2: INPLACE ADD +=
lst = ['Alice', 'Bob', 'Ann']
lst_old = lst
lst += [42, 21]
print(lst)
print(lst_old)

Again, you can visualize the memory objects with the following interactive tool (click "Next"):

The takeaway is that the += operation performs an INPLACE add. It changes an existing list object rather than creating a new one. This makes it more efficient in the majority of cases. Only if you absolutely need to create a new list should you use the + operator. In all other cases, you should use the += operator or the extend() method. Speaking of which…

Method 3: Extend()

Like the previous method +=, the list.extend(iterable) method adds a number of elements to the end of a list. The method operates in place, so no new list object is created.
The former creates a new list for each concatenation operation—and this slows it down. Result: Thus, both INPLACE methods += and extend() are more than 30% faster than the + method for list concatenation. You can reproduce the result with the following code snippet: import time # Compare runtime of three methods list_sizes = [i * 300000 for i in range(40)] runtimes_1 = [] # Method 1: + Operator runtimes_2 = [] # Method 2: += Operator runtimes_3 = [] # Method 3: extend() for size in list_sizes: to_add = list(range(size)) # Get time stamps time_0 = time.time() lst = [1] lst = lst + to_add time_1 = time.time() lst = [1] lst += to_add time_2 = time.time() lst = [1] lst.extend(to_add) time_3 = time.time() # Calculate runtimes runtimes_1.append((size, time_1 - time_0)) runtimes_2.append((size, time_2 - time_1)) runtimes_3.append((size, time_3 - time_2)) # Plot everything import matplotlib.pyplot as plt import numpy as np runtimes_1 = np.array(runtimes_1) runtimes_2 = np.array(runtimes_2) runtimes_3 = np.array(runtimes_3) print(runtimes_1) print(runtimes_2) print(runtimes_3) plt.plot(runtimes_1[:,0], runtimes_1[:,1], label='Method 1: +') plt.plot(runtimes_2[:,0], runtimes_2[:,1], label='Method 2: +=') plt.plot(runtimes_3[:,0], runtimes_3[:,1], label='Method 3: extend()') plt.xlabel('list size') plt.ylabel('runtime (seconds)') plt.legend() plt.savefig('speed.jpg') plt.show() If you liked this tutorial, join my free email list where I’ll send you the most comprehensive FREE Python email academy right in your INBOX. Join the Finxter Community!
https://blog.finxter.com/python-list-concatenation-add-vs-inplace-add-vs-extend/
Downloading Coherence 3.7
LSV — Sep 24, 2013 10:32 AM

I want to download and start using the latest Coherence jar. I tried the link below, but I am unable to download: when I click on "Accept License Agreement" it's not recognizing the click. Can someone please help?

Oracle Coherence Software Archive

1. Re: Downloading Coherence 3.7
Ricardo Ferreira-Oracle — Sep 30, 2013 6:26 PM (in response to LSV)

Maybe the final link could be broken due to some website restrictions. Access support.oracle.com and download the specific version you want of Coherence.

Cheers, Ricardo Ferreira

2. Re: Downloading Coherence 3.7
LSV — Oct 3, 2013 12:04 PM (in response to Ricardo Ferreira-Oracle)

Thanks for the reply. But I don't see a way to download there. Anyway, I got the jar. Thanks again.

We were running our Coherence grid using 3.6 and it was working fine. We just updated the server to use 3.7.1 and our server is not starting with the error below. Any suggestions?

2013-10-03 07:39:38,460 [Logger@1247017815 3.7.1.0] INFO Coherence - 2013-10-03 07:39:38.459/0.734 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/test/stand/coherence/target/classes/test/coherence/grid/config/coherence-server-one.xml"; this document does not refer to any schema definition and has not been validated.
2013-10-03 07:39:38,619 [Logger@1247017815 3.7.1.0] INFO Coherence - 2013-10-03 07:39:38.618/0.893 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "jar:file:/test/stand/coherence/lib/coherence.jar!/coherence-cache-config.xml"
2013-10-03 07:39:38,664 [Logger@1247017815 3.7.1.0] INFO Coherence - 2013-10-03 07:39:38.664/0.939 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): WARNING: Failed to load Coherence cache-config.dtd. Provided configuration XML element names will not be validated.
Class: com.oracle.coherence.environment.extensible.namespaces.CoherenceNamespaceContentHandler

3. Re: Downloading Coherence 3.7
Jonathan.Knight — Oct 3, 2013 12:47 PM (in response to LSV)

Hi, I'm not sure that the message you have posted is the cause of your cluster failing to start. All the message says is that the XML configuration will not be validated against a schema or DTD. I would expect there are more error messages or stack traces in your logs. I see that you are using the Coherence Incubator too - have you made sure that the version of the Incubator you are using is compatible with 3.7.1?

JK

4. Re: Downloading Coherence 3.7
Leo_TA — Oct 22, 2013 10:09 AM (in response to LSV)

Hi, the best place to download Oracle software is Oracle eDelivery:

1. edelivery.oracle.com/
2. Log in with your Oracle account
3. Choose "Select a Product Pack: Oracle Fusion Middleware", click GO
4. Choose "Oracle WebLogic Server 12c Media Pack"
5. Choose "Oracle Coherence Version 3.7.1"
6. Click the "download" button

Leo_TA

5. Re: Downloading Coherence 3.7
LSV — Oct 22, 2013 2:19 PM (in response to Leo_TA)

Thanks for the reply
https://community.oracle.com/message/11215495
XML Framework

Generic Framework for XML with two important and difficult goals:

- Read in XML to produce Java Objects (Optional Validation)
- Translate Java Objects into an XML Document

Left to its own devices, the Framework will give you something like xBeans. But as we will see, it gives you the opportunity to be much smarter, re-using your code automatically.

The diagram on the left is a graphical representation of two Geometry Objects. The diagram on the right is a portion of the XML representation of these Geometry Objects. If we had used the default xBeans implementation, we would have represented all the data. Instead, we included some extra, smarter algorithms and created JTS Java Objects. This provides us with all the optimized operations included in a JTS Object, such as checking for a Geometric Intersection between the Line and the Polygon instances.

In the example to the left, with the default xBean implementation, the coordinates would be unparsed and the ‘Location’ would not be a JTS Geometry. But since the schema for Building (included later) defines Location to be a Geometry, the default implementation would return a Bean with a ‘Geometry getLocation()’ method. This is all due to the framework's ability to automatically recognize and link in your additional functionality.

Summary: Thus far we have introduced the notion of combining your smart directives with the default xBean implementation, using XML schemas to produce real Java Objects. We have also seen that the framework will automatically reuse your hard work to parse XML representations of your object within other XML documents.

XML inheritance in the XML Framework

Let's take a look at the schema for our building example above. In the second and third lines we set up some XML inheritance. Inheritance is used by the XML framework to carry on doing smart operations. When you sub-class from the original parent instance, the framework will continue to attempt to complete your smart operations.
If we were to compare XML inheritance and Java inheritance, we would see some similarities. The inheritance trees are similar to interfaces in Java, except that you may only implement a single interface. In XML, the names of the interfaces are declared separately from the actual internals of the interface. Names are declared as abstract elements, and the internals of the interface are represented as an extensible complex type, which the abstract element refers to.

In the example above, the two interfaces declared in the GML schema are used: “_Feature” and “pointProperty”. In the first example, we see “_Feature” declared as the interface name. From examining the GML namespace we found that the associated complex type was “AbstractFeatureType”. In line two above, we set up a Java ‘cast’ operation with the substitutionGroup declaration, and extend the “AbstractFeatureType” interface with two additional fields. The interesting part of this example with respect to the XML framework is that when you implement a smart algorithm for “_Feature” and “AbstractFeatureType”, in most cases you should not need to implement any Java code!

Streaming in the XML Framework

When your application deals with a lot of data, it is often important not to be memory bound. This has typically been solved using a buffering algorithm. In my implementation I found it was simplest to implement an Iterator that sits over a buffer. The Iterator executes in the main program thread and spawns a secondary IO thread, which populates the buffer. Careful manipulation of both threads' execution patterns can result in an efficient and safe streaming parser.

The diagram included above depicts my solution to creating a streaming parser. The arrows show data flow with respect to the thread's activity status. Notice the consumer thread only accesses the data after allowing the producer thread to start population of the buffer, and that at no time do both threads execute in tandem.
Some words of warning:

- Be careful not to pause too long, as your IO connection may time out.
- If you choose to manage your own IO timeouts, be careful to avoid infinite loops waiting on stalled IO.
- When killing loops, remember both to interrupt the IO and to terminate the thread's execution.

For managing my own timeouts, I set a counter for the number of no-op yields in the producer thread, and reset the counter every time I produce an Object. For killing threads I use the Thread.interrupt() method, and a combination of a well-known exception (to catch in the consumer thread) and an internal state variable.
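The buffered-Iterator pattern described above (a consumer-side Iterator draining a bounded buffer that a producer IO thread fills) can be sketched in a few lines. The sketch below uses Python's threading and queue modules rather than the framework's actual Java code, and the class and sentinel names are illustrative:

```python
import threading
import queue

_EOF = object()  # sentinel marking end of stream

class StreamingIterator:
    """Consumer-side iterator over a bounded buffer filled by a producer thread."""

    def __init__(self, source, buffer_size=4):
        self._buffer = queue.Queue(maxsize=buffer_size)
        t = threading.Thread(target=self._produce, args=(source,), daemon=True)
        t.start()

    def _produce(self, source):
        for item in source:          # stands in for parsing from an IO stream
            self._buffer.put(item)   # blocks when the buffer is full
        self._buffer.put(_EOF)

    def __iter__(self):
        return self

    def __next__(self):
        item = self._buffer.get()    # blocks until the producer yields an item
        if item is _EOF:
            raise StopIteration
        return item

print(list(StreamingIterator(["a", "b", "c"])))
```

The bounded queue is what keeps the parser from being memory bound: the producer blocks when the consumer falls behind, which matches the alternating execution pattern shown in the diagram.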
http://docs.codehaus.org/pages/worddav/preview.action?fileName=xml_framework.doc&pageId=4577
sphinx_py3doc_enhanced_theme 2.4.0

A theme based on the theme of with some responsive enhancements.

- Free software: BSD license

Installation

pip install sphinx_py3doc_enhanced_theme

Add this in your documentation’s conf.py:

import sphinx_py3doc_enhanced_theme
html_theme = "sphinx_py3doc_enhanced_theme"
html_theme_path = [sphinx_py3doc_enhanced_theme.get_html_theme_path()]

Customization

No extra styling

This theme has some extra styling like different fonts, text shadows for headings, and slightly different styling for inline code and code blocks. To get the original styling the Python 3 docs have, add this in your conf.py:

html_theme_options = {
    'githuburl': '',
    'bodyfont': '"Lucida Grande",Arial,sans-serif',
    'headfont': '"Lucida Grande",Arial,sans-serif',
    'codefont': 'monospace,sans-serif',
    'linkcolor': '#0072AA',
    'visitedlinkcolor': '#6363bb',
    'extrastyling': False,
}
pygments_style = 'friendly'

Custom favicon

To have a custom favicon, create a theme directory near your conf.py and add this theme.conf in it:

[theme]
inherit = sphinx_py3doc_enhanced_theme

Then create a favicon.png in the static directory. And then edit your conf.py to have something like this:

import sphinx_py3doc_enhanced_theme
html_theme = "theme"
html_theme_path = [sphinx_py3doc_enhanced_theme.get_html_theme_path(), "."]

The final file structure should be like this:

docs
├── conf.py
└── theme
    ├── static
    │   └── favicon.png
    └── theme.conf

A bit of extra css

html_theme_options = {
    'appendcss': 'div.body code.descclassname { display: none }',
}

Examples

Changelog

2.3.2 (2015-12-24)
- Fixed regression in sidebar size when there was no page content. Sidebar has its own height again.

2.3.1 (2015-12-18)
- Fixed sidebar contents not moving while scrolling at all.

2.3.0 (2015-12-18)

2.2.4 (2015-10-23)
- Removed awkward bottom padding of paragraphs in table cells.
- Fix highlight of “p” anchors (that have id and got :target).
2.2.3 (2015-09-13)
- Fixed display of argument descriptions when there are multiple paragraphs. First paragraph shouldn’t be on a second line.

2.2.2 (2015-09-12)
- Fixed issues with highlighting a section (via anchor location hash). Previously code blocks would get an ugly bar on the left.

2.2.1 (2015-08-21)
- Fixed positioning of navigation sidebar when displayed in narrow mode (at the bottom). Previously it overlapped the footer.

2.2.0 (2015-08-19)
- Added the appendcss theme option for quick customization.
- Added the path setuptools entrypoint so html_theme_path doesn’t need to be set anymore in conf.py.

2.1.1 (2015-07-11)
- Remove background from reference links when extrastyling is off.

2.1.0 (2015-07-11)
- Added new theme option extrastyling which can be used to get the original Python 3 docs styling (green code blocks, gray inline code blocks, no text shadows etc.)
- The py.png favicon is renamed to favicon.png.
- Added some examples for customizing the styling or using a custom favicon.

2.0.2 (2015-07-08)
- Make inline code blocks bold.

2.0.1 (2015-03-25)
- Fix inclusion of default.css (now classic.css).

2.0.0 (2015-03-23)
- Use HTML5 doctype and force IE into Edge mode.
- Add an embedded flag that removes JS (for building CHM docs).
- Inherit correct theme (default renamed in Sphinx 1.3).

1.2.0 (2015-02-24)
- Fat-fingered another version. Should have been 1.0.1 … damn.

1.1.0 (2015-02-24)
- Match some markup changes in latest Sphinx.

1.0.0 (2015-02-13)
- Fix depth argument for toctree (contributed by Georg Brandl).

0.1.0 (2014-05-31)
- First release on PyPI.
- Author: Ionel Cristian Mărieș - License: BSD - Categories - Development Status :: 5 - Production/Stable - Intended Audience :: Developers - License :: OSI Approved :: BSD License - Operating System :: Microsoft :: Windows - Operating System :: POSIX - Operating System :: Unix - Programming Language :: Python - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Programming Language :: Python :: Implementation :: CPython - Programming Language :: Python :: Implementation :: PyPy - Topic :: Utilities - Package Index Owner: ionel - DOAP record: sphinx_py3doc_enhanced_theme-2.4.0.xml
https://pypi.python.org/pypi/sphinx_py3doc_enhanced_theme
Postgres, MySQL, SQLite3¶

Before the db_hook rake task is invoked, Solano CI will install a database.yml into the config directory of your application. It then sets the environment so that the database.yml will point to the right database instance each time the db_hook is run and whenever one of your tests is run.

A sample stanza from database.yml for the test environment is shown below; the same configuration is available in the production, staging, development, etc. environments.

test: &test
  adapter: <%= ENV['TDDIUM_DB_ADAPTER'] %>
  database: <%= ENV['TDDIUM_DB_NAME'] %>
  username: <%= ENV['TDDIUM_DB_USER'] %>
  password: <%= ENV['TDDIUM_DB_PASSWORD'] %>

When MySQL is configured, Solano CI will also export the TDDIUM_DB_MYSQL_SOCKET environment variable with the path to the MySQL Unix domain socket. For Postgres, TDDIUM_DB_PG_HOST and TDDIUM_DB_PG_PORT contain the host address and TCP port for the database server; similarly for TDDIUM_DB_MYSQL_HOST and TDDIUM_DB_MYSQL_PORT.

In some rare cases you may need to specify the database adapter Solano CI configures. For instance, if you have both the mysql and mysql2 gems in your Gemfile, there is no general way to determine which adapter to use. You can force the choice of adapter like so:

---
mysql:
  adapter: 'mysql2'

In some cases you may need to load Postgres contrib packages or language extensions; the canonical way to do this is via a custom database initialization Rake hook task. You can read more about this process on the Loading Postgres Extensions page. Similarly, if you need to load a raw SQL schema dump, you can do so via the custom initialization hook; you can read more about raw SQL dump support on the Setup Hooks page.

Note that sqlite3, if enabled, will be configured to place the databases in the top-level db directory in your repository.

Postgres Extension Guide¶

Postgres ships with a number of “contrib” packages that provide extension languages, user-defined types, triggers, etc.
By default, Solano CI does not install any of these extensions into your database cluster; however, you can easily do so in the database setup hook. You can also load your own extensions out of your git repository.

As a simple example, take the PostGIS extension, which must be manually loaded into the database. You can do so by defining a custom db_hook. Assuming that this is the only setup required, you can use the following snippet:

---
hooks:
  worker_setup: createdb $TDDIUM_DB_NAME; psql $TDDIUM_DB_NAME -c 'CREATE EXTENSION postgis;'

Note that the postgis version can be forced explicitly via solano.yml; the default is the most recent available for the version of Postgres that you have enabled.

Alternatively, you can define a Rake task suitable for use in a Rails application (also as a gist):

namespace :tddium do
  desc "load database extensions"
  task db_hook: :"db:create" do
    # Copyright (c) 2011, 2012, 2013, 2014, 2015, 2016 Solano Labs All Rights Reserved
    Kernel.system("psql #{ENV['TDDIUM_DB_NAME']} -c 'CREATE EXTENSION postgis;'")
    Rake::Task["tddium:default_db_hook"].invoke
  end
end

Installing Extensions with Ruby¶

For a standard extension such as hstore, the simplest approach is to define a tddium:db_hook that first invokes the default db_hook and then loads the extension. If your schema depends on the extension, it may be necessary to invoke the db:create task, load the extension, and then run migrations or load schema.rb. Some extensions are not captured by schema.rb, so you may need to invoke the db:structure:load task instead of our default, which tries db:schema:load.

A simple example for Postgres 8.4 that loads the hstore extension is shown below. If you are using Postgres 9.1 or later, be sure to use CREATE EXTENSION instead; you can read more about loading extensions in Postgres 9.1 here or adapt this gist:
# Copyright (c) 2011, 2012, 2013, 2014, 2015, 2016 Solano Labs All Rights Reserved
namespace :tddium do
  desc "load database extensions"
  task :db_hook do
    Rake::Task["tddium:default_db_hook"].invoke
    # There is not yet a way to determine Tddium PG version from environment
    contrib = '/usr/share/postgresql/8.4/contrib/hstore.sql'
    Kernel.system("psql #{ENV['TDDIUM_DB_NAME']} -f #{contrib}")
  end
end

The above example is available as a gist:

Multiple Relational Database Servers¶

Solano CI supports the use of multiple relational database servers by the same application at the same time. For instance, an application may wish to use both MySQL and PostgreSQL at the same time:

mysql:
  version: '5.5'
postgresql:
  version: '9.3'

To support this, Solano CI will stand up a copy of each database server, export configuration information to the environment, and write out a configuration file, config/tddium-database.yml. A snippet of this configuration file follows, showing both the format of the YAML configuration file and the environment variables exported by Solano CI:

sqlite:
  test: &test
    adapter: <%= ENV['TDDIUM_DB_SQLITE_ADAPTER'] %>
    database: <%= ENV['TDDIUM_DB_SQLITE_NAME'] %>
    username: <%= ENV['TDDIUM_DB_SQLITE_USER'] %>
    password: <%= ENV['TDDIUM_DB_SQLITE_PASSWORD'] %>
    <%= "socket: #{ENV['TDDIUM_DB_SQLITE_SOCKET']}" if ENV['TDDIUM_DB_SQLITE_SOCKET'] %>

The top-level keys are symbols naming the type of the database (sqlite, postgres, mysql). The next level of the nested hash are strings naming environments (test, development, etc.). The last is as in database.yml above.
http://docs.solanolabs.com/SettingupDatabasesandSearch/database-setup/rdbms/
Better White Space Control in Twig Templates

Whitespace control in Twig templates allows you to control the indentation and spacing of the generated contents (usually HTML code). Controlling white space is very important when using Twig to generate contents like YAML, text emails or any other format where the white spaces are significant.

In contrast, when generating HTML contents, most of the time you should ignore this feature, because the HTML contents are minified and compressed before being sent to the users, so trying to generate perfectly aligned HTML code is just a waste of time. However, there are some specific cases where whitespace can change how things are displayed. For example, when an <a> element contains white spaces after the link text and the link displays an underline, the whitespace is visible. That's why Twig provides multiple ways of controlling white spaces. In recent Twig versions, we've improved those features.

New whitespace trimming options¶

Contributed by Fabien Potencier in #2925.

Consider the following Twig snippet:

If the value of some_variable is 'Lorem Ipsum', the HTML generated when the if expression matches would be the following:

Twig only removes by default the first \n character after each Twig tag (the \n after the if and endif tags in the previous example). If you want to generate HTML code with better indentation, you can use the - character, which removes all white spaces (including newlines) from the left or right of the tag:

The output is now:

Starting from Twig 1.39 and 2.8.0, you have another option to control whitespace: the ~ character (which can be applied to {{, {% and {#). It's similar to -, with the only difference that ~ doesn't remove newlines:

The output now contains the newlines after/before the <li> tags, so the generated HTML is more similar to the original Twig code you wrote:

Added a spaceless filter¶

Contributed by Fabien Potencier in #2872.
In previous Twig versions, there was a tag called {% spaceless %} which transformed the given string content to remove the white spaces between HTML tags. However, in Twig, transforming some contents before displaying them is something done by filters. That's why, starting from Twig 1.38 and 2.7.3, the spaceless tag has been deprecated in favor of the spaceless filter, which works exactly the same.

However, this is commonly used with the alternative way of applying some filter to some HTML contents. In case you missed it, the apply tag was recently added to replace the filter tag.

In any case, even after these changes, it's still recommended not to use the spaceless filter too much. The removal of white spaces with this filter happens at runtime, so calling it repeatedly can hurt performance.

Fine-grained escaping on ternary expressions¶

Contributed by Fabien Potencier in #2934.

This new feature introduced in Twig 1.39 and 2.8 is not related to whitespace control, but it's an important new feature to consider in your templates. Consider the following example and the results rendered in Twig versions before 1.39 and 2.8:

The reason why this example worked that way in previous Twig versions is that in the first ternary statement, foo is marked as being safe and Twig does not escape static values. In the second ternary statement, even if foo is marked as safe, bar remains unsafe and so is the whole expression. The third ternary statement is marked as safe and the result is not escaped.

This behavior was confusing to lots of designers and developers. That's why, starting from Twig 1.39 and 2.8, the result of this example has changed as follows:

Before, the escaping strategy was the same for both sides of the ternary operator. Now, in Twig 1.39 and 2.8, Twig applies a special code path for the ternary operator that is able to have different escaping strategies for the two sides.
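The post's original code samples were embedded as images and did not survive extraction. As an illustrative stand-in (not the post's exact markup), the three whitespace-control variants described above look like this:

```twig
{# default: only the first newline after each tag is removed #}
{% if some_variable %}
    <li>{{ some_variable }}</li>
{% endif %}

{# '-' removes all whitespace, including newlines, on that side of the tag #}
{%- if some_variable -%}
    <li>{{- some_variable -}}</li>
{%- endif -%}

{# '~' (Twig 1.39 / 2.8.0 and later) removes spaces and tabs but keeps newlines #}
{%~ if some_variable ~%}
    <li>{{~ some_variable ~}}</li>
{%~ endif ~%}
```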
https://symfony.com/blog/better-white-space-control-in-twig-templates?utm_source=Symfony%20Blog%20Feed&utm_medium=feed
Originally posted by Phil Perkins:
Can you overload a static method?

Originally posted by mark stone:

public class A {
    static void mm() { System.out.println("mm"); }
    static void mm(int a) { System.out.println("mm"); }
}

Method mm() is static and we overloaded it. Where is the problem? I am not sure if this is the question you were asking. Methods can be overloaded, period (static or non-static).

Or are you asking about overriding? Then also it is fine:

class B extends A {
    static void mm() { System.out.println("mm-overriding"); }
}

Method mm() is overridden in class B and is ok. So again methods can be overridden (static or non-static).

Originally posted by Vanitha Sugumaran:
Static methods don't participate in overriding. In the example you have shown, the method mm() is not being overridden; if you read the JLS part that Ajith has pointed out, you will understand it.
Vanitha.
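Vanitha's point, that a static method in a subclass hides the superclass method rather than overriding it, shows up as soon as a call is made through a superclass reference. The class names below are illustrative, not from the thread:

```java
class A {
    static void mm() { System.out.println("A.mm"); }
}

class B extends A {
    // This HIDES A.mm(); it does not override it.
    static void mm() { System.out.println("B.mm"); }
}

public class Main {
    public static void main(String[] args) {
        A a = new B();
        a.mm(); // static calls bind to the reference type, so this prints "A.mm"
        B.mm(); // prints "B.mm"
    }
}
```

With an instance method, `a.mm()` would dispatch to B's version; with a static method it resolves at compile time against the declared type A, which is exactly why static methods are said not to participate in overriding.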
http://www.coderanch.com/t/235923/java-programmer-SCJP/certification/overloading-static-methods
multiprocessing

Note: Some of this package's functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See issue 3770 for additional information.

For example, the identities of the processes involved can be shown with the documentation's info() helper:

from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    if hasattr(os, 'getppid'):  # only available on Unix
        print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()

Depending on the platform, multiprocessing supports three ways to start a process. These start methods are:

spawn
  The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object's run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver. Available on Unix and Windows. The default on Windows.

fork
  The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Available on Unix only. The default on Unix.

forkserver
  When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded, so it is safe for it to use os.fork(). No unnecessary resources are inherited. Available on Unix platforms which support passing file descriptors over Unix pipes.

multiprocessing supports two types of communication channel between processes:

Queues
  The Queue class is a near clone of queue.Queue. Queues are thread and process safe.

Pipes
  The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way).

multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time:

from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print('hello world', i)
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable to get all mixed up.

Note that the methods of a pool should only ever be used by the process which created it.

The multiprocessing package mostly replicates the API of the threading module.

class multiprocessing.Process(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
  Process objects represent activity that is run in a separate process. group should always be None; it exists solely for compatibility with threading.Thread. target is the callable object to be invoked by the run() method; it defaults to None, meaning nothing is called. name is the process name (see below for more details). args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. If provided, the keyword-only daemon argument sets the process daemon flag to True or False.

start()
  Start the process's activity. This must be called at most once per process object.
It arranges for the object's run() method to be invoked in a separate process.

name
  The process's name. If no explicit name is provided to the constructor, a name of the form 'Process-N1:N2:...:Nk' is constructed, where each Nk is the N-th child of its parent.

is_alive()
  Return whether the process is alive. Roughly, a process object is alive from the moment the start() method returns until the child process terminates.

In addition to the threading.Thread API, Process objects also support the following attributes and methods:

pid
  Return the process ID. Before the process is spawned, this will be None.

exitcode
  The child's exit code. This will be None if the process has not yet terminated. A negative value -N indicates that the child was terminated by signal N.

sentinel
  A numeric handle of a system object which will become "ready" when the process ends. On Windows, this is an OS handle usable with the WaitForSingleObject and WaitForMultipleObjects family of API calls. On Unix, this is a file descriptor usable with primitives from the select module. New in version 3.3.

terminate()
  Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used.

Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.

Example usage of some of the methods of Process:

>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print(p, p.is_alive())
<Process(Process-1, initial)> False
>>> p.start()
>>> print(p, p.is_alive())
<Process(Process-1, started)> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print(p, p.is_alive())
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True

exception multiprocessing.ProcessError
  The base class of all multiprocessing exceptions.

exception multiprocessing.BufferTooShort
  Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read. If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.

exception multiprocessing.AuthenticationError
  Raised when there is an authentication error.

exception multiprocessing.TimeoutError
  Raised by methods with a timeout when the timeout expires.

multiprocessing.Pipe([duplex])
  Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe. If duplex is True (the default) then the pipe is bidirectional.
If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.

class multiprocessing.Queue([maxsize])
  Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe. The usual queue.Empty and queue.Full exceptions from the standard library's queue module are raised to signal timeouts. Queue implements all the methods of queue.Queue except for task_done() and join().

qsize()
  Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable. Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.

empty()
  Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.

full()
  Return True if the queue is full, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.

put(obj[, block[, timeout]])
  Put obj into the queue. If the optional argument block is False, put an item on the queue if a free slot is immediately available, else raise the queue.Full exception (timeout is ignored in that case).

put_nowait(obj)
  Equivalent to put(obj, False).

get([block[, timeout]])
  Remove and return an item from the queue. If the optional argument block is False, return an item if one is immediately available, else raise the queue.Empty exception (timeout is ignored in that case).

get_nowait()
  Equivalent to get(False).

multiprocessing.Queue has a few additional methods not found in queue.Queue:

cancel_join_thread()
  Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits; see join_thread(). A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don't care about lost data.

class multiprocessing.SimpleQueue
  It is a simplified Queue type, very close to a locked Pipe.

empty()
  Return True if the queue is empty, False otherwise.

get()
  Remove and return an item from the queue.

put(item)
  Put item into the queue.
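A minimal sketch of passing data through a Queue between two processes, mirroring the standard usage pattern:

```python
from multiprocessing import Process, Queue

def f(q):
    # runs in the child process; the queue proxy is inherited from the parent
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()
```

Because Queue is both thread and process safe, the parent can block on q.get() without any extra synchronization.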
class multiprocessing.JoinableQueue([maxsize])
  JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.

multiprocessing.active_children()
  Return list of all live children of the current process. Calling this has the side effect of "joining" any processes which have already finished.

multiprocessing.cpu_count()
  Return the number of CPUs in the system. May raise NotImplementedError. See also os.cpu_count().

multiprocessing.current_process()
  Return the Process object corresponding to the current process. An analogue of threading.current_thread().

multiprocessing.freeze_support()
  Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. If the module is being run normally by the Python interpreter then freeze_support() has no effect.

multiprocessing.get_start_method()
  Return the name of the start method used for starting processes. The return value can be 'fork', 'spawn', 'forkserver' or None. 'fork' is the default on Unix, while 'spawn' is the default on Windows. New in version 3.4.

multiprocessing.set_executable()
  Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used). Embedders will probably need to do something like set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe')) before they can create child processes. Changed in version 3.4: Now supported on Unix when the 'spawn' start method is used.

Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets. Connection objects are usually created using Pipe(); see also Listeners and Clients.

send(obj)
  Send an object to the other end of the connection which should be read using recv(). The object must be picklable. Very large pickles (approximately 32 MB+, though it depends on the OS) may raise a ValueError exception.

recv()
  Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.

fileno()
  Return the file descriptor or handle used by the connection.

recv_bytes_into(buffer[, offset])
  Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive.
Raises EOFError if there is nothing left to receive and the other end was closed. If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.

Changed in version 3.3: Connection objects themselves can now be transferred between processes using Connection.send() and Connection.recv().

New in version 3.3: Connection objects now support the context manager protocol – see Context Manager Types. __enter__() returns the connection object, and __exit__() calls close().

Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the threading module. Note that one can also create synchronization primitives by using a manager object; see Managers.

class multiprocessing.Barrier(parties[, action[, timeout]])
  A barrier object: a clone of threading.Barrier. New in version 3.3.

class multiprocessing.BoundedSemaphore([value])
  A bounded semaphore object: a clone of threading.BoundedSemaphore. (On Mac OS X, this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform).

class multiprocessing.Condition([lock])
  A condition variable: an alias for threading.Condition. If lock is specified then it should be a Lock or RLock object from multiprocessing. Changed in version 3.3: The wait_for() method was added.

class multiprocessing.Event
  A clone of threading.Event.

class multiprocessing.Lock
  A non-recursive lock object: a clone of threading.Lock.

class multiprocessing.RLock
  A recursive lock object: a clone of threading.RLock.

class multiprocessing.Semaphore([value])
  A semaphore object: a clone of threading.Semaphore.

Note: The acquire() and wait() methods of each of these types treat negative timeouts as zero timeouts. This differs from threading where, since version 3.2, the equivalent acquire() methods treat negative timeouts as infinite timeouts.

class multiprocessing.managers.BaseManager([address[, authkey]])
  Create a BaseManager object. Once created one should call start() or get_server().serve_forever() to ensure that the manager object refers to a started manager process. address is the address on which the manager process listens for new connections. If address is None then an arbitrary one is chosen.
authkey is the authentication key which will be used to check the validity of incoming connections to the server process. If authkey is None then current_process().authkey is used. Otherwise authkey is used and it must be a byte string. Start a subprocess to start the manager. If initializer is not None then the subprocess will call initializer(*initargs) when it starts. Connect a local manager object to a remote manager process: >>> from multiprocessing.managers import BaseManager >>> m = BaseManager(address=('127.0.0.1', 5000), authkey=b'abc') >>> m.connect() Stop the process used by the manager. This is only available if start() has been used to start the server process. This can be called multiple times. If a manager instance will be connected to the server using the connect() method, or if the create_method argument is False, then this can be left as None. proxytype is a subclass of BaseProxy which is used to create proxies for shared objects with this typeid. If None then a proxy class is created automatically. exposed is used to specify a sequence of method names which proxies for this typeid should be allowed to access using BaseProxy._callmethod(). (If exposed is None then proxytype._exposed_ is used instead if it exists.) method_to_typeid is a mapping used to specify the return type of those exposed methods which should return a proxy. (If method_to_typeid is None then proxytype._method_to_typeid_ is used instead if it exists.) If a method’s name is not a key of this mapping or if the mapping is None then the object returned by the method will be copied by value. BaseManager instances also have one read-only property: The address used by the manager. Changed in version 3.3: Manager objects support the context manager protocol – see Context Manager Types. __enter__() starts the server process (if it has not already started) and then returns the manager object. __exit__() calls shutdown(). In previous versions __enter__() did not start the manager’s server process if it was not already started. A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager(). It also supports creation of shared lists and dictionaries. Create a shared threading.Barrier object and return a proxy for it. New in version 3.3.
Create a shared threading.BoundedSemaphore object and return a proxy for it. Create a shared threading.Condition object and return a proxy for it. If lock is supplied then it should be a proxy for a threading.Lock or threading.RLock object. Changed in version 3.3: The wait_for() method was added. Create a shared threading.Event object and return a proxy for it. Create a shared threading.Lock object and return a proxy for it. Create a shared Namespace object and return a proxy for it. Create a shared queue.Queue object and return a proxy for it. Create a shared threading.RLock object and return a proxy for it. Create a shared threading.Semaphore object and return a proxy for it. Create an array and return a proxy for it. Create an object with a writable value attribute and return a proxy for it. Create a shared dict object and return a proxy for it. Create a shared list object and return a proxy for it. Note Modifications to mutable values or items in dict and list proxies will not be propagated through the manager, because the proxy has no way of knowing when its values or items are modified. To modify such an item, you can re-assign the modified object to the container proxy; by reassigning the dictionary, the proxy is notified of the change: lproxy[0] = d. A proxy can usually be used in most of the same ways that its referent can. Note, however, that if a proxy is sent to the corresponding manager’s process then unpickling it will produce the referent itself. This means, for example, that one shared object can contain a second: >>> a = manager.list() >>> b = manager.list() >>> a.append(b) # referent of a now contains referent of b >>> print(a, b) [[]] [] >>> b.append('hello') >>> print(a, b) [['hello']] ['hello'] Note The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance, we have: >>> manager.list([1,2,3]) == [1,2,3] False One should just use a copy of the referent instead when making comparisons.
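The comparison caveat above is easy to demonstrate directly. A minimal sketch (standard library only; the helper function name is mine, not from the docs):

```python
from multiprocessing import Manager

def compare_proxy():
    """Show that manager proxies never compare equal by value."""
    with Manager() as manager:
        proxy = manager.list([1, 2, 3])
        # The proxy object itself does not support comparison by value,
        # so this falls back to identity comparison and is False.
        direct = (proxy == [1, 2, 3])
        # Comparing a copy of the referent works as expected.
        by_copy = (list(proxy) == [1, 2, 3])
        return direct, by_copy

if __name__ == '__main__':
    print(compare_proxy())
```

The `with Manager()` block also illustrates the context-manager protocol mentioned earlier: the server process is started on entry and shut down on exit.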
Proxy objects are instances of subclasses of BaseProxy. Call and return the result of a method of the proxy’s referent. If proxy is a proxy whose referent is obj then proxy._callmethod(methodname, args, kwds) will evaluate getattr(obj, methodname)(*args, **kwds) in the manager’s process; if some other exception is raised in the manager’s process then this is converted into a RemoteError exception and is raised by _callmethod(). Note in particular that an exception will be raised if methodname has not been exposed. An example of the usage of _callmethod(): >>> l = manager.list(range(10)) >>> l._callmethod('__len__') 10 >>> l._callmethod('__getslice__', (2, 7)) # equiv to `l[2:7]` [2, 3, 4, 5, 6] >>> l._callmethod('__getitem__', (20,)) # equiv to `l[20]` Traceback (most recent call last): ... IndexError: list index out of range Return a copy of the referent. If the referent is unpicklable then this will raise an exception. Return a representation of the proxy object. Return the representation of the referent. One can create a pool of processes which will carry out tasks submitted to it with the Pool class. A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation. processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used. If initializer is not None then each worker process will call initializer(*initargs) when it starts. New in version 3.2: maxtasksperchild New in version 3.4: context Note Worker processes within a Pool typically live for the complete duration of the pool’s work queue; the maxtasksperchild argument to Pool exposes this ability to the end user. Call func with arguments args and keyword arguments kwds. It blocks until the result is ready. Given this blocks, apply_async() is better suited for performing work in parallel. Additionally, func is only executed in one of the workers of the pool. A variant of the apply() method which returns a result object. A parallel equivalent of the map() built-in function (it supports only one iterable argument though). It blocks until the result is ready. This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer.
A lazier variant of map(). If chunksize is 1 then the next() method of the iterator returned by the imap() method has an optional timeout parameter: next(timeout) will raise multiprocessing.TimeoutError if the result cannot be returned within timeout seconds. The same as imap() except that the ordering of the results from the returned iterator should be considered arbitrary. (Only when there is only one worker process is the order guaranteed to be “correct”.) Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments. Hence an iterable of [(1,2), (3, 4)] results in [func(1,2), func(3,4)]. New in version 3.3. Prevents any more tasks from being submitted to the pool; once all the tasks have been completed the worker processes will exit. Stops the worker processes immediately without completing outstanding work. When the pool object is garbage collected terminate() will be called immediately. Wait for the worker processes to exit. One must call close() or terminate() before using join(). New in version 3.3: Pool objects now support the context manager protocol – see Context Manager Types. __enter__() returns the pool object, and __exit__() calls terminate(). The class of the result returned by Pool.apply_async() and Pool.map_async(). Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds then multiprocessing.TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get(). Wait until the result is available or until timeout seconds pass. Return whether the call has completed. Send a randomly generated message to the other end of the connection and wait for a reply. If the reply matches the digest of the message using authkey as the key then a welcome message is sent to the other end of the connection. Otherwise AuthenticationError is raised. If authenticate is True or authkey is a byte string then digest authentication is used. The key used for authentication will be either authkey or current_process().authkey if authkey is None.
If authentication fails then AuthenticationError is raised. See Authentication keys. If family is None then the family is inferred from the format of address. If address is also None then a default is chosen. This default is the family which is assumed to be the fastest available. See Address Formats. Note that if family is 'AF_UNIX' and address is None then the socket will be created in a private temporary directory created using tempfile.mkstemp(). If the listener object uses a socket then backlog (1 by default) is passed to the listen() method of the socket once it has been bound. If authenticate is True (False by default) or authkey is not None then digest authentication is used. If authkey is a byte string then it will be used as the authentication key. Listener objects have the following read-only properties: The address which is being used by the Listener object. The address from which the last accepted connection came. If this is unavailable then it is None. New in version 3.3: Listener objects now support the context manager protocol – see Context Manager Types. __enter__() returns the listener object, and __exit__() calls close(). An 'AF_INET' address is a tuple of the form (hostname, port) where hostname is a string and port is an integer. An 'AF_UNIX' address is a string representing a filename on the filesystem. Some support for logging is available. Note, however, that the logging package does not use process shared locks so it is possible (depending on the handler type) for messages from different processes to get mixed up. Returns the logger used by multiprocessing. If necessary, a new one will be created. When first created the logger has level logging.NOTSET and no default handler. Messages sent to this logger will not by default propagate to the root logger. Note that on Windows child processes will only inherit the level of the parent process’s logger – any other customization of the logger will not be inherited.
This function performs a call to get_logger() but in addition to returning the logger created by get_logger, it adds a handler which sends output to sys.stderr using format '[%(levelname)s/%(processName)s] %(message)s'. multiprocessing.dummy replicates the API of multiprocessing but is no more than a wrapper around the threading module. There are certain guidelines and idioms which should be adhered to when using multiprocessing. On Unix, when a process finishes but has not been joined it becomes a zombie; each time a new process starts (or active_children() is called) all completed processes which have not yet been joined will be joined, and calling a finished process's Process.is_alive will join the process. Even so it is probably good practice to explicitly join all the processes that you start. Better to inherit than pickle/unpickle: when using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. Avoid terminating processes: using the Process.terminate method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes. Therefore it is probably best to only consider using Process.terminate on processes which never use any shared resources. multiprocessing originally unconditionally called os.close(sys.stdin.fileno()) in the multiprocessing.Process._bootstrap() method; this resulted in issues with processes-in-processes. This has been changed to: sys.stdin.close() sys.stdin = open(os.devnull) (see issue 5155, issue 5313 and issue 5331). Rather than using an unpicklable callable as a target, just define a function and use that instead. Also, if you subclass Process then make sure that instances will be picklable when the Process.start method is called. Global variables: bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start was called.
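The guideline about explicitly passing resources to child processes, and the Connection behaviour described earlier, can be sketched together with Process and Pipe (standard library only; the payload and function names are illustrative):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The connection is passed to the child explicitly as an argument,
    # rather than relying on implicit inheritance of global state.
    conn.send(['hello', 42])   # any picklable object can be sent
    conn.close()

def run_pipe_demo():
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    msg = parent_conn.recv()   # blocks until the child sends
    p.join()                   # explicitly join, per the guidelines
    return msg

if __name__ == '__main__':
    print(run_pipe_demo())     # ['hello', 42]
```

The `if __name__ == '__main__':` guard matters on platforms using the spawn start method, where the main module is re-imported in each child.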
https://docs.python.org/dev/library/multiprocessing.html
It is very common in the Salt codebase to see opts referred to in a number of contexts. For example, it can be seen as __opts__ in certain cases, or simply as opts as an argument to a function in others. Simply put, this data structure is a dictionary of Salt's runtime configuration information that's passed around in order for functions to know how Salt is configured. When writing Python code to use specific parts of Salt, it may become necessary to initialize a copy of opts from scratch in order to have it available for a given function. To do so, use the utility functions available in salt.config. As an example, here is how one might generate and print an options dictionary for a minion instance: import salt.config opts = salt.config.minion_config('/etc/salt/minion') print(opts) To generate and display opts for a master, the process is similar: import salt.config opts = salt.config.master_config('/etc/salt/master') print(opts)
https://docs.saltstack.com/en/latest/ref/internals/opts.html
README Introduction CAZ (Create App Zen) is a simple template-based scaffolding tool for my personal productivity, inspired by Yeoman & Vue CLI 2 & etc. For more introduction, please refer to How it works. Features - Easy to use - Light-weight - Still powerful - High efficiency - Less dependencies - Template-based - Configurable - Extensible - TypeScript - Use modern API I'll give you specific reasons later. Table of Contents - Introduction - Getting Started - Recipes - Advanced - References - Motivation - About - Roadmap - Contributing - License Getting Started Prerequisites Installation # install it globally $ npm install -g caz # or yarn $ yarn global add caz Quick Start Create new project from a template. $ caz <template> [project] [-f|--force] [-o|--offline] # caz with an official template $ caz <template> [project] # caz with a github repo $ caz <owner>/<repo> [project] If you only use it occasionally, I recommend that you use npx to run caz directly. $ npx caz <template> [project] [-f|--force] [-o|--offline] Options -f, --force: Overwrite if the target exists -o, --offline: Try to use an offline template Recipes GitHub Repo Templates $ caz nm my-project The above command pulls the template from caz-templates/nm, then prompts for some information according to the configuration of this template, and generates the project at ./my-project. $ caz nm#typescript my-project By running this command, CAZ will pull the template from the typescript branch of caz-templates/nm. Use Custom Templates $ caz zce/nm my-project The above command pulls the template from zce/nm. This means that you can also pull templates from your public GitHub repository. A public repository is necessary. Local Templates Instead of a GitHub repo, you can also use a template on your local file system. e.g.
$ caz ~/local/template my-project The above command uses the template from ~/local/template. Remote ZIP Templates Instead of a GitHub repo, you can also use a template with a zip file uri. e.g. $ caz my-project The above command will download & extract template from. Offline Mode $ caz nm my-project --offline By running this command, CAZ will try to find a cached version of the nm template, or download it from GitHub if it's not yet cached. Prompts Override CAZ allows you to specify prompt response answers through cli parameters. $ caz minima my-project --name my-proj Debug Mode $ caz nm my-project --debug The --debug parameter will open debug mode. In debug mode, once an exception occurs, the exception details will be automatically output. This is very helpful in finding errors in the template. List Available Templates Show all available templates $ caz list [owner] [-j|--json] [-s|--short] Arguments [owner]: GitHub orgs or user slug, default: 'caz-templates' Options -j, --json: Output with json format -s, --short: Output with short format Official Templates Current available templates list: - template - for creating caz templates. - nm - for creating node modules. - react - for creating modern react app. - vue - for creating modern vue.js app. - vite - for creating vue.js app powered by vite. - electron - :construction: for creating electron app. - mp - :construction: for creating wechat mini-programs. - jekyll - :construction: for creating jekyll site. - x-pages - for creating x-pages static site. Maybe more: You can also run $ caz list to see all available official templates in real time. Advanced Create Your Template $ caz template my-template The above command will pull the template from caz-templates/template, and help you create your own CAZ template.
To create and distribute your own template, please refer to the How to create template. Maybe forking an official template is also a good decision. Configuration CAZ will read the configuration file in ~/.cazrc, default config: ; template download registry, ; {owner} & {name} & {branch} will eventually be replaced by the corresponding value. registry = {owner}/{name}/archive/{branch}.zip ; template official organization name official = caz-templates ; default template branch name branch = master This means that you can customize the configuration by modifying the configuration file. For example, in your ~/.cazrc: registry = {owner}/{name}/archive/{branch}.zip official = faker branch = main Then run the following command: $ caz nm my-project The above command will download & extract template from. Create Your Scaffold # install it locally $ npm install caz # or yarn $ yarn add caz with ESM and async/await: import caz from 'caz' ;(async () => { try { const template = 'nm' // project path (relative cwd or full path) const project = 'my-project' const options = { force: false, offline: false } // scaffolding by caz... await caz(template, project, options) // success created my-project by nm template } catch (e) { // error handling console.error(e) } })() or with CommonJS and Promise: const { default: caz } = require('caz') const template = 'nm' // project path (relative cwd or full path) const project = 'my-project' const options = { force: false, offline: false } // scaffolding by caz... caz(template, project, options) .then(() => { // success created my-project by nm template }) .catch(e => { // error handling console.error(e) }) This means that you can develop your own scaffolding module based on it. To create and distribute your own scaffolding tools, please refer to the How to create scaffolding tools based on CAZ. References caz(template, project?, options?)
Create new project from a template template - Type: string - Details: template name project - Type: string - Details: project name - Default: '.' options - Type: object - Details: options & prompts override - Default: {} force Type: boolean Details: overwrite if the target exists Default: false offline Type: boolean Details: try to use an offline template Default: false [key: string] Type: any Details: cli options to override prompts Motivation 👉 🛠 ⚙ Joking: I want to make wheels ;P The real reason is that I think I need a scaffolding tool that is more suitable for my personal productivity. Nothing else. About How It Works P.S. The picture is from the Internet, but I have forgotten the specific source, sorry to the author. Main Workflow The core code is based on the middleware mechanism provided by zce/mwa. The following middleware will be executed sequentially. - confirm - Confirm destination by prompts. - resolve - Resolve template from remote or local. - load - Load template config by require. - inquire - Inquire template prompts by prompts. - setup - Apply template setup hook. - prepare - Prepare all template files. - rename - Rename file if necessary. - render - Render file if template. - emit - Emit files to destination. - install - Execute npm | yarn | pnpm install command. - init - Execute git init && git add && git commit command. - complete - Apply template complete hook. Built With - adm-zip - A Javascript implementation of zip for nodejs. Allows user to create or extract zip files both in memory or to/from disk - cac - Simple yet powerful framework for building command-line apps. - env-paths - Get paths for storing things like data, config, cache, etc - fast-glob - It's a very fast and efficient glob library for Node.js - ini - An ini encoder/decoder for node - lodash - Lodash modular utilities.
- node-fetch - A light-weight module that brings Fetch API to node.js - ora - Elegant terminal spinner - prompts - Lightweight, beautiful and user-friendly prompts - semver - The semantic version parser used by npm. - validate-npm-package-name - Give me a string and I'll tell you if it's a valid npm package name Roadmap The following are the features I want to achieve or are under development: - config command - cache command - all lifecycle hooks - console output (colorful & verbose) - more and more official templates See the open issues for a list of proposed features (and known issues). Contributing - Fork it on GitHub! - Clone the fork to your own machine. - Create your feature branch: git checkout -b my-awesome-feature - Commit your changes to your own branch: git commit -am 'Add some feature' - Push your work back up to your fork: git push -u origin my-awesome-feature - Submit a Pull Request so that we can review your changes. NOTE: Be sure to merge the latest from "upstream" before making a pull request! License Distributed under the MIT License. See LICENSE for more information. © 汪磊
https://www.skypack.dev/view/caz
A secondary school in Melbourne, Australia, is using KDE on Kubuntu, with KDE's own Kiosk tool to lock down its library workstations. Implementing a kiosk mode Kubuntu setup allowed Westall Secondary School to save money, exert greater control over security measures, and extend the life of older and previously abandoned hardware without sacrificing performance. The school's IT support manager said the Kiosk admin tool was chosen as there was not enough flexibility with other desktops to allow "decent" lockdown. He was also surprised to discover that some applications ran faster on Linux than they did on Windows. excellent, and in other linux news: * The Brazilian Election Supreme Court migrates 430 thousand voting machines to GNU/Linux * Red Hat Chairman Sells $3.4M Shares of Stock (and how many shares did MS buy?) * Disgusting:... A site to keep bookmarked along with Groklaw to keep the truth in mind: Yeah, let's boycott the company responsible for hiring the highest number of KDE developers. A company that provides one of the best KDE desktops out there, with regular updates, maintaining three branches as we speak: 3.5, 4.0 and 4.1. A company that took an already excellent but somewhat closed distribution (SuSE Pro), and made it free and its development open to the community. That is really smart. 'A company that took an already excellent but somewhat closed distribution (SuSE Pro), and made it free and its development open to the community.' I am not in agreement with the original poster at all, albeit for other reasons than you. But if you are so quick to name this company's advantages, how about the strategic alliance with Microsoft and the patent clause deal? Do you defend that as well? By the way, I also think it is wrong to give any company too much power - we see it with the Linux kernel. Those that hire devs find it much easier to get changes into the kernel. Understandable - but unfair to "normal" users just as well.
No I don't defend that. I wish they hadn't done it. And I agree. But it's not the case, there are many other big players in the field, plus tons of small ones, plus a huge community. I think we'll be just fine ;) I still haven't read any true argument about who the deal actually hurts (except Red Hat) and why it is bad. > By the way, I also think it is wrong to give any company too much power - we > see it with the Linux kernel. Those that hire devs get it much easier to see > changes in the kernel. Understandable - but unfair to "normal" users just as > well. That is just totally untrue. Read the report in the link below, especially the section about "Who is Sponsoring the Work". If you wish to reward a convicted monopoly (Microsoft), fine, I do not. If Microsoft is so pro-Linux, why can't I find a free software repository for Linux on their site like Google has? When I search for Linux on Microsoft's site, why are there not mounds of pro-Novell pro-Linux articles and features? They (Microsoft) have made claims of interoperability but they have done nothing to bring a working non-trojaned non-crippled DirectX to Linux or provide Linux sales or downloads of ports for their commercial software. In 2000 Corel had some partnership with Microsoft and we heard many of the same claims we hear now with Novell and Microsoft, and Linux lost out then as it will now. Go to Corel's website and search for Linux, have fun! No Corel Linux (it became Xandros after the pact, and Xandros signed some patent agreement too so the circle is complete there), no tons of programs like WordPerfect for Linux. It appears to all have been abandoned. So we see where Linux and Microsoft's pacts go based on the Corel and other such agreements. Linux has lasted as long as it has because it wasn't in the control of one or two companies to be bought out, but through agreements with Microsoft, give it time and all of the castles will fall and the community will be scuttled. Just give it time.
To think otherwise is to ignore the history of Microsoft and its dealings with other companies and operating systems. The camel's nose is well under the tent. The clock is ticking, the patent wagons are being gathered to circle, we'll see how long the beast remains quiet. In the land of nothing for free, you can expect Microsoft to win any patent battle against Linux. Notice how IBM has been pushed out of the government recently even after a LOC deal with Microsoft. Microsoft's power is on the rise, we are yet to see the full monopoly in action, these are dark times as you buy your EEE (embrace, extend, extinguish) Xandros PCs (how much of the money for the purchase goes back to Microsoft?) and whatever else Microsoft-approved Linux. The shills are in full force; while Linux lands on desktops it's often under Microsoft-approved Linux like Linspire and Xandros, Novell and perhaps others. You can either bury your head in the sand and spew Simpsons quotes to laugh and remove yourself from the facts, or you can pay attention and participate somehow to make a difference. Read the history of Microsoft's actions; most of you would suck it up like you do with your Xboxes and Halo anyway were Linux to all fall under the Microsoft banner, with the few running to BSD until that became popular and was squashed too. "Microsoft's power is on the rise, we are yet to see the full monopoly in action..." Hah hah! Microsoft is definitely on the decline. They lost their browser monopoly. They lost the cheap server monopoly. Mac OS X and Linux are eroding Windows' desktop market share. The received wisdom is that it is impossible to compete with a monopoly, yet it was during the height of the Microsoft monopoly that the Open Source movement flowered and that Apple made a complete turnaround in fortune. Patents are indeed a serious problem, and the issue needs to be addressed. But it needs to be addressed directly, without using Microsoft as a proxy scapegoat.
Get rid of software patents and Microsoft ceases to be a threat. Get rid of Microsoft and the patent threat remains as strong as ever. "If Microsoft is so pro-Linux" Who said that Microsoft is pro-Linux? Novell and Microsoft are still competitors, they just agreed in a few areas. "In 2000 Corel had some partnership with Microsoft and we heard many of the same claims we hear now with Novell and Microsoft and Linux lost out then as it will now." And what's the problem with Corel not shipping Linux any more? They have the right to do so - of course it is better for us if they don't. But you want to boycott Novell right now, and not use Novell-developed Linux distros even when they ship them, so you don't need them, so what would be the problem if they stopped developing Linux distros? Anyway, it is not likely, as Novell makes much money with Linux and more every year. "Linux has lasted as long as it has because it wasn't in control of one or two companies" And it won't be, just because some developers, even ones who have a deal with MS, take part in its development. If the code is good, Linus will include it in Linux and it should be in Linux. If the code is bad, it simply won't go into the official Linux. "while Linux lands on desktops it's often under Microsoft-approved-Linux like Linspire and Xandros, Novell and perhaps others." Mostly Novell SLED or (K)Ubuntu, sometimes Red Hat, sometimes even Debian. (Xandros is on the EeePC but the big migrations usually go to these.) And this is because these are the most usable distros with the largest supporting community in the "user-friendly" corporate desktop category. "In the land of nothing for free, you can expect Microsoft to win any patent battle against Linux." If Novell, Xandros and Linspire have a deal with MS, and, considering the worst case, MS sues everybody who uses a Linux distro not purchased from a distributor with a deal with MS, you will be allowed to use SLED, Xandros, Linspire and nothing else.
If there weren't these deals, you couldn't use Linux at all. I wouldn't say that the first case is better, as the good in Linux is the community, but it is still not worse. -1 Knobhead Novell have been a major contributor to various OSS projects including Linux and KDE. A recent report states that Novell is the second largest contributor to the Linux kernel. Still I can't figure out why anyone should boycott Novell. If so then I think we may have to boycott KDE, Gnome, Linux etc. since they all are using code contributed by Novell or by developers paid by Novell. You make it sound as if this is a unique and only advantage. But there are two sides to this coin. If we look at only one side, we will not see the bigger picture. Do or do not, there is no try. > A site to keep bookmarked along with Groklaw to keep the truth in mind: > Please take your FUD somewhere else. Read: Here is another page for you; since you gave the Novell fact page, this just makes your argument complete ;-) The Novell FAQ obviously cannot be regarded as unbiased (btw what apokryphos linked is the openSUSE FAQ based on the Novell FAQ) but you can only say it is not an acceptable argument if you can prove that it is not true. So which point of that FAQ is false? Congrats to the school (& KDE!). It may only be a few computers, but success like this spreads as teachers talk to one another. Education is one area where even small savings can be important for buying materials, providing trips etc. for students. Even if the benefit of the computers themselves is small, the knock-on effect can be huge in the long run. I suspect teachers will also appreciate the self-reliant nature of using OSS away from outside interference and budget constraints: it'll allow them to get on with their jobs. Keep it up :) Way to go KDE! This is great news. BTW, I'm using KDE 4.0.3 and loving it, but I haven't found any info on the progress of Kiosk in KDE 4. Does anyone know about its status?
Kiosk will be dropped in favor of PolicyKit, I think. huh, what can PolicyKit do which is even remotely close to Kiosk? They have nothing in common... Kiosk restricts what the user can do in the GUI, PolicyKit is like a smarter sudo replacement. Yeah, makes me wonder too... especially as PolicyKit introduces namespaces to users, but it's an outside-KDE project and not that easy to set up (takes time). Oh well, hope this info is not correct. kiosk (not kio, which is about file access (ergo the "io")) will not be dropped in favour of policykit; the two are complementary but orthogonal concepts. kiosk lets one define defaults and lock down policies for settings, policykit allows one to allow access to and define processes that need to be run with auth privileges other than what the user may be logged in with. both allow user/group definitions and are rather flexible at runtime, but both are needed rather than one replacing the other. Kiosk is of course still working in KDE 4, just like it worked for KDE 3. But you should keep in mind that some keys changed: for example all kicker keys are not valid anymore, but that's clear because there is no kicker in KDE 4 :) However, this article mentions the GUI to Kiosk, kiosktool - and this might be a problem with KDE 4: at the moment the development of kiosktool has stopped. There is no maintainer who is willing (or better: who has time) to port the tool over to KDE 4. Is there something wrong with the RSS feed at the moment? The Dot RSS feed does not show the last few articles (the last one is "KDE and Wikimedia Collaborate"). Other RSS feeds on my PC are fine. I get the same here. Still doesn't contain the article bodies either. Pah :) IMHO, Linux machines in public places like schools, libraries, cafes, bars, etc. are a great (and free!) way to promote Linux to new users. Therefore, it would be important to continue the development of the Kiosk tool and also port it to KDE 4 as soon as possible.
Unfortunately, Kiosk tool isn't currently very well suited for this in "a promotional way": when you use the tool you'll end up hiding (instead of just locking) almost all the great features KDE has to offer. This will give you a DE that will appear boring and featureless and won't give a good impression of KDE (or "Linux") to potential new users. (Of course, whether the Kiosk tool hides or just locks features could also be optional.)

I forgot to mention what I mean by locking features instead of hiding: a feature (or option) is visible to the user, but when he/she tries to use the feature the system would give a message like "this feature has been disabled by the admin" or something like that.

How does it work? Does it show a lock icon in place of the greyed-out checkbox?

Well, here is one example: say we want to restrict the user's ability to change display settings. Instead of taking away all the possibilities to get to the settings dialog, the user would be presented with a greyed-out dialog along with a message that "The administrator has restricted access to these settings." (or something else along those lines). In this case the curious user might think something like "OK, I can understand why access is restricted here, but this 'Linux system' is pretty cool nonetheless. I should try it at home.". But if the settings are totally inaccessible, the user might think "Nah, there is nothing here. Linux sucks. No wonder nobody's using it except for this damn school of mine.".

Oh, and BTW, my example is pretty much in line with the standard behaviour of the KDE control panel. Just take a look at the settings that require root access...

How about trying out Skolelinux as well? :) Never been easier to set up servers, thin clients, laptops and half-thin clients! :) And of course, KDE is default ;) The whole idea behind Skolelinux is for someone non-technical to be able to install it all. I've never seen anything come so close as Skolelinux.
It just blows away the competition. And it makes me wonder why Canonical goes down the Edubuntu path. Feels like just such a waste of time. Skolelinux is based on Debian and is now sort of a part of Debian. Please check it out :)
https://dot.kde.org/comment/70917
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

> this is a (probably incomplete) list of the extensions currently polluting
> namespace std::
[snip]

Great. This sounds like a good idea: let's pick off the easy stuff first.

> As pointed out by Gaby, it would be quite stupid to move the extensions together
> with their helpers to separate files in namespace __gnu_cxx if then we have to
> use the helpers to implement the standard library.

Right.

> What about keeping the helpers (that is, from a practical point of view, those
> names beginning with one or two underscores) inside namespace std, and
> vice versa putting the extensions themselves into separate files in namespace
> __gnu_cxx?

Sounds good.

best,
benjamin
http://gcc.gnu.org/ml/libstdc++/2001-12/msg00459.html
Building CampaignHawk: File Structure and Packages (Part 2)

We went over the project scope and the wireframing process in the first part. So now that we have the basic idea of what the app should look like, we need to actually put that into code. My preferred method is to start by building out the user interface (UI), then connect all the internals. For a project like this where the UI is a big part of the project, I'll probably spend the majority of my time making the interface clean and user-friendly. And depending on how well-defined the project is, it might also make sense to write some end-to-end tests using Cucumber in advance. I know a lot of people like test-driven development (TDD), but I've always found it more of a hindrance. It's nothing personal.

Step 1: Directories and Files

First create the Meteor project:

$ meteor create campaign-hawk
$ cd campaign-hawk

Then set up the basic file structure. Meteor is not very opinionated on file structure, which gives us a lot of flexibility. I'm going to use the following file structure to get started:

├── client
│   ├── components
│   │   ├── App.jsx
│   │   ├── AppLoading.jsx
│   │   ├── AppNotFound.jsx
│   │   ├── Map.jsx
│   │   ├── Login.jsx
│   │   └── Settings.jsx
│   ├── router.jsx
│   ├── index.html
│   └── styles
│       └── styles.scss
├── lib
│   └── collections.js
├── server
│   └── server.jsx
└── scss.json

These can be added manually or with the following commands:

$ mkdir client client/components client/styles
$ mkdir lib
$ mkdir server
$ touch client/components/App.jsx
$ touch client/components/AppLoading.jsx
$ touch client/components/AppNotFound.jsx
$ touch client/components/Map.jsx
$ touch client/components/Login.jsx
$ touch client/components/Settings.jsx
$ touch client/router.jsx
$ touch client/index.html
$ touch client/styles/styles.scss
$ touch lib/collections.js
$ touch server/server.jsx
$ touch scss.json

Then trash the template files:

$ trash campaign-hawk.*

If you don't have trash, I recommend getting it.
You can add it with $ brew install trash. If you don't have Homebrew, you should get it already.

Step 2: Packages

I'm starting this just before Meteor 1.2 comes out, which is not ideal. That said, the release candidate was just released publicly, and I have faith that it's stable, so I'm going to start by upgrading to 1.2. If you're reading this in the future and 1.2 has already been released, please let me know so I can change this part of the process.

$ meteor update --release METEOR@1.2-rc.7

A few other packages we're going to need:

meteor add fourseven:scss
meteor add react
meteor add reactrouter:react-router

I'm also going to add the following to the scss.json file to enable auto-prefixing so we don't have to deal with those annoying vendor prefixes:

{
  "enableAutoprefixer": true,
  "autoprefixerOptions": {
    "browsers": ["> 5%"],
    "cascade": false
  }
}

That should work for now as far as packages go.

Step 3: Basic Template

Time to put together a few React components and get them to render properly. Within App.jsx:

App = React.createClass({
  render() {
    return (
      <div>
        {this.props.children}
      </div>
    )
  }
})

…Map.jsx:

Map = React.createClass({
  render() {
    return (
      <h1>This is where the map goes</h1>
    )
  }
})

…Login.jsx:

Login = React.createClass({
  render() {
    return (
      <h1>This is where the login goes</h1>
    )
  }
})

Now that the components are set up, it's time to set up the router within the router.jsx file:

const { Router, Route, Redirect } = ReactRouter;
const history = ReactRouter
  .history
  .useQueries(ReactRouter.history.createHistory)()

Meteor.startup(function() {
  let AppRoutes = (
    <Router history={history}>
      <Route component={App}>
        <Route path="/" component={Map} />
        <Route path="/login" component={Login} />
      </Route>
    </Router>
  )
  React.render(AppRoutes, document.body)
})

This gives access to the browser history and sets up a route to login. Access to login is not actually needed at this point, but it's easier to set up the router now rather than later.
At this point in the process, the app looks like the image below:

Next Steps

I need to decide on what mapping API I want to use. There's always Google Maps, but there are some better options out there. I've used Mapbox in the past and I've had a good experience with them. There's also CartoDB, which seems to put more focus on data, and since this is a very data-intensive map application, they might be a good bet. I'll make a decision on mapping APIs later on in the process.

I also need to decide on a database. I've become a big fan of Neo4j and graph databases in general. That said, I don't think the data I'm working with will be particularly relational, so sticking with Mongo might be easier. I'm going to stick with Mongo for the time being, and if my queries start to get too complex and relational for Mongo, then I'll switch over to Neo4j and the any-db package for Meteor.

Sam Corcos is the lead developer and co-founder of Sightline Maps, the most intuitive platform for 3D printing topographical maps, as well as LearnPhoenix.io, an advanced tutorial site for building scalable production apps with Phoenix and React.
https://medium.com/@SamCorcos/building-campaignhawk-with-meteor-and-react-part-2-d4551708dcde
Form handles pojo within a pojo

Sure it can, but you'll have to do some work to get it the way you want. The form cannot know how you want to display the Address reference: do you want it to be a select component where you select one of the available addresses, or do you want the address to render as a form itself? Either way, you'll have to create a FormFieldFactory. In the FormFieldFactory, you can specify which types of Fields are created for each of the properties. In your case, you'll most likely want to render another form for the address field. Forms implement the Field interface, so you can have the FormFieldFactory return a Form, thus creating nested forms. Check out the Book of Vaadin for more information about Forms and FormFieldFactories.

---

*Takes out his crystal ball* I think your next question will be how to have some fields in a form rendered horizontally side by side, for example the "city" and "zip" fields in your address pojo. The answer to this question can be found by searching the forums.

That's an awesome tip. I have built a Form using an overridden DefaultFieldFactory, but hadn't considered putting a Form inside a Form, so presumably the property "address" would return a Form that would then list the properties to be used in that sub-Form. How would you generally tie the two forms together, since often enough in a scenario like that you'd still only want the one set of buttons to Save/Cancel, etc.? Do you just not define any buttons on the sub-Form and keep the footer unspecified? Then a commit() or discard() on the main form would automatically do the same on the sub-Form? It seems reasonable, very cool, but hadn't thought of it as an option. Such a sub-Form would also make a good fit for the Drawer widget in contrib, or some sort of popup if the sub-Form was pretty busy.

David Wall: Then a commit() or discard() on the main form would automatically do the same on the sub-Form?
It seems reasonable, very cool, but hadn't thought of it as an option.

You'll only need one set of buttons, and as you suspected, a commit() or discard() will cascade to the inline form.

Nice one! Didn't know this was possible. I've got a similar problem: a pojo containing an attribute that is a set of pojos (i.e. document -> references (Set<Reference>)). Any ideas on how to implement a solution for that?

Hendri Thijs: nice one! didn't know this was possible. I've got a similar problem: a pojo containing an attribute that is a set of pojos (i.e. document -> references (Set<Reference>)). Any ideas on how to implement a solution for that?

Maybe return a table component in the custom FieldFactory?

I have a quite similar problem to Hendri's. I have a class with an attribute that is a list of another class. (I will add some code snippets.)

public class Person {
    private String name;
    private Address address;
    private List<Pet> petList = new LinkedList<Pet>();
}

public class Address {
    private String city;
    private String street;
}

public class Pet {
    private String name;
    private String kind;
}

Now I created a wizard. I set a person bean item as item data source and change the displayed item properties for each wizard page. I used a custom form field factory.

On the first page I can set the name -> works fine.
On the second page I set the address (the field is a new form with an address bean item) -> this works fine too.
But then on the third page I got into trouble. I want to show a combo box for the animal kinds.
And a text field for the name. Here is the code.

Form:

setVisibleItemProperties(Arrays.asList(new String[] {"petList"}));

Form field factory:

if ("petList".equals(propertyId)) {
    List<Pet> petList = ((BeanItem<Person>) item).getBean().getPetList();

    Form petParentForm = new Form();
    petParentForm.setWriteThrough(false);
    petParentForm.setInvalidCommitted(false);
    petParentForm.setImmediate(true);
    petParentForm.setFormFieldFactory(null);

    Form form = new Form();
    form.setWriteThrough(false);
    form.setInvalidCommitted(false);
    form.setImmediate(true);
    form.setFormFieldFactory(this);

    if (petList.size() == 0) {
        Pet pet = new Pet();
        petList.add(pet);
        form.setItemDataSource(new BeanItem<Pet>(pet),
                Arrays.asList(new String[] { "name", "kind" }));
        petParentForm.getLayout().addComponent(form);
    }
}

The form for the pet is displayed, but when I push the commit button the values from the form aren't committed. From my point of view I need the 3 forms because I want to add further pet forms when a user sets a pet (so he is not restricted to one pet).

Does someone see where the problem is? I can add other code snippets if someone needs them. Is the problem the petParentForm? Is it necessary to set an item data source for this form? I'm glad for any advice :-)

Greetings, Christian

PS: For displaying an existing person I use the form and change the form to read-only. What should I do with the inline forms? Is it possible to change all components to read-only?

What do you actually do when the list isn't empty? Also, what is this petParentForm? It is not very clear in the code you posted. Why do you create this form in the FormFieldFactory? Where do you add it? How do you commit the main form? Are you sure the beans are not updated?

Anyway, I had the same kind of problem and applied a different solution. I have some contacts who have multiple bank accounts. What I did is display a table of the bank accounts in another tab, and add controls for that specific table.
So I can add, remove or edit accounts easily. It has its own commit and discard controls, though I could link this to the main form commit.

Hi Fluffy, I removed the parts for the case the list isn't empty. And as I wrote, this is the part for the "create-a-person" wizard, so the list is definitely empty. The petParentForm is the field I returned for the property petList, and it is added to the wizard form. I want to make the form for the pets dynamic, so I need a container form where I can add other input forms for pets.

Let me describe a little example. When the user comes to this page he will see one pet-kind combo box, which is empty, and one text input for the name. If he selects a pet kind, another combo box and text field are added, so he can add another pet, and so on. This code I stripped because it has nothing to do with my problem.

The main form is committed by a regular commit (wizardForm.commit();). And I'm really sure that the beans aren't updated, but I will check this again. Thanks for the idea with the separate table. I will check if this works for me. If it is necessary I can post the whole code.

So you actually return the petParentForm in the createField function? I think the problem is that the subform petParentForm has no subform of pets; you just add the sub-subforms in the layout, so the petParentForm data source doesn't know it has subforms, so it doesn't commit them.

edit: maybe you could override the commit() function of the petParentForm to iterate through the subforms and commit() each one?

Fluffy Mr Sandals: So you actually return the petParentForm in the createField function?

Correct.

Fluffy Mr Sandals: I think the problem is that the subform petParentForm has no subform of pets, but you just add the sub-subforms in the layout, so the petParentForm data source doesn't know it has subforms, so it doesn't commit them.

Christian Carl: ?

I don't have enough experience to give you an enlightened reply unfortunately.
Normally you should handle that in the overridden createField method, but I don't see how you can do that dynamically in this specific case. Maybe a solution would be to create a custom pet table which includes logic to add a new bean when the last field is selected, and return the table in the createField method? Normally the commit function should call the commit on the table as well.

edit: tried it myself, and the table thing works. Commit is called on every property, including the table. If you want to control the new pets in a table, maybe you can have a generated column with a button which creates a new bean instance and inserts it in the table?

_table.getContainerDataSource().addItem(this._type.newInstance());
https://vaadin.com/forum/thread/118312/form-handles-pojo-within-a-pojo
#include <gtest/gtest.h>
#include "base/coroutine.hh"

Go to the source code of this file.

This test is using a bi-channel coroutine (accepting and yielding values) for testing a cooperative task. The caller and the coroutine have a string each; they are composing a new string by merging the strings together one character at a time. The result string is hence passed back and forth between the coroutine and the caller.

Definition at line 185 of file coroutine.test.cc.

References gem5::ArmISA::c, expected, and gem5::Coroutine< Arg, Ret >::get().

This test is supposed to test the returning interface of the Coroutine, proving how coroutines can be used as generators. The coroutine is computing the first #steps of the Fibonacci sequence and it is yielding back results one number at a time.

Definition at line 144 of file coroutine.test.cc.

References expected, gem5::Coroutine< Arg, Ret >::get(), and gem5::RiscvISA::sum.

This test is testing nested coroutines by using one inner and one outer coroutine. It basically ensures that yielding from the inner coroutine returns to the outer coroutine (mid-layer of execution) and not to the outer caller.

Definition at line 218 of file coroutine.test.cc.

References expected, and gem5::Coroutine< Arg, Ret >::get().

This test is checking the parameter-passing interface of a coroutine which takes an integer as an argument. Coroutine::operator() and CallerType::get() are the tested APIs.

Definition at line 85 of file coroutine.test.cc.

References expected, and gem5::X86ISA::val.

This test is checking the yielding interface of a coroutine which takes no argument and returns integers. Coroutine::get() and CallerType::operator() are the tested APIs.

Definition at line 115 of file coroutine.test.cc.

References expected, gem5::Coroutine< Arg, Ret >::get(), and gem5::output().

This test is stressing the scenario where two distinct fibers are calling the same coroutine.
First the test instantiates (and runs) a coroutine, then spawns another one and passes it a reference to the first coroutine. Once the new coroutine calls the first coroutine and the first coroutine yields, we expect execution flow to be yielded to the second caller (the second coroutine) and not the original caller (the test itself).

Definition at line 259 of file coroutine.test.cc.

This test is checking that the Coroutine, once it yields back to the caller, is still marked as not finished.

Definition at line 67 of file coroutine.test.cc.

This test is checking that the Coroutine, once it's created, doesn't start, since the second argument of the constructor (run_coroutine) is set to false.

Definition at line 49 of file coroutine.test.cc.

References gem5::Fiber::started().
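The generator use of a coroutine that the Fibonacci test describes has a direct analogue in Python generators. As a rough illustration of "the coroutine yields back results one number at a time" (this is not gem5 code, just the same control flow in Python):

```python
def fibonacci(steps):
    """Yield the first `steps` Fibonacci numbers one at a time, the way
    the coroutine in the test yields a value back to its caller."""
    a, b = 0, 1
    for _ in range(steps):
        yield a          # suspend here; the caller resumes us for the next value
        a, b = b, a + b

# The caller pulls one value per resume, as the test does with get().
first_eight = list(fibonacci(8))
print(first_eight)  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The suspend/resume at `yield` is what the bi-channel Coroutine generalizes: it can also accept a value from the caller on each resume.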
http://doxygen.gem5.org/release/current/coroutine_8test_8cc.html
# iobroker.mqtt-client

The on connect message is published to the on connect topic every time the client connects or reconnects to the server.

The last will message is published to the last will topic every time the client connects or reconnects to the server. The server will store this message and send it to its subscribers when the client disconnects.

Comma-separated list of topics that are not covered by existing states. Received messages are converted to states within the adapter's namespace (e.g. mqtt.0) and subscribed. You can remove topics after all states have been created.

When publishing, this will be prepended to all topics. Default is empty (no prefix).

When subscribing, this will be prepended to all topics. Default is empty (no prefix).

- **enable**: enables or disables the mqtt-client functionality for this state. Disabling will delete any mqtt-client settings from this state.
- **topic**: the topic this state is published to and subscribed from. Default: state-ID converted to a mqtt topic.

Publish:

- **enable**: state will be published
- **changes only**: state will only be published when its value changes
- **as object**: whole state will be published as object
- **qos**: see
- **retain**: see

Subscribe:

- **enable**: topic will be subscribed and state will be updated accordingly
- **changes only**: state will only be written when the value changed
- **as object**: messages will be interpreted as objects
- **qos**: see
- **ack**: on state updates the ack flag will be set accordingly
- **as object**, **changes only**: is always on for subscribe
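The default state-ID-to-topic conversion mentioned above can be pictured with a small illustrative function. This is my own sketch of the mapping, not the adapter's actual code: ioBroker state IDs are dot-separated, MQTT topics are slash-separated, and the configured publish prefix is prepended.

```python
def state_id_to_topic(state_id, prefix=""):
    """Convert an ioBroker state ID (dot-separated) to an MQTT topic
    (slash-separated), prepending an optional publish prefix."""
    topic = state_id.replace(".", "/")
    if prefix:
        topic = prefix.rstrip("/") + "/" + topic
    return topic

print(state_id_to_topic("mqtt-client.0.livingroom.temperature"))
# mqtt-client/0/livingroom/temperature
print(state_id_to_topic("hm-rpc.0.light.state", prefix="home"))
# home/hm-rpc/0/light/state
```

The same mapping runs in reverse for additional subscriptions: received topics become states inside the adapter's namespace.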
https://www.npmjs.com/package/iobroker.mqtt-client
Talking a good DevOps game: Google Container Engine deployments, Kubernetes Dashboards and Google Home

Tl;dr: This is a followup to my previous article (Let's talk Deployments with Google Home, CircleCI and Google Container Engine) and goes into more detail about:

- Organizing your GKE deployments.
- Creating custom Kubernetes Dashboards with the Kubernetes APIs.
- Using the Google Assistant to load dashboards

Background: In past articles, we've deployed applications (mostly Rails apps) to GKE and used different techniques to achieve Zero Downtime Deployments using Kubernetes. The techniques in this article can be used for applications built in other languages as well. We also saw a proof of concept where we could use the Google Assistant to kick off builds and deployments to GKE by integrating with RESTful APIs from CircleCI. In that article, I alluded to hooking the Assistant up with the Kubernetes API to show custom dashboards.

This article deals with the following in brief:

- What does a typical GKE deployment model look like?
- How to deploy your application to the various environments (QA, Staging, Production etc.)
- A quick look at the Kubernetes Web UI (Dashboard)
- How to use the Kubernetes APIs to implement custom dashboards
- Fun feature: How to show this dashboard or pretty much any other web page via voice command with the Google Assistant

Ok, let's address each of these points.

What does a typical GKE deployment model look like?

In the image below, you see a sample Rails application that does the following:

- Allows users to post dog pictures. This application is deployed in GKE.
- It also runs background processes (workers) that process these images, identify the dog's breed and other characteristics, and persist these details in the database.
- It uses CloudSQL to store data, StackDriver for logging/monitoring/alerting, Pub/Sub for async event processing and Storage buckets for storing the dog pictures.
- The application's users will access the website via HTTPS. Under the hood, you will see an Ingress (routing rules) that does TLS termination and a Service (load balancer) that sends the requests to the web application instances (Deployments). There are separate Deployments for the web application and for the workers (async processes). The Ingress and the Deployments will use Secrets (for environment variables) to access credentials needed to talk to third-party services etc. Since the app is also a good citizen, it will use the CloudSQL Proxy to open secure connections to your database.

Great! You have your first deployment in GKE. Now all you have to do is to replicate this across your multiple environments (QA, Staging, Production….) and you're done! :-)

How to deploy your application to the various environments (QA, Staging, Production etc.)

Time for the second diagram. Here's one way to organize it:

There is a lot of information in the above diagram. Key features: we have 3 environments now (QA, Staging or pre-production, and Production). Production has 2 types of deployments (blue and green). I'll explain more about "cyan" later. All the build/test/deploy processes are fully automated and orchestrated by CircleCI based on different triggers in the GitHub repository.

- Code merged to the develop branch gets deployed automatically to the cluster in the "my-qa" GCP project via CircleCI
- Code merged to the master branch gets deployed automatically to the cluster in the "my-staging" GCP project
- When you 'git tag' the master branch, a deployment happens automatically to a cluster within the same "my-prod" GCP project and gets deployed to a "cyan" colored deployment on the same cluster. It also gets deployed to a "non-active color" on the production server. Example: a deployment is done to the pods marked blue if the current "color" is green.
- The team can now internally test the features on "smoketest.mywebsite.com", which points to the cyan deployment.
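The color bookkeeping in the tagged-release flow above can be sketched with a tiny helper. This is my own illustration of the rule "deploy to cyan plus the inactive production color", not code from the actual pipeline:

```python
def deploy_targets(active_color):
    """Given the currently active production color, return where a tagged
    release should go: the internal smoketest deployment ("cyan") and the
    inactive production color."""
    inactive = {"blue": "green", "green": "blue"}[active_color]
    return ["cyan", inactive]

# If green currently serves production, a new tag goes to cyan and blue;
# after smoke testing, production is flipped to point at blue.
print(deploy_targets("green"))  # ['cyan', 'blue']
```

Keeping this rule in one place means the CI job never has to guess which deployment is safe to overwrite.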
At this point we have the new version of our application on the cyan deployment AND the non-active blue deployment. Green still serves the current version in production.

- Once everything looks good on "smoketest.mywebsite.com" you can flip a switch and point your production app (myapp.mywebsite.com) to the newly deployed app in the "blue colored" deployment above.
- To keep the data model and the data current on the database front (for Rails applications), we have a Kubernetes Pod (the "db-updater" in the diagram above) that is started during the deployment to run migrations and seeds, and the deployment will stop if it runs into any errors. With some creative querying, you can find out when the job has run and its output. This Pod will be deleted at the end of the process, but you can still see the logs in the CircleCI job that passed/failed.

Note: If the blue/green and now cyan(!) terminology is not making much sense, please see my article on this topic.

How to leverage the default Kubernetes Dashboard

Once we have the above deployments in place, then comes the part of ensuring that our deployments look good and there are no restarts etc. We need a way to access this information visually, in addition to watching StackDriver dashboards, setting alerts with the right thresholds, etc.

Tip: It is always good practice to look at the pods in production to see if there have been any new restarts, as these are trickier to catch via the usual monitoring and alerting approaches.

Users familiar with the kubectl command know that Kubernetes provides a very clean, user-friendly dashboard when you run the command below.
kubectl proxy

Making a case for additional features:

Sometimes, with distributed teams and with teams in different time zones, when there is a need for production pushes, the team member(s) requesting the production deployment might not necessarily be familiar with the deployment model or with the gcloud and kubectl tools. In such cases, it might be helpful to have a custom dashboard that is catered more specifically towards the applications that we own and which can be easily accessed and understood by the team. There are various ways to achieve this, and they are discussed in the next section.

How to use the Kubernetes APIs to create custom Dashboards

If and when there is a need to go beyond using the default Kubernetes Dashboard, we can look into creating our own dashboards by invoking the Kubernetes RESTful APIs. This can be achieved by:

- Using the standard client libraries provided by Kubernetes (in Go and Python)
- Using the community-supported ones (Node.js, Ruby etc.)
- Directly invoking the RESTful APIs through standard REST clients

The image below shows a simple dashboard that we can use to keep track of all our environments as well as to flip our deployments to the corresponding colors. Highlights below:

- See all our Kubernetes artifacts and their states (restart counts etc.). You can see our application is deployed to webserver-blue, webserver-green and webserver-cyan deployments. We also have async worker processes (web-worker, phoenix-worker)
- Deployment timestamps to see when each deployment occurred and the image that was used.
- The ability to switch to a different "colored" deployment with one click
- The ability to roll back the workers to a previous version (not shown in the diagram)
- Automated Slack notifications when a production color switch is done, with details about who did the change and when
- Chaos Monkey features (sending invalid data, testing connectivity breaks etc.) and more...
The HOW:

Option I: As mentioned earlier, you can use the official Kubernetes client libraries (Go/Python) and build a dashboard leveraging either these libraries or the community-supported ones. More details available here.

Option II: Create a simple Node.js Express app that hits the various Kubernetes endpoints on our master API servers and fetches the data. You would need to know the IP addresses of the API servers in your clusters beforehand in order to achieve this. With this approach, one app can serve the DevOps dashboard, as it is able to fetch deployment specifics from multiple clusters in different projects. The above image shows this approach.

When you run a kubectl command with the verbosity set to 8:

kubectl <your command> --v=8

you will get the IP address of the API server and the actual GET/PATCH call that is invoked. For example, if you run:

kubectl get deployment --v=8

you will see output that looks something like:

GET https://<API-SERVER-IP-ADDRESS>/apis/extensions/v1beta1/namespaces/default/deployments

Option III: Another option would be to spin up a container in the cluster that runs a 'kubectl proxy' locally and can be accessed by an application running in that container via the ….. endpoint. Note: this approach would be required for each individual cluster.
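The REST call surfaced by the kubectl trace above is all a custom dashboard really needs. Here is a small Python sketch of that idea; the helper names are my own, the endpoint path matches the trace, and a real dashboard would GET the URL with a bearer token (e.g. via the `requests` library) before handing the JSON to `summarize`:

```python
def deployments_url(api_server, namespace="default"):
    """Build the deployments endpoint shown by `kubectl --v=8`."""
    return ("https://{}/apis/extensions/v1beta1/namespaces/{}/deployments"
            .format(api_server, namespace))

def summarize(deployment_list):
    """Reduce a Kubernetes DeploymentList response to the fields a
    dashboard cares about: name, desired vs. available replicas."""
    rows = []
    for item in deployment_list.get("items", []):
        rows.append({
            "name": item["metadata"]["name"],
            "desired": item["spec"].get("replicas", 0),
            "available": item["status"].get("availableReplicas", 0),
        })
    return rows

# A trimmed-down sample of what the API returns for one deployment:
sample = {"items": [{"metadata": {"name": "webserver-blue"},
                     "spec": {"replicas": 3},
                     "status": {"availableReplicas": 3}}]}
print(summarize(sample))
```

Because the summarizing step is a pure function over the response JSON, the same code can fan out across several clusters' API servers and merge the rows into one dashboard.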
Fun feature: How to show this dashboard (or any other web page) via voice command with the Google Assistant

Here are a couple of ways to display your dashboard by voice command ("Show me the DevOps Dashboard"):

1. You can use Google Pub/Sub and a local lightweight client that acts as the Pub/Sub consumer to load the dashboard page (see my previous article on how to create the Entities, Intents and Fulfillment endpoints for the Google Assistant).

The Cloud Function snippet in Node.js would look like:

const PubSub = require('@google-cloud/pubsub');

// Instantiates a client
const pubsub = PubSub();

function publish (req, res) {
  console.log(`Publishing message to topic <YOUR TOPIC>`);
  const topic = pubsub.topic('<YOUR TOPIC>');
  const message = {
    data: {
      url: '<MY DASHBOARD URL>'
    }
  };
  // Publishes a message
  return topic.publish(message)
    .then(() => res.status(200).send('Message published.'))
    .catch((err) => {
      console.error(err);
      res.status(500).send(err);
      return Promise.reject(err);
    });
};

The local desktop client (Pub/Sub consumer) in Ruby would look like the example below (it runs continuously and simply opens a browser window with the URL obtained from the Pub/Sub message).

# Local Pub/Sub client (ruby)
require 'google/cloud/pubsub'
require 'json'

pubsub = Google::Cloud::Pubsub.new(
  project: '<MY GCP PROJECT>',
  keyfile: '<MY SERVICE ACCOUNT CREDENTIALS JSON FILE>'
)

loop do
  sub = pubsub.subscription '<MY SUBSCRIPTION NAME>'
  msgs = sub.pull
  sub.acknowledge msgs
  received_message = msgs[0] unless msgs.nil?
  next if received_message.nil?
  json_obj = JSON.parse(received_message.data)
  url = json_obj['data']['url']
  `open #{url}`
end

Note: Due to the asynchronous nature of this approach, be prepared for it to take about ~10 secs to load the dashboard.

2. Using a custom endpoint in a Node.js application with socket.io to send the user's requested URLs to the client browser in real time. You would need to use the socket.io server and client libraries to achieve this.
This will obviously be faster than the above approach.

Conclusion:

We've covered a lot of ground that describes in detail some of the deployment strategies that have helped us in the past. Hopefully, this article achieves what it set out to do: sharing different approaches to deploying our applications to GKE while taking it up a notch by using more modern ways of viewing dashboards with the Google Assistant.
https://medium.com/google-cloud/talking-a-good-devops-game-google-container-engine-deployments-kubernetes-dashboards-and-google-63a576b3e29b
CC-MAIN-2019-04
refinedweb
1,866
50.87
Daniel Lindsley, of django-haystack, django-tastypie, and itty fame, just launched a small but useful project. This extremely minimal Python module allows you to set a list of command strings to be executed by the system, and it eats through them in a process pool. Example usage:

```python
from littleworkers import Pool

# Define your commands.
commands = [
    'ls -al',
    'cd /tmp && mkdir foo',
    'date',
    'echo "Hello There."',
    'sleep 2 && echo "Done."'
]

# Setup a pool. Since I have two cores, I'll use two workers.
lil = Pool(workers=2)

# Run!
lil.run(commands)
```

Simple. Pythonic.
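If you want a feel for what such a pool does without installing the library, roughly the same behaviour can be sketched with the standard library alone (the names below are mine, not littleworkers' API):

```python
# Stdlib-only sketch of the idea: run shell commands concurrently,
# at most `workers` at a time, and collect the results.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_commands(commands, workers=2):
    def run(cmd):
        # Each command string runs in its own shell, as the
        # 'cd /tmp && mkdir foo' style of command above implies.
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves the order of the command list.
        return list(pool.map(run, commands))

results = run_commands(['echo one', 'echo two', 'echo three'])
print([r.stdout.strip() for r in results])  # ['one', 'two', 'three']
```

The real module forks OS processes rather than using threads, but the bounded-pool behaviour is the same idea.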
http://thechangelog.com/littleworkers-petite-python-command-runners/
Stephen Pelc is managing director of MicroProcessor Engineering Ltd. He can be contacted at Engineering is the art of making what you want from things you can get. --Jerry Avins There is increasing realisation that there are no magic bullets for software development. Interactive languages returned to fashion in the web developer world, which also saw a return to multi-language programming -- use each language for what it's good at. I'm going to look at one of these interactive languages, Forth, and show what people are doing with it, and why it's a powerful tool. Modern Forth systems are not like their ancestors. The introduction of the ANS/ISO Forth standard in 1994 separated implementation from the language. One result was a proliferation of native-code compiling systems that preserved the traditional Forth interactivity while providing the performance of batch-compiled languages. It is the combination of performance and interactivity that makes modern Forth systems so attractive. What is Forth? Forth is a member of the class of interactive extensible languages. Interactive means that you can type a command at any time and it will be executed. Extensible means that there's no difference between functions, procedures and subroutines (called "words" in Forth parlance) that you defined and the ones that the system provides. Forth is an untyped language whose primary data item is a cell. A cell is the size of an item on the stacks, normally 16, 32, or 64 bits. Underlying all languages, there's an execution model or virtual machine (VM). C and the other languages have one and so does Forth. Forth is rare among languages in exposing the VM to the programmer. The VM consists of a CPU, two stacks, and main memory; see Figure 1. The stacks are conceptually not part of main memory. When I refer to "the stack" I mean the data stack. The two stacks are called the data stack and the return stack. The data stack holds arguments to words, return values, and transient data. 
The return stack holds return addresses and can also be used for temporary storage. There are several consequences of having two stacks, the most important of which are:

- Return addresses do not get in the way of the data stack, so words can return any number of results.
- Items on the stack do not have names -- they are anonymous.

Because items on the stack are anonymous, you sometimes have to indulge in stack juggling, using words like DUP and SWAP to manipulate the stack. In the spirit of the marketing principle "advertise your worst feature", the anonymous stack leads, in part, to Forth's well-deserved reputation for small applications. Because stack juggling requires thought, you improve productivity by reducing thought. Many small words reduce the thought required by splitting the job into manageable chunks. A side effect of this is that you reuse code at a much finer grain in Forth culture than in most others. The ease of code reuse at this level leads to small applications. Factoring at this level means that access to many performance-critical routines is through small well-defined interfaces. A consequence is that you can make major changes easily. For example, an embedded system can move from EEPROM data storage to a file system very quickly. Forth is a very agile language.

For those who are unfamiliar with Forth, the Forth text interpreter is a little odd. It deals with whitespace-delimited tokens. These tokens are looked up in a dictionary. If found, they are either executed or compiled. If a token is not found, it is considered to be a number. If number conversion fails, an error occurs. That's all! A consequence of this is that a valid Forth word name can contain any non-whitespace characters. Because there's a lot of interactive testing, many commonly used words are short. The common strange-looking ones are @ (fetch), ! (store) and . (integer print).
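To make that concrete, here is the kind of interactive session the text describes (a sketch of standard Forth; the word name is my own):

```forth
\ Define a new word SQUARE; ":" starts the definition, ";" ends it.
\ DUP copies the top stack cell, * multiplies the top two cells.
: SQUARE  DUP * ;

3 SQUARE .   \ "." prints and removes the top of the stack: 9
```

Type it at the prompt and it runs immediately; put the same text in a file and it compiles. That is the interactivity the article is describing.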
The collection of word names is called the "dictionary," which can be split into vocabularies (namespaces). For more about the Forth interpreter, see JForth: Implementing Forth in Java by Craig A. Lindley. Some modern Forth implementations include: I don't have the space to teach you Forth, and I've referenced a few tutorials at the end of this article. I have also referenced a few modern Forths that can be freely downloaded.
http://www.drdobbs.com/open-source/modern-forth/210600604?pgno=1
Here's what nullable reference types look like in .NET:

```csharp
#nullable enable
public class KeywordData
{
    public int Id { get; set; }
    public string SomeNonNullValue { get; set; }
    public string? SomeNullableValue { get; set; }
}
#nullable restore
```

I personally love this new syntax and find the parallels it offers to TypeScript type definitions very appealing. If you'd like more information on getting started with these, take a look at my in-depth article on nullable reference types in C# 8.0, "Safer Code with C# 8 Non-Null Reference Types".

Take this common defensive null check:

```csharp
public void MyMethod(string value)
{
    if (value == null)
    {
        value = "Batman";
    }

    // Do something with value
}
```

We can simplify this code using the null coalescing assignment operator to the following:

```csharp
public void MyMethod(string value)
{
    value ??= "Batman";

    // Do something with value
}
```

C# 8 also lets you mark struct members as readonly:

```csharp
public int Foo { get; set; }

public readonly int CalculateFooPlusFortyTwo() => Foo + 42;
```

Here we define the CalculateFooPlusFortyTwo method as readonly, which prevents it from modifying any instance state at the compiler level. Let's say some future programmer tried to change the code to the following:

```csharp
public int Foo { get; set; }

public readonly int CalculateFooPlusFortyTwo()
{
    Foo += 42;

    return Foo;
}
```

That version would no longer compile, because a readonly member is not allowed to write to Foo.

Next, take a classic switch statement:

```csharp
switch (entry.Name)
{
    case "Bruce Wayne":
    case "Matt Eland":
        return Heroes.Batman;
    case "The Thing":
        if (entry.Source == "John Carpenter")
        {
            return Heroes.AntiHero;
        }
        return Heroes.ComicThing;
    case "Bruce Banner":
        return Heroes.TheHulk;
    // Many heroes omitted...
    default:
        return Heroes.NotAHero;
}
```

The same logic collapses into a switch expression:

```csharp
return entry.Name switch
{
    "Bruce Wayne" => Heroes.Batman,
    "Matt Eland" => Heroes.Batman,
    "The Thing" when entry.Source == "John Carpenter" => Heroes.AntiHero,
    "The Thing" => Heroes.ComicThing,
    "Bruce Banner" => Heroes.TheHulk,
    _ => Heroes.NotAHero
};
```

Switch expressions can also match on types and properties:

```csharp
VehicleBase myVehicle = GetVehicle();

return myVehicle switch
{
    Tank { Movement: TankType.Treads } => 100000M,
    Tank => 75000M,
    RocketShip => 99999999M,
    Car { Color: Colors.Red } => 21999M,
    Car => 20000M,
    _ => 1000M
};
```

Discussion (15)

Love this writeup, and I agree 100%. "Syntax sugar" on the surface appears to only benefit the creator of the software, but over time, once it becomes the status quo, it makes for less code and greater readability for the next person working on it. Remember: we don't write code for compilers/runtimes, we write code for humans. The more efficient, clean, and readable the code is, the less chance for mistakes to creep in. Nice article Matt.

That's a huge part of why I'm learning F#. I believe the language's conciseness improves software quality significantly by reducing potential points of failure and increasing the amount of meaningful code that can be focused on (in addition to the other quality benefits it offers via null handling, preferring immutable objects, etc).

Great article as usual! So have you tried using Entity Framework Core (EF Core) with the non-nullable references? I'm wondering if that improves things. For instance, in some ORMs a findById operation returns T, but in reality a findById operation always has the potential to not find the item. So the correct return type is T | null. It's just another good example of communicating intent.
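The TypeScript parallel the article alludes to can be sketched like this (my own illustration; the type and function names are mine, not from the article):

```typescript
// In TypeScript, the type system distinguishes "string" from
// "string | null", much like C# 8's "string" vs "string?".
interface KeywordData {
  id: number;
  someNonNullValue: string;          // like C# "string"
  someNullableValue: string | null;  // like C# "string?"
}

// TypeScript also has a null-coalescing assignment operator,
// with the same "??=" spelling as C# 8:
function myMethod(value: string | null): string {
  value ??= "Batman";
  return value;
}

console.log(myMethod(null)); // "Batman"
```

As in C#, the compiler then forces callers to handle the null case before treating the value as a plain string.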
generics and nullable won't play nicely

I have yet to experiment, but bear in mind that this nullable feature, when used on generics, requires you to express whether the generic type is either a reference type or a value type (aka where T : class or where T : struct). And since you can't combine both constraints - well... :) It leaves me wondering ;)

Interesting. I'd love to hear your findings after you experiment. If you write an article, please link it here. :)

Damn, this one hit me right when I needed it. Today at work I saw some VS suggestion to change how my switch worked and replaced it with the new syntax. I wrote a note to look it up later and found this post, which gave me much more than I was looking for. Great post.

Be sure to check out the link to the Microsoft What's New in C# 8 article, because it also includes tupled switch expressions, which are pretty cool too, but I didn't cover them in this article.

I love all the things introduced, and I agree, it promotes readability. However, Nullable has me scratching my head quite a bit. C# 8.0 introduces a way to deal with null, yet it also provides numerous escape roads. So I advise not to use it; I explained in the article why not to do it...

Whilst I admire the efforts that have gone into making the world more null-safe, I fear that all the escape roads given make this feature not only useless but dangerous and confusing. Crucially, this is where the feature fails massively in its goal to get rid of null reference exceptions. In fact, you could argue it makes it even worse, as it masks problems and gives you exit strategies (like goto).

Microsoft has a history of killing old technologies; these recent technologies were killed in .NET Core 3.0:
• ASP.NET Web Forms
• ASP.NET MVC
• WCF Server
• Windows Workflow
• AppDomains
• Code Access Security
(c-sharpcorner.com/article/future-o...
The whole community developed a lot of code with .NET Framework 4.x that is good and doesn't need to be upgraded to .NET Core x.x, so why do we need to stop using .NET Framework if C# is open source? The Visual Studio team must have some work to make C# 8.0 compatible. We don't want to throw our code in the trash of time. We will design in .NET Core 3.x and 5, but our current code must not be forgotten. Based on this, I today created a Facebook page to post protests against Microsoft and call on the community to demand that C# 8.0 work with .NET Framework 4.x. Share, like, comment: facebook.com/CSharp-8-to-Microsoft...

Great article, thanks. I do like the "nullable reference types" naming though. I think the implementation plays well with the existing "nullable value types" paradigm.

I'm oddly picky on names. I'm okay with the term when saying that string? is what's new, as that is a nullable ref type. But the reality is we've been living with nullable ref types since the betas. Just my own neuroses.

Thanks for the feedback!

Nice article Matt.

Thanks! This one blew up today on Medium and I'm not sure why. Always nice to have writing benefit others.

Thank you Matt. Your article is really great!

Readonly Members: this looks similar to the old C++ way of marking a method as const, where value is a member field.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/techelevator/how-c-8-helps-software-quality-4ln
Hi, I'm trying to transfer data between two ATmega328P (Arduino Uno boards). I have made a small program to understand how SPI works. I need it for a bigger project. Here is the code on slave and master:

Master:

```cpp
#include "Arduino.h"

void setup() {
  DDRB = (1 << 3) | (1 << 5) | (1 << 2); // Set MOSI, SCK and SS as Output
  DDRB &= ~(1 << 4);                     // Set MISO as INPUT
  PORTB &= ~(1 << 2);                    // SS Low

  // SPI, interrupt and master
  SPCR = (1 << SPE) | (1 << SPIE) | (1 << MSTR);

  char clr;
  clr = SPSR;
  clr = SPDR;

  SPDR = 40; // start
  Serial.begin(9600);
}

ISR(SPI_STC_vect) {
  uint8_t val;
  val = SPDR;
  Serial.print(val); // should display 250
  Serial.print("\n");
  SPDR = 12;
}

void loop() {
}
```

Slave:

```cpp
#include "Arduino.h"

// SLAVE
void setup() {
  DDRB = (1 << 4); // MISO as OUTPUT
  SPCR = (1 << SPE) | (1 << SPIE); // Enable SPI and interrupt
  Serial.begin(9600);
}

ISR(SPI_STC_vect) {
  uint8_t val;
  val = SPDR;
  Serial.print(val); // should display 12
  Serial.print("\n");
  SPDR = 250;
}

void loop() {
}
```

Sadly, both boards display "12". Am I doing something wrong? Thanks in advance.

My understanding of the SPI protocol is that the slave only responds to the master, hence SPDR won't contain 250 until after the master has transmitted 12.

Try sending a dummy byte from the SPI master after the first byte, to read out a byte from the slave. Tom

I don't see an sei() anywhere. Does Arduino need it? Second, I don't think that you are continuing to clock the SPI during the return time. The master HAS to provide the clock during all transfers, whether they originate from the master or from the slave. You don't put 250 into the slave data register until AFTER the data from the master is received. So, have the master send the data byte, then one more; the second only serves to transfer the slave's data back to the master. You CAN put 250 into the slave data register beforehand, and have it waiting there for action by the master.
Then, while the master is sending, the slave will also be sending, and you will have your double transfer with one 8-bit clock burst, and the second clock burst described in the preceding paragraph is not needed. Jim

Jim Wagner, Oregon Research Electronics, Consulting Div., Tangent, OR, USA

1. Only the master sends values.

2. The wire between SS pins is not needed when there is only one slave. Still, the master SS pin should be configured as output. Slave SS is configured as input and wired to GND.

3. The command "SPDR = byte" on the master side causes the SPDR registers in master and slave to exchange their values. The same command in the slave only writes a value to SPDR, with no further action.

Example: Suppose we want to send the number 3 to the slave and we want the slave to multiply the received number by 4 and return the result to the master. In the slave, the SPDR register is zero. In the master we write "SPDR = 3". The communication starts and the numbers in SPDR-slave and SPDR-master are exchanged. Now we have number 3 in the slave and 0 in the master. The master is not interested in this number, so it only waits a moment. In the slave an interrupt is fired. In the ISR we read the SPDR value, multiply it by 4 and write the result back to SPDR. The master sends a dummy byte, say 255. The registers again exchange. Then we have 255 in the slave and 12 (3x4) in the master. The master reads SPDR and prints the result.
https://www.avrfreaks.net/comment/2636006
I accidentally compiled these headers in a simple test program I was running to get a better understanding of strtok:

```cpp
#include <string.h>
#include <iostream>
#include <cstring>
```

And I received this warning:

warning: #include_next is a GCC extension.

I Googled and found out that it is a GCC extension (well, that is quite obvious from the warning I received), but what exactly does it mean or do? If I get rid of the string.h header (which is what I meant to do) or include it after iostream, this warning goes away! This leads me to another question: is there a case where one would need to include two similar header files like string.h and cstring?
http://cboard.cprogramming.com/cplusplus-programming/71121-what-sharpinclude_next-printable-thread.html
I have a Groovy custom listener in which I am trying to parse the history to see if a custom field (text type) has changed. The guts of the code I have looks like this:

```groovy
def chiList = chm.getAllChangeItems(issue).reverse()

for (def chi in chiList) {
    log.debug "Field is " + chi.getField() + " changed from " + chi.getFroms() + " to " + chi.getTos().toString()
}
```

What shows up in the log is:

Field is IP Version changed from [:] to {}
Field is IP Version changed from [:] to {}
Field is status changed from [1:Open] to {6=Closed}

I'm not seeing any values in the getFroms or getTos maps. Can someone help me? George

George, why aren't you using the getChangeItemsForField method? I am guessing this should return a list of ChangeItemBean for a particular custom field too, right? Or is it just for system fields?

```groovy
def historyItemList = changeHistoryManager.getChangeItemsForField(issue, "status")
```

What is getFroms and getTos? Are you thinking of getFromString()? Can you point to the javadoc for these methods...?

changeHistoryManager.getAllChangeItems(Issue issue) returns a list of ChangeHistoryItem. ChangeHistoryItem has the methods getFroms() and getTos().

Oh yeah. So what is chm? If you keep updating the field, do you get more and more debug lines? Can you post your entire code?

I have figured out a different approach to my problem, but chm is below.

```groovy
ComponentManager cm = ComponentManager.getInstance()
def chm = cm.getChangeHistoryManager()
```
https://community.atlassian.com/t5/Marketplace-Apps-questions/Not-seeing-custom-field-values-in-getFroms/qaq-p/324568
United Kingdom (en)

Installation and Service Manual
Gas Fired Wall Mounted Condensing Combination Boiler
EcoBlue Advance Combi 24 - 28 - 33 - 40

These instructions include the Benchmark Commissioning Checklist and should be left with the user for safe keeping. They must be read in conjunction with the Flue Installation Guide.

Model Range

Baxi EcoBlue Advance 24 Combi ErPD - G.C.No 47-077-14
Baxi EcoBlue Advance 28 Combi ErPD - G.C.No 47-077-15
Baxi EcoBlue Advance 33 Combi ErPD - G.C.No 47-077-16
Baxi EcoBlue Advance 40 Combi ErPD - G.C.No 47-077-17

Building Regulations and the Benchmark Commissioning Checklist

Certification scheme for gas heating appliances: 0086. ISO 9001: FM 00866.

2017. WARNING: Any person who does any unauthorised act in relation to a copyright work may be liable to criminal prosecution and civil claims for damages.

You have just purchased one of our appliances and we thank you for the trust you have placed in our products. Please note that the product will provide good service for a longer period of time if it is regularly checked and maintained. Our customer support network is at your disposal at all times.
EcoBlue Advance Combi 7219715 - 03 (04/17)

Installer Notification Guidelines
LABC will record the data and will issue a certificate of compliance.

Contents

1 Introduction
1.1 General
1.2 Additional Documentation
1.3 Symbols Used
1.4 Abbreviations
1.5 Extent of Liabilities
1.5.1 Manufacturer's Liability
1.5.2 Installer's Responsibility
1.6 Homologations
1.6.1 CE Marking
1.6.2 Standards
2 Safety
2.1 General Safety Instructions
2.2 Recommendations
2.3 Specific Safety Instructions
2.3.1 Handling
3 Technical Specifications
3.1 Technical Data
3.2 Technical Parameters
3.3 Dimensions and Connections
3.4 Electrical Diagram
4 Description of the Product
4.1 General Description
4.2 Operating Principle
4.2.1 Central Heating Mode
4.2.2 Domestic Hot Water Mode
4.2.3 Boiler Frost Protection Mode
4.2.4 Pump Protection
4.3 Main Components
4.4 Control Panel Description
4.5 Standard Delivery
4.6 Accessories & Options
4.6.1 Optional Extras
5 Before Installation
5.1 Regulations
5.2 Installation Requirements
5.2.1 Gas Supply
5.2.2 Electrical Supply
5.2.3 Hard Water Areas
5.2.4 Bypass
5.2.5 System Control
5.2.6 Treatment of Water Circulating Systems
5.2.7 Showers
5.2.8 Expansion Vessel (CH only)
5.2.9 Safety Pressure Relief Valve
5.3 Choice of the Location
5.3.1 Location of the Appliance
5.3.2 Data Plate
5.3.3 Bath & Shower Rooms
5.3.4 Ventilation
5.3.5 Condensate Drain
5.3.6 Clearances
5.3.7 Flue/Chimney Location
5.3.8 Horizontal Flue/Chimney Systems
5.3.9 Flue/Chimney Lengths
5.3.10 Flue/Chimney Trim
5.3.11 Terminal Guard
5.3.12 Flue/Chimney Deflector
5.3.13 Flue/Chimney Accessories
5.4 Transport
5.5 Unpacking & Initial Preparation
5.5.1 Unpacking
5.5.2 Initial Preparation
5.5.3 Flushing
5.6 Connecting Diagrams
5.6.1 System Filling and Pressurising
5.6.2 Domestic Hot Water Circuit
6 Installation
6.1 General
6.2 Assembly
6.2.1 Fitting the Pressure Relief Discharge Pipe
6.2.2 Connecting the Condensate Drain
6.3 Preparation
6.3.1 Panel Removal
6.4 Air Supply / Flue Gas Connections
6.4.1 Connecting the Flue/Chimney
6.5 Electrical Connections
6.5.1 Electrical Connections of the Appliance
6.5.2 Connecting External Devices
6.6 Filling the Installation
6.7 External Controls
6.7.1 Installation of External Sensors
6.7.2 Optional Outdoor Sensor
7 Commissioning
7.1 General
7.2 Checklist before Commissioning
7.2.1 Preliminary Electrical Checks
7.2.2 Checks
7.3 Commissioning Procedure
7.3.1 De-Aeration Function
7.4 Gas Settings
7.4.1 Check Combustion - 'Chimney Sweep' Mode
7.5 Configuring the System
7.5.1 Check the Operational (Working) Gas Inlet Pressure & Gas Rate
7.6 Final Instructions
7.6.1 Handover
7.6.2 System Draining
8 Operation
8.1 General
8.2 To Start-Up
8.3 To Shutdown
8.4 Use of the Control Panel
8.5 Frost Protection
9 Settings
9.1 Parameters
10 Maintenance
10.1 General
10.2 Standard Inspection & Maintenance Operation
10.3 Specific Maintenance Operations - Changing Components
10.3.1 Spark Ignition & Flame Sensing Electrodes
10.3.2 Fan
10.3.3 Air / Gas Venturi
10.3.4 Burner
10.3.5 Insulation
10.3.6 Flue Sensor
10.3.7 Igniter
10.3.8 Heating Flow & Return Sensors
10.3.9 Safety Thermostat
10.3.10 DHW NTC Sensor
10.3.11 Pump - Head Only
10.3.12 Pump - Complete
10.3.13 Automatic Air Vent
10.3.14 Safety Pressure Relief Valve
10.3.15 Heating Pressure Gauge
10.3.16 Plate Heat Exchanger
10.3.17 Hydraulic Pressure Sensor
10.3.18 DHW Flow Regulator & Filter
10.3.19 DHW Flow Sensor ('Hall Effect' Sensor)
10.3.20 Diverter Valve Motor
10.3.21 Main P.C.B.
10.3.22 Boiler Control P.C.B.
10.3.23 Expansion Vessel
10.3.24 Gas Valve
10.3.25 Setting the Gas Valve (CO2 Check)
11 Troubleshooting
11.1 Error Codes
11.2 Fault Finding
12 Decommissioning Procedure
12.1 Decommissioning Procedure
13 Spare Parts
13.1 General
13.2 Spare Parts List
14 Notes
Benchmark Commissioning Checklist

1 Introduction

1.1 General

WARNING: Installation, repair and maintenance must only be carried out by a competent person.

This document is intended for use by competent persons. All Gas Safe registered engineers carry an ID card with their licence number and a photograph. You can check your engineer is registered by telephoning 0800 408 5500 or online at

This appliance must be installed in accordance with the manufacturer's instructions and the regulations in force. If the appliance is sold or transferred, or if the owner moves leaving the appliance behind, you should ensure that the manual is kept with the appliance for consultation by the new owner and their installer.

Read the instructions fully before installing or using the appliance. In GB, this must be carried out by a competent person as stated in the Gas Safety (Installation & Use) Regulations (as may be amended from time to time).

The appliance is designed as a boiler for use in residential domestic environments on a governed meter supply only. The selection of this boiler is entirely at the owner's risk. If the appliance is used for purposes other than or in excess of these specifications, the manufacturer will not accept any liability for resulting loss, damage or injury. The manufacturer will not accept any liability whatsoever for loss, damage or injury arising as a result of failure to observe the instructions for use, maintenance and installation of the appliance.
WARNING: Check the information on the data plate is compatible with local supply conditions.

1.2 Additional Documentation

These Installation & Service Instructions must be read in conjunction with the Flue Installation Guide supplied in the Literature Pack. Various timers, external controls, etc. are available as optional extras. Full details are contained in the relevant sales literature.

1.3 Symbols Used

In these instructions, various levels are employed to draw the user's attention to particular information. In so doing, we wish to safeguard the user's safety, prevent hazards and guarantee correct operation of the appliance. Each level is accompanied by a warning triangle.

DANGER: Risk of a dangerous situation causing serious physical injury.
WARNING: Risk of a dangerous situation causing slight physical injury.
CAUTION: Risk of material damage.
Signals important information.
Signals a referral to other instructions or other pages in the instructions.

1.4 Abbreviations

DHW: Domestic hot water
CH: Central heating
GB: Great Britain
IE: Ireland
BS: British Standard
HHIC: Heating and Hotwater Industry Council
Pn: Nominal output
Pnc: Condensing output
Qn: Nominal heat input
Qnw: Nominal domestic hot water heat input
Hs: Gross calorific value

1.5 Extent of Liabilities

1.5.1 Manufacturer's Liability

Our products are manufactured in compliance with the requirements of the various applicable European Directives. They are therefore delivered with CE marking and all relevant documentation. In the interest of customers, we are continuously endeavouring to make improvements in product quality. All the specifications stated in this document are therefore subject to change without notice.

The manufacturer will not accept any liability for loss, damage or injury arising as a result of:
- Failure to abide by the instructions on using the appliance.
- Failure to regularly maintain the appliance, or faulty or inadequate maintenance of the appliance.
- Failure to abide by the instructions on installing the appliance.

This company declares compliance with current and relevant requirements of legislation and guidance. Prior to commissioning, all systems must be thoroughly flushed and treated with inhibitor (see section 5.2.6). Failure to do so will invalidate the appliance warranty. Incorrect installation could invalidate the warranty and may lead to prosecution.

1.5.2 Installer's Responsibility

The installer is responsible for the installation and initial start up of the appliance. The installer must adhere to the following instructions:
- Read and follow the instructions given in the manuals provided with the appliance.
- Carry out installation in compliance with the prevailing legislation and standards.
- Ensure the system is flushed and inhibitor added.
- Install the flue/chimney system correctly, ensuring it is operational and complies with prevailing legislation and standards, regardless of the location of the boiler's installation.
- Only the installer should perform the initial start up and carry out any checks necessary.
- Explain the installation to the user.
- Complete the Benchmark Commissioning Checklist - this is a condition of the warranty!
- Warn the user of the obligation to check the appliance and maintain it in good working order.
- Give all the instruction manuals to the user.

1.6 Homologations

1.6.1 CE Marking

EC - Declaration of Conformity

Baxi Heating UK Limited, being the manufacturer / distributor within the European Economic Area of the following:

Baxi EcoBlue Advance 24 - 28 - 33 - 40 Combi ErPD

declare that the above is in conformity with the provisions of the Council Directives 2009/142/EC, 92/42/EEC, 2004/108/EC, 2006/95/EC, 2009/125/EC and 2010/30/EU.

For GB/IE only.
1.6.2 Standards

Codes of Practice - refer to the most recent version. In GB the following Codes of Practice apply:

Standard: Scope
BS 6891.
BS 4814: Specification for Expansion Vessels using an internal diaphragm, for sealed hot water systems.
IGE/UP/7/1998: Guide for gas installations in timber framed housing.

2 Safety

2.1 General Safety Instructions

DANGER: If you smell gas:
1. Turn off the gas supply at the meter
2. Open windows and doors in the hazardous area
3. Do not operate light switches
4. Do not operate any electrical equipment
5. Do not use a telephone in the hazardous area
6. Extinguish any naked flame and do not smoke
7. Warn any other occupants and vacate the premises
8. Telephone the National Gas Emergency Service on: 0800 111 999

2.2 Recommendations

WARNING: Installation, repair and maintenance must be carried out by a Gas Safe Registered Engineer (in accordance with prevailing local and national regulations). When working on the boiler, always disconnect the boiler from the mains and close the main gas inlet valve. After maintenance or repair work, check the installation to ensure that there are no leaks.

CAUTION: The boiler should be protected from frost. Only remove the casing for maintenance and repair operations. Replace the casing after maintenance and repair operations.

2.3 Specific Safety Instructions

2.3.1 Handling

General
• The following advice should be adhered to, from when first handling the boiler to the final stages of installation, and also during maintenance.
• Most injuries as a result of inappropriate handling and lifting are to the back, but all other parts of the body are vulnerable, particularly shoulders, arms and hands. Health & Safety is the responsibility of EVERYONE.
• There is no 'safe' limit for one man - each person has different capabilities.
• The boiler should be handled and lifted by TWO PEOPLE.
• Do not handle or lift unless you feel physically able.
• Wear appropriate Personal Protection Equipment e.g. protective gloves, safety footwear etc.

Preparation
• Co-ordinate movements - know where, and when, you are both going.
• Minimise the number of times needed to move the boiler - plan ahead.
• Always ensure when handling or lifting the route is clear and unobstructed. If possible avoid steps, wet or slippery surfaces, unlit areas etc. and take special care on ladders/into lofts.

Technique
• When handling or lifting always use safe techniques - keep your back straight, bend your knees. Don't twist - move your feet, avoid bending forwards and sideways and keep the load as close to your body as possible.
• Where possible transport the boiler using a sack truck or other suitable trolley.
• Always grip the boiler firmly, and before lifting feel where the weight is concentrated to establish the centre of gravity, repositioning yourself as necessary. See the 'Installation' section of these instructions for recommended lift points.

Remember
• The circumstances of each installation are different. Always assess the risks associated with handling and lifting according to the individual conditions.
• If at any time when installing the boiler you feel that you may have injured yourself, STOP!! DO NOT 'work through' the pain - you may cause further injury.

IF IN ANY DOUBT DO NOT HANDLE OR LIFT THE BOILER. OBTAIN ADVICE OR ASSISTANCE BEFORE PROCEEDING!
3 Technical Specifications

3.1 Technical Data

Appliance Type: C13, C33, C53
Appliance Category: CAT I2H
Electrical Supply: 230V~ 50Hz (appliance must be connected to an earthed supply)
NOx Class: 5

Heat Input CH Qn Hs (Gross), Max/Min kW: 24 model 22.2/5.2; 28 model 26.6/6.3; 33 model 31.1/7.6; 40 model 35.5/8.9
Heat Output CH Pn (Non-Condensing), Max/Min kW: 24 model 20.0/4.6; 28 model 24.0/5.5; 33 model 28.0/6.6; 40 model 32.0/7.8
Heat Output CH Pnc (Condensing), Max/Min kW: 24 model 21.2/4.9; 28 model 25.3/6.0; 33 model 29.6/7.1; 40 model 33.9/8.4
Heat Input DHW Qnw Hs (Gross), Max kW: 24 model 27.4; 28 model 32.1; 33 model 37.8; 40 model 45.8
Heat Output DHW, Max kW: 24 model 24.0; 28 model 28.0; 33 model 33.0; 40 model 40.0
Gas Nozzle Injector: 24 model Ø 5.0; 28 model Ø 5.6; 33 model Ø 6.6; 40 model Ø 6.6 (mm)

Temperatures:
C.H. Flow Temp (adjustable): 25°C to 80°C max (± 5°C)
D.H.W. Flow Temp (adjustable): 40°C to 60°C max (± 5°C), dependent upon flow rate

Power Consumption, W: 24 model 85; 28 model 90; 33 model 95; 40 model 100
Electrical Protection: IPX5D (without integral timer); IP20 (with integral timer)
External Fuse Rating: 3A
Internal Fuse Rating: F2L

Flow Rates, l/min:
DHW Flow Rate @ 30°C Rise: 10.9 (24); 12.9 (28); 15.3 (33); 18.3 (40)
DHW Flow Rate @ 35°C Rise: 9.8 (24); 11.5 (28); 13.5 (33); 16.4 (40)
Min Working DHW Flow Rate: 2 (all models)

Where Low Flow Taps or Fittings are intended to be used in the DHW system connected to a Baxi EcoBlue Advance Combi it is strongly recommended that the DHW flow rate DOES NOT fall below 2.5 l/min. This will ensure reliable operation of the DHW function.

Central Heating Primary Circuit Pressures, bar: Safety Discharge 3; Max Operating 2.5; Min Operating 0.5; Recommended Operating Range 1-2
DHW Circuit Pressures, bar: Max Operating 8; Min Operating 0.15

Expansion Vessel (for Central Heating only, integral with appliance):
Min Pre-charge Pressure: 1.0 bar
Max Capacity of CH System: 125 litre (24 & 28); 155 litre (33 & 40)
Primary Water Content of Boiler (unpressurised): 2.5 litre

NATURAL GAS ONLY!
Max Gas Rate (Natural Gas - G20) (after 10 mins), m³/h: 24 model 2.61; 28 model 3.05; 33 model 3.59; 40 model 4.35
Dynamic (nominal) Inlet Pressure (Natural Gas - G20): 20 mbar, with a CV of 37.78 MJ/m³

Condensate Drain: to accept 21.5mm (3/4 in) plastic waste pipe
Flue Terminal Dimensions: Diameter 100mm; Projection 125mm
Connections (copper tails): Gas Inlet 22mm; Heating Flow 22mm; Heating Return 22mm; Cold Water Inlet 15mm; Hot Water Outlet 15mm; Pressure Relief Discharge 15mm

Outercase Dimensions: Casing Height 763mm; Overall Height inc Flue Elbow 923mm; Casing Width 450mm; Casing Depth 345mm
Clearances: Above Casing 175mm min; Below Casing 150mm min*; Front 450mm min (for servicing), 5mm min (in operation); L.H. Side 5mm min; R.H. Side 5mm min
*This is the MINIMUM recommended dimension. Greater clearance will aid installation and maintenance.

Weights, Packaged Boiler Carton / Installation Lift Weight: 42.3kg / 36kg (24/28 model); 44.3kg / 38kg (33 model); 45.3kg / 39kg (40 model)

Pump Available Head: see graph - Metre (wg) against Flow Rate (l/h)

Product Characteristics Database (SEDBUK): SAP 2009 Annual Efficiency is 89%. This value is used in the UK Government's Standard Assessment Procedure (SAP) for energy rating of dwellings. The test data from which it has been calculated has been certified by 0085.

NOTE: All data in this section are nominal values and subject to normal production tolerances.
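As a rough cross-check of these figures, the gross heat input can be recovered from the maximum gas rate and the calorific value (kW = m³/h x MJ/m³ ÷ 3.6, since 1 kWh = 3.6 MJ). A minimal sketch only, using the table values above; the function and dictionary names are illustrative, not part of the manual:

```python
# Cross-check: gross heat input (kW) implied by a gas rate and calorific value.
# kW = (m^3/h * MJ/m^3) / 3.6, since 1 kWh = 3.6 MJ.

CV_G20 = 37.78  # MJ/m^3, natural gas G20 (from the data table)

MAX_GAS_RATE = {"24": 2.61, "28": 3.05, "33": 3.59, "40": 4.35}  # m^3/h
DHW_INPUT_GROSS = {"24": 27.4, "28": 32.1, "33": 37.8, "40": 45.8}  # kW, Qnw Hs

def heat_input_kw(gas_rate_m3h: float, cv_mj_m3: float = CV_G20) -> float:
    """Gross heat input implied by a measured gas rate."""
    return gas_rate_m3h * cv_mj_m3 / 3.6

for model, rate in MAX_GAS_RATE.items():
    implied = heat_input_kw(rate)
    # Agrees with the tabulated DHW gross input to within ~0.2 kW.
    assert abs(implied - DHW_INPUT_GROSS[model]) < 0.2
```

The same relation can be used in reverse on site: a timed gas-meter reading that implies an input well below the tabulated figure suggests a supply or burner problem.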
3.2 Technical Parameters

Technical parameters for combination heaters - Baxi EcoBlue Advance Combi ErPD, models 24 / 28 / 33 / 40:

Condensing boiler: Yes / Yes / Yes / Yes
Low-temperature boiler(1): No / No / No / No
B1 boiler: No / No / No / No
Cogeneration space heater: No / No / No / No
Combination heater: Yes / Yes / Yes / Yes
Rated heat output, kW: .7 / 8.0 / 9.4 / 10.7
Seasonal space heating energy efficiency ηs, %: 93 / 93 / 93 / 93
Useful efficiency at rated heat output and high temperature regime(2) η4, %: 88.0 / 87.9 / 88.0 / 87.9
Useful efficiency at 30% of rated heat output and low temperature regime(1) η1, %: 98.0 / 98.0 / 98.1 / 98.0

Auxiliary electricity consumption:
Full load elmax, kW: 0.030 / 0.035 / 0.040 / 0.040
Part load elmin, kW: 0.014 / 0.014 / 0.014 / 0.014
Standby mode PSB, kW: 0.003 / 0.003 / 0.003 / 0.003

Other items:
Standby heat loss Pstby, kW: 0.035 / 0.035 / 0.040 / 0.045
Ignition burner power consumption Pign, kW: - / - / - / -
Annual energy consumption QHE: 17204 kWh (62 GJ) / 20645 kWh (74 GJ) / 24086 kWh (87 GJ) / 27527 kWh (99 GJ)
Sound power level, indoors LWA, dB: 51 / 52 / 53 / 55
Emissions of nitrogen oxides NOX, mg/kWh: 22 / 20 / 24 / 24

Domestic hot water parameters:
Declared load profile: XL / XL / XXL / XXL
Daily electricity consumption Qelec, kWh: 0.151 / 0.168 / 0.215 / 0.172
Annual electricity consumption AEC, kWh: 33 / 37 / 47 / 38
Water heating energy efficiency ηwh, %: 90 / 88 / 86 / 85
Daily fuel consumption Qfuel, kWh: 21.340 / 21.980 / 27.850 / 28.570
Annual fuel consumption AFC, GJ: 16 / 17 / 22 / 23

(1) Low temperature means for condensing boilers 30°C, for low temperature boilers 37°C and for other heaters 50°C return temperature (at heater inlet).
(2) High temperature regime means 60°C return temperature at heater inlet and 80°C feed temperature at heater outlet.

See the back cover for contact details.

3.3 Dimensions and Connections

There must be no part of the air duct (white tube) visible outside the property.
Dimensions (flue run at least 1.5°; 360° orientation; flue Ø 100mm):
A 763mm; B 345mm; C 450mm; D 116mm Ø min; E 160mm (207mm for 80/125mm flue systems); F 140mm; G 106mm; H 170mm; J 280mm

Connection positions (see the dimensioned drawing - Tap Rail): Condensate Drain; Pressure Relief Valve (15mm); Heating Flow (22mm); Hot Water Outlet (15mm); Gas Inlet (22mm); Cold Water Inlet (15mm); Heating Return (22mm); spacings of 50mm / 45mm / 65mm between connections and 95mm to the Heating Return.

3.4 Electrical Diagram

M1 - Mains Voltage Connection; M2 - Low Voltage External Control Connection. The diagram shows the boiler controls board with its connectors (X1, X2, X10-X14, X20-X23, X30, X40-X42, X50, X60) and the wired components: Hall Effect Sensor, DHW NTC Sensor, Pump, Flue Sensor, Hydraulic Pressure Switch, Heating Return Sensor, Heating Flow Sensor, Igniter, Safety Thermostat, Mains Input Cable, Diverter Valve Motor, Fan, Gas Valve, Timer Connector (with Timer Bridge link), Spark Ignition Electrode and Flame Sensing Electrode.

Key to Wiring Colours: b - Blue; r - Red; bk - Black; g - Green; br - Brown; g/y - Green/Yellow; w - White; y - Yellow; gr - Grey; p - Purple

4 Description of the Product

4.1 General Description

1. The Baxi EcoBlue Advance Combi boilers are fully automatic gas fired wall mounted condensing combination boilers. They are room sealed and fan assisted, and will serve central heating and mains fed domestic hot water.

2. The boiler is set to give a maximum output of:
24 model - 24 kW DHW, 21.2 kW CH Pnc (Condensing)
28 model - 28 kW DHW, 25.3 kW CH Pnc (Condensing)
33 model - 33 kW DHW, 29.6 kW CH Pnc (Condensing)
40 model - 40 kW DHW, 33.8 kW CH Pnc (Condensing)

3. The boiler is factory set for use on Natural Gas (G20).

4.
The boiler is suitable for use only on fully pumped sealed heating systems. Priority is given to domestic hot water. (Fig. 1 - Boiler Control Flap, Information Label)

5. The boiler data badge gives details of the model, serial number and Gas Council number and is situated on the boiler lower panel. It is visible when the control box is lowered. (Fig. 2 - Data Badge; control box removed for clarity)

All systems must be thoroughly cleansed, flushed and treated with inhibitor (see section 5.2.6).

These Installation & Servicing Instructions MUST be read in conjunction with the Flue Installation Guide supplied in the Literature Pack.

4.2 Operating Principle

The boiler can be set in 3 operating modes - 'Summer' (DHW only), 'Winter' (CH & DHW) or 'Heating Only' (CH only) - by use of the button.

4.2.1 Central Heating Mode

1. With a demand for heating, the pump circulates water through the primary circuit.
2. Once the burner ignites, the fan speed controls the gas rate to maintain the heating temperature measured by the temperature sensor.
3. When the flow temperature exceeds the setting temperature the burner is extinguished, and a 3 minute delay occurs before it relights automatically (anti-cycling). The pump continues to run during this period.
4. When the demand is satisfied the burner is extinguished and the pump continues to run for a period of 3 minutes (pump overrun).

4.2.2 Domestic Hot Water Mode

1. Priority is given to the domestic hot water supply. A demand at a tap or shower will override any central heating requirement.
2. The flow of water will operate the DHW Sensor (Hall Effect Sensor), which requests the 3 way valve to change position. This allows the pump to circulate the primary water through the DHW plate heat exchanger.
3. The burner will light automatically and the temperature of the domestic hot water is controlled by the temperature sensor.
4.
When the domestic hot water demand ceases the burner will extinguish and the diverter valve will remain in the domestic hot water mode, unless there is a demand for central heating.

Boiler Schematic Layout Key:
1. Pump with Automatic Air Vent
2. Boiler Drain Tap
3. Pressure Gauge
4. Safety Pressure Relief Valve
5. DHW Flow Sensor/Filter/Restrictor
6. Domestic Hot Water Priority Sensor ('Hall Effect' Sensor)
7. Domestic Hot Water NTC Sensor
8. Hydraulic Pressure Switch
9. Three Way Valve & Motor
10. Plate Heat Exchanger
11. Gas Valve
12. Safety Thermostat (105°C)
13. Heating Flow Sensor
14. Flue Sensor
15. Boiler Adaptor
16. Primary Heat Exchanger
17. Spark Ignition Electrode
18. Burner
19. Flame Sensing Electrode
20. Air/Gas Collector
21. Heating Return Sensor
22. Fan
23. Air/Gas Venturi
24. Expansion Vessel

Connections:
A - Condensate Drain
B - Heating Flow
C - Domestic Hot Water Outlet
D - Gas Inlet
E - Cold Water Inlet (On/Off Valve and filter)
F - Heating Return

4.2.4 Pump Protection

1. This activates once a week if there has been no demand. The pump runs for 30 seconds to prevent sticking.

4.3 Main Components

1. Expansion Vessel
2. Expansion Vessel Valve - Do NOT use as vent
3. Primary Heat Exchanger
4. Plate Heat Exchanger
5. Pump with Automatic Air Vent
6. Central Heating System Pressure Gauge
7. Fan Assembly with Venturi
8. Air/Gas Collector
9. Flue Sensor
10. Flame Sensing Electrode
11. Spark Ignition Electrode
12. Combustion Box Cover & Burner
13. Control Box Display
14. Condensate Trap
15. Safety Pressure Relief Valve
16. Drain Off Point
17. Gas Valve
18. Diverter Valve Motor
19. Boiler Controls
20. Boiler Adaptor
21. Heating Flow Sensor
22. Safety Thermostat
23. Igniter
24. Air Box
25. Heating Return Sensor
26. Hydraulic Pressure Sensor
27. Domestic Hot Water Priority Sensor ('Hall Effect' Sensor)
4.4 Control Panel Description

Key to Controls:
• Standby - Reset - Esc Button
• Boiler Information View Button
• Increase CH Temperature Button
• Decrease CH Temperature Button
• Increase DHW Temperature Button
• Decrease DHW Temperature Button
• Summer / Winter / Heating Only Mode Button:
Summer - DHW only mode
Winter - DHW & CH mode
Heating Only - CH only mode

Display Description:
• DHW and CH OFF (frost protection still enabled)
• Indicates errors that prevent the burner from igniting
• Error - not resettable by user
• Water pressure too low
• Indicates an error resettable by the user
• Indicates navigation in programming mode (parameter)
• Indicates navigation in programming mode
• Generic error
• Burner lit
• DHW mode (symbol will flash with demand)
• Heating mode (symbol will flash with demand)
• Display showing all available characters
• Units for temperature
• Units for pressure
• Service due

4.5 Standard Delivery

1. The pack contains:
Boiler
Wall mounting plate (pre-plumbing jig) including isolation valves
Fittings pack
Literature pack:
• Installation & Servicing Manual (including 'Benchmark')
• User Guide Instructions
• Flue Accessories & Fitting Guide
• Registration Card
• Fernox Leaflet
• Adey Leaflet
• Wall Template
• Product Leaflet
• Package Leaflet

4.6 Accessories & Options

4.6.1 Optional Extras

1. Various timers, external controls, etc. are available as optional extras.
Plug-in Mechanical Timer Kit ----------------------------------------- 7212341
Plug-in Digital Timer Kit ------------------------------------------------ 7212342
Wireless RF Mechanical Thermostat Kit --------------------------- 7212343
Wireless RF Digital Programmable Room Thermostat Kit ---- 7212344
Single Channel Wired Programmable Room Thermostat Kit - 7212438
Wired Outdoor Weather Sensor ------------------------------------- 7213356
Two Channel Wired Programmer Kit ------------------------------- 7212443
Single Channel Wired Programmer Kit ---------------------------- 7212444
Mechanical Room Thermostat -------------------------------------- 7209716
Flue Accessories (elbows, extensions, clamps etc.) - refer to the Flue Accessories & Fitting Guide supplied in the literature pack.
Remote relief valve kit ------------------------------------------------- 512139
Boiler discharge pump ------------------------------------------------- 720648301
1M Drain Pipe 'Trace Heating' Element --------------------------- 720644401
2M Drain Pipe 'Trace Heating' Element --------------------------- 720664101
3M Drain Pipe 'Trace Heating' Element --------------------------- 720664201
5M Drain Pipe 'Trace Heating' Element --------------------------- 720664401*
*Where the drain is between 3 & 5 metres a 5 metre kit can be used and "doubled back" upon itself.

Any of the above MUST be fitted ONLY by a qualified competent person. Further detail can be found in the relevant sales literature and at

5 Before Installation

5.1 Installation Regulations

WARNING
Installation, repair and maintenance must be carried out only by a competent person. This document is intended for use by competent persons. Installation must be carried out in accordance with the prevailing regulations, the codes of practice and the recommendations in these instructions.
Please refer to 1.5.1 and 1.6.2. Installation must also respect this instruction manual and any other applicable documentation supplied with the boiler.

5.2 Installation Requirements

5.2.1 Gas Supply

1. The gas installation should be in accordance with the relevant standards. In GB this is BS 6891 (NG). In IE this is the current edition of I.S. 813 "Domestic Gas Installations".
2. The connection to the appliance is a 22mm copper tail located at the rear of the gas service cock (Fig. 3 - Gas Service Cock).
4. The gas service cock incorporates a pressure test point. The service cock must be on to check the pressure.

5.2.2 Electrical Supply

1. External wiring must be correctly earthed, polarised and in accordance with relevant regulations/rules. In GB this is the current I.E.E. Wiring Regulations. In IE reference should be made to the current edition of ETCI rules.
2. The mains supply is 230V ~ 50Hz fused at 3A. The method of connection to the electricity supply must facilitate complete electrical isolation of the appliance. Connection may be via a fused double-pole isolator with a contact separation of at least 3mm in all poles, serving the boiler and system controls only.

5.2.3 Hard Water Areas

Only water that has NOT been artificially softened must be used when filling or re-pressurising the primary system. If the mains cold water to the property is fitted with an artificial softening/treatment device, the source utilised to fill or re-pressurise the system must be upstream of such a device.

5.2.4 Bypass

1. The boiler is fitted with an automatic integral bypass.

5.2.5 System Control

1. Further external controls (e.g. room thermostat sensors) MUST be fitted to optimise the economical operation of the boiler in accordance with Part L of the Building Regulations. A range of optional controls is available. Full details are contained in the relevant Sales Literature.
5.2.6 Treatment of Water Circulating Systems

1. All recirculatory water systems will be subject to corrosion unless an appropriate water treatment is applied. This means that the efficiency of the system will deteriorate as corrosion sludge accumulates within the system, risking damage to the pump and valves, boiler noise and circulation problems.
2. When fitting new systems flux will be evident within the system, which can lead to damage of system components.
3. BS 7593 gives extensive recommendations on system cleansing and water treatment.
4. All systems must be thoroughly drained and flushed out using an appropriate proprietary flushing agent.
5. A suitable inhibitor must then be added to the system.
6.
7. It is important to check the inhibitor concentration after installation, system modification and at every service, in accordance with the inhibitor manufacturer's recommendations. (Test kits are available from inhibitor stockists.)
8. For information or advice regarding any of the above contact Baxi Customer Support 0344 871 1545.

5.2.7

5.2.8 Expansion Vessel (CH only)

1. The appliance expansion vessel is pre-charged to 1.0 bar. Therefore, the minimum cold fill pressure is 1.0 bar. The vessel is suitable for correct operation for system capacities up to 125 litres (24 & 28) / 155 litres (33 & 40). For greater system capacities an additional expansion vessel must be fitted. For GB refer to BS 7074 Pt 1. For IE, the current edition of I.S. 813 "Domestic Gas Installations".
2. Checking the charge pressure of the vessel -

5.2.9 Safety Pressure Relief Valve

See B.S. 6798 for full details.

1. The pressure relief valve (Fig. 5) is set at 3 bar, therefore all pipework, fittings, etc. should be suitable for pressures in excess of 3 bar and temperatures in excess of 100°C.
2. The pressure relief discharge pipe should be not less than 15mm diameter (Fig. 4).
3.
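The expansion vessel sizing rule in 5.2.8 reduces to a simple comparison against the integral vessel's rated system capacity. A sketch only, using the capacities quoted above; the function and dictionary names are illustrative:

```python
# Integral CH expansion vessel limits (litres of system water), from section 5.2.8.
# Above these capacities an additional expansion vessel must be fitted
# (GB: BS 7074 Pt 1; IE: I.S. 813).

MAX_SYSTEM_CAPACITY_L = {"24": 125, "28": 125, "33": 155, "40": 155}

def extra_vessel_needed(model: str, system_capacity_l: float) -> bool:
    """True if the CH system water content exceeds the integral vessel's rating."""
    return system_capacity_l > MAX_SYSTEM_CAPACITY_L[model]

# A 140-litre system is within the rating of a 33 model but exceeds that of a 28 model.
assert extra_vessel_needed("28", 140) is True
assert extra_vessel_needed("33", 140) is False
```

In practice the system water content has to be estimated from radiator and pipework volumes before applying this check.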
The discharge must not be above a window, entrance or other public access. Consideration must be given to the possibility that boiling water/steam could discharge from the pipe. The end of the pipe should terminate facing down and towards the wall. The relief valve must never be used to drain the system.
4. A remote relief valve kit is available to enable the boiler to be installed in cellars or similar locations below outside ground level (Fig. 4).
5. A boiler discharge pump is available which will dispose of both condensate & high temperature water from the relief valve. It has a maximum head of 5 metres. Section 6.2.1 gives details of how to connect the pipe to the boiler. (Fig. 5 - Discharge Pipe, Pressure Relief Valve; control box removed for clarity)

5.3 Choice of the Location

5.3.1 Location of the Appliance

2. Where the boiler is sited in an unheated enclosure, and during periods when the heating system is to be unused, it is recommended that the permanent live is left on to give BOILER frost protection. NOTE: THIS WILL NOT PROTECT THE SYSTEM!
4. If the boiler is to be fitted into a building of timber frame construction then reference must be made to the current edition of Institute of Gas Engineers Publication IGE/UP/7 (Gas Installations in Timber Framed Housing).

5.3.2 Data Plate

1. The boiler data badge gives details of the model, serial number and Gas Council number and is situated on the boiler lower panel. It is visible when the control box is lowered (Fig. 6 - Data Badge).

5.3.3 Bath & Shower Rooms (In GB Only)

(Fig. A - zone diagram: Window Recess, Zone 0, Zone 1, Zone 2, Outside Zones; 0.6 m)

Where an integral timer is NOT FITTED the boiler has a protection rating of IPX5D and if installed in a room containing a bath or shower can be within Zone 2 (but not 0 or 1).
If the boiler is fitted with an integral timer it CANNOT be installed in Zone 0, 1 or 2.

5.3.4 Ventilation

1. Where the appliance is installed in a cupboard or compartment, no air vents are required. BS 5440: Part 2 refers to room sealed appliances installed in compartments. The appliance will run sufficiently cool without ventilation.

5.3.5 Condensate Drain

(Pipework diagram: 32mm / 21.5mm pipe with insulation)

Key to Pipework:

i) Termination to an internal soil and vent pipe. Fall of at least 50mm per metre of pipe run (2.5° minimum); 450mm min*. *450mm is applicable to properties up to 3 storeys. For multi-storey building installations consult BS 6798.

ii) External termination via internal discharge branch, e.g. sink waste - downstream*. Fall of at least 50mm per metre of pipe run (2.5° minimum). Pipe must terminate above water level but below the surrounding surface; cut end at 45°. *It is NOT RECOMMENDED to connect upstream of the sink or other waste water receptacle!

9. If the boiler is fitted in an unheated location the entire condensate discharge pipe should be treated as an external run and sized and insulated accordingly.

iii) Termination to a drain or gully. Fall of at least 50mm per metre of pipe run (2.5° minimum).

10. In all cases the discharge pipe must be installed to aid disposal of the condensate. To reduce the risk of condensate being trapped, as few bends and fittings as possible should be used and any burrs on cut pipe removed.

11. When discharging condensate into a soil stack or waste pipe the effects of existing plumbing must be considered. If soil pipes or waste pipes are subjected to internal pressure fluctuations when WCs are flushed or sinks emptied then backpressure may force water out of the boiler trap and cause appliance lockout.
Pipe must terminate above water level but below the surrounding surface; cut end at 45°.

iv) Termination to a purpose made soakaway - 500mm min; holes in the soakaway must face away from the building. Fall of at least 50mm per metre of pipe run (2.5° minimum). Further specific requirements for soakaway design are referred to in BS 6798.

12. A boiler discharge pump is available which will dispose of both condensate & high temperature water from the relief valve. It has a maximum head of 5 metres. Follow the instructions supplied with the pump.

13. Condensate Drain Pipe 'Trace Heating' Elements are available in various lengths: 1, 2, 3 & 5 metres. Where the drain is between 3 & 5 metres a 5 metre kit can be used and "doubled back" upon itself.

14. It is possible to fit the element externally on the condensate drain or internally as detailed in the instructions provided.

15. The fitting of a 'Trace Heating' Element is NOT a substitute for correct installation of the condensate drain. ALL requirements in this section must still be adhered to.

v) Pumped into an internal discharge branch (e.g. sink waste) downstream of the trap - condensate pump from a basement or similar (heated). Pipe must terminate above water level but below the surrounding surface; cut end at 45°.

vi) Pumped into an external soil & vent pipe - from an unheated location (e.g. garage).

vii) To a drain or gully with extended external run & trace heating - condensate pump from a basement or similar (heated). The 'Trace Heating' element must be installed in accordance with the instructions supplied. External runs & those in unheated locations still require insulation. Pipe must terminate above water level but below the surrounding surface; cut end at 45°.

5.3.6 Clearances

1.
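The "50mm per metre, 2.5° minimum" fall rule above is simple trigonometry: 50mm of drop per metre is atan(0.05) ≈ 2.9°, which comfortably exceeds the 2.5° minimum. A hedged sketch for checking a planned run; the function names are illustrative:

```python
import math

# Condensate pipe fall: the guide recommends 50mm of drop per metre of run,
# with an absolute minimum gradient of 2.5 degrees.

MIN_ANGLE_DEG = 2.5
RECOMMENDED_FALL_MM_PER_M = 50.0

def fall_angle_deg(drop_mm: float, run_m: float) -> float:
    """Gradient of a condensate run with total drop `drop_mm` over `run_m` metres."""
    return math.degrees(math.atan(drop_mm / (run_m * 1000.0)))

def fall_ok(drop_mm: float, run_m: float) -> bool:
    return fall_angle_deg(drop_mm, run_m) >= MIN_ANGLE_DEG

# 50mm per metre gives about 2.9 degrees, so the recommendation meets the minimum.
assert fall_ok(RECOMMENDED_FALL_MM_PER_M, 1.0)
# 30mm over a 1m run (~1.7 degrees) would not.
assert not fall_ok(30.0, 1.0)
```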
A flat vertical area is required for the installation of the boiler.

Clearances (Fig. 7): 175mm min above the casing (300mm min if using an 80/125mm flueing system); 450mm min in front for servicing purposes & operating the controls, 5mm min in operation; 5mm min each side; 150mm min* below; flue run at least 1.5°. Boiler 763mm high x 345mm deep.
*This is the MINIMUM recommended dimension. Greater clearance will aid installation and maintenance.

5.3.7 Flue/Chimney Location

1. The following guidelines indicate the general requirements for siting balanced flue terminals. For GB recommendations are given in BS 5440 Pt 1. For IE recommendations are given in the current edition of I.S. 813 "Domestic Gas Installations".

Due to the nature of the boiler a plume of water vapour will be discharged from the flue. This should be taken into account when siting the flue terminal.

(Fig. 8 - likely flue positions requiring a flue terminal guard)

Terminal Position with Minimum Distance (Fig. 8):
A1 - Directly below an opening, air brick, opening windows, etc. - 300mm
B1 - Above an opening, air brick, opening window etc. - 300mm
C1 - Horizontally to an opening, air brick, opening window etc. - 300mm
D2 - Below gutters, soil pipes or drain pipes - 25mm (75mm)
E2 - Below eaves - 25mm (200mm)
F2 - Below balconies or car port roof - 25mm (200mm)
G2 - From a vertical drain pipe or soil pipe - 25mm (150mm)
H2 - From an internal or external corner - 25mm (300mm)
I - Above ground, roof or balcony level - 300mm
J - From a surface or boundary line facing a terminal - 600mm
K - From a terminal facing a terminal (horizontal flue) - 1200mm
L - From a terminal facing a terminal (vertical flue) - 600mm
M - From an opening in carport (e.g. door, window) into the dwelling - 1200mm
N - Vertically from a terminal on the same wall - 1500mm
R - Horizontally from a terminal on the same wall - 300mm
S - From adjacent wall to flue (vertical only) - 300mm
T - From an adjacent opening window (vertical only) - 1000mm
U - Adjacent to windows or openings on pitched and flat roofs - 600mm
Below windows or openings on pitched roofs - 2000mm

1 In addition, the terminal should be no nearer than 150mm to an opening in the building fabric formed for the purpose of accommodating a built-in element such as a window frame.
2 Only ONE 25mm clearance is allowed per installation. If one of the dimensions D, E, F, G or H is 25mm then the remainder MUST be as shown in brackets, in accordance with B.S. 5440-1.

Under car ports we recommend the use of the plume displacement kit. The terminal position must ensure the safe and nuisance-free dispersal of combustion products.

4. The distance from a fanned draught appliance terminal installed parallel to a boundary may not be less than 300mm, in accordance with the diagram opposite (Fig. 9 - Top View, Rear Flue, Property Boundary Line). *Reduction to the boundary is possible down to 25mm but the flue deflector must be used (see 5.3.12).

Plume Displacement Kit

If fitting a Plume Displacement Flue Kit, the air inlet must be a minimum of 150mm from any opening windows or doors (see Fig. 10). The Plume Displacement flue gas discharge terminal and air inlet must always terminate in the same pressure zone, i.e. on the same facing wall.

5.3.8 Horizontal Flue/Chimney Systems

Read this section in conjunction with the Flue Installation Guide supplied with the boiler.

1. The standard telescopic flue length is measured from point (i) to (ii) as shown. The elbow supplied with the standard horizontal telescopic flue kit is not included in any equivalent length calculations.
WARNING
SUPPORT - All flue systems MUST be securely supported at a MINIMUM of once every metre & at every change of direction. It is recommended that every straight piece is supported irrespective of length. Additional supports are available as accessories.
VOIDS - Consideration must be given to flue systems in voids and the provision of adequate access for subsequent periodic visual inspection.

(Diagram: horizontal flue runs A, B and C between points (i) and (ii); each 90° bend is equivalent to 1 metre. 60/100 dia 1M extensions and 45° & 93° elbows are also available - see the separate Flue Guide.)

Total equivalent length = A + B + C + 2 x 90° bends

NOTE: Horizontal flue pipes should always be installed with a fall of at least 1.5° from the terminal to allow condensate to run back to the boiler.

5.3.9 Flue/Chimney Lengths

1. The standard horizontal telescopic flue kit allows for lengths between 315mm and 500mm from elbow to terminal without the need for cutting (Fig. 11). Extensions of 250mm, 500mm & 1m are available.

The maximum permissible equivalent flue length is:
10 metres (60/100 system - vertical & horizontal)
20 metres (80/125 system - vertical & horizontal)
15 metres (80/80 twin pipe)
8 metres (60/100 system - vertical connected to ridge terminal)

5.3.10 Flue/Chimney Trim

1. The flexible flue trims supplied can be fitted on the outer and inner faces of the wall of installation. Ensure that no part of the white outer chimney duct is visible.

5.3.11 Terminal Guard

1. When codes of practice dictate the use of terminal guards (Fig. 12), 'Multifit' accessory part no. 720627901 can be used (NOTE: this is not compatible with the Flue Deflector referred to below).
2. There must be a clearance of at least 50mm between any part of the terminal and the guard.
3. When ordering a terminal guard, quote the appliance name and model number.
4.
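The equivalent-length rule above (each 90° bend counts as 1 metre of straight flue, compared against a system-dependent maximum from 5.3.9) lends itself to a quick check. A sketch only - the limits are those listed in 5.3.9, the function names are illustrative, and only 90° bends are counted here since equivalences for other elbows depend on the kit:

```python
# Equivalent flue length check: straight sections plus 1m per 90-degree bend,
# compared against the maximum permissible length for the flue system (5.3.9).

MAX_EQUIV_LENGTH_M = {
    "60/100": 10.0,          # vertical & horizontal
    "80/125": 20.0,          # vertical & horizontal
    "80/80 twin": 15.0,
    "60/100 ridge": 8.0,     # vertical connected to ridge terminal
}

BEND_90_EQUIV_M = 1.0  # each 90-degree bend counts as 1 metre

def equivalent_length_m(straight_sections_m, bends_90: int) -> float:
    """Total equivalent length = sum of straight runs + 1m per 90-degree bend."""
    return sum(straight_sections_m) + bends_90 * BEND_90_EQUIV_M

def flue_run_ok(system: str, straight_sections_m, bends_90: int) -> bool:
    return equivalent_length_m(straight_sections_m, bends_90) <= MAX_EQUIV_LENGTH_M[system]

# Example runs A, B, C with two 90-degree bends, as in the diagram:
assert equivalent_length_m([3.0, 2.5, 1.5], 2) == 9.0
assert flue_run_ok("60/100", [3.0, 2.5, 1.5], 2)            # 9.0m <= 10m
assert not flue_run_ok("60/100 ridge", [3.0, 2.5, 1.5], 2)  # 9.0m > 8m
```

Note that the elbow supplied with the standard telescopic kit is excluded from the calculation, as stated in 5.3.8.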
The flue terminal guard should be positioned centrally over the terminal and fixed as illustrated (Fig. 12).

5.3.12 Flue/Chimney Deflector

1. Push the flue deflector over the terminal end. It may point upwards as shown, or up to 45° either way from vertical. Secure the deflector to the terminal with the screws provided (Fig. 13).

5.3.13 Flue/Chimney Accessories

For full details of Flue Accessories (elbows, extensions, clamps etc.) refer to the Flue Accessories & Fitting Guide supplied in the literature pack. It covers Ø 60/100, Ø 80/125 and Ø 80/80 twin flue systems and the Plume Displacement Kit (Ø 60/100 flue systems), and must be read in conjunction with these Installation & Servicing Instructions.

5.4 Transport

1. This product should be lifted and handled by two people. When lifting always keep your back straight and wear protective equipment where necessary. Carrying and lifting equipment should be used as required, e.g. when installing in a loft.

5.5 Unpacking & Initial Preparation

5.5.1 Unpacking

(Fig. 14 - Pre-plumbing: to remove only the wall jig, slide the banding to the edge and open the flaps. Slide the wall jig out of the carton then close the flaps. Slide the banding back on.)

1. See 'Section 2.3.1 Handling' before unpacking or lifting the boiler.
2. Follow the procedure on the carton to unpack the boiler, or see Fig. 14a.
3. If pre-plumbing (Fig. 14) - the wall jig and fitting kit can be removed without removing the carton sleeve. Simply slide the banding to the edge, open the perforated flap, and lift out the jig, fitting kit and instructions. If the boiler is to be installed at a later date, close the flap and reposition the banding straps; the boiler can now be stored safely away.
Remove the sealing caps from under the boiler before lifting it into position. A small amount of water may drain from the boiler in the upright position.

5.5.2 Initial Preparation

1. After considering the location, position the fixing template on the wall, ensuring it is level both horizontally and vertically (Fig. 14a).

2. Mark the position of the fixing slots for the wall mounting plate indicated on the template.

3. Mark the position of the centre of the flue hole (rear exit). For side flue exit, mark as shown (Fig. 15).

4. If required, mark the position of the gas and water pipes. Remove the template (Fig. 17).

5. Cut the hole for the flue (minimum diameter 116mm).

6. Drill the wall as previously marked to accept the wall plugs supplied. Secure the wall mounting plate using the fixing screws.

7. Using a spirit level ensure that the plate is level before finally tightening the screws (Fig. 16).

8. Connect the gas and water pipes to the valves on the wall mounting plate using the copper tails supplied. Ensure that the sealing washers are fitted between the connections.

NOTE: 40kW models ONLY - ensure the flow restrictor is inserted in the cold water inlet connection (Fig. 16). On other models the restrictor is factory fitted internally.

(Figs. 15-17 show the wall template, Part No. 7212144: the side and vertical flue centre lines, the 116mm dia minimum aperture for the flue tube, the Ø 8mm wall mounting plate fixing slots, 5mm minimum side clearance each side of the outercase, 150mm minimum (200mm recommended) clearance below, and the 3/4" BSP connection positions - heating flow (22mm), hot water outlet (15mm), gas inlet (22mm), cold water inlet (15mm), heating return (22mm), pressure relief valve (15mm) and condensate drain.)

Fit the filling loop as described in the instructions supplied with it.
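The minimum apertures and clearances marked on the wall template can be gathered into a quick site-survey check before drilling. A minimal sketch, assuming the dimensions transcribed from Figs. 15-17 (the function name is our own; verify against the template itself):

```python
# Illustrative pre-drilling check against the wall template minimums
# (Figs. 15-17): 116mm dia minimum flue aperture, 5mm minimum clearance
# each side of the case, 150mm minimum (200mm recommended) below.

MIN_FLUE_APERTURE_MM = 116
MIN_SIDE_CLEARANCE_MM = 5
MIN_BELOW_CLEARANCE_MM = 150
RECOMMENDED_BELOW_CLEARANCE_MM = 200

def site_ok(flue_aperture_mm, left_mm, right_mm, below_mm):
    """True if every measured dimension meets the template minimum."""
    return (flue_aperture_mm >= MIN_FLUE_APERTURE_MM
            and left_mm >= MIN_SIDE_CLEARANCE_MM
            and right_mm >= MIN_SIDE_CLEARANCE_MM
            and below_mm >= MIN_BELOW_CLEARANCE_MM)

print(site_ok(120, 10, 10, 200))   # meets every minimum
print(site_ok(110, 10, 10, 200))   # flue aperture too small
```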
5.5.3 Flushing

1. Flush thoroughly and treat the system according to guidance given in B.S. 7593.

5.6 System Filling and Pressurising

5.6.1 Filling and Pressurising

1. A filling point connection on the central heating return pipework must be provided to facilitate initial filling and pressurising and also any subsequent water loss replacement/refilling.

2. A filling loop is supplied with the boiler. Follow the instructions provided with it. (Fig. 18, connecting diagram: temporary loop with stop valve and double check valve on the CH return.)

5.6.2 Domestic Hot Water Circuit

1. All DHW circuits, connections, fittings, etc. should be fully in accordance with relevant standards and water supply regulations. (Fig. 19: DHW mains inlet with stop tap, then expansion vessel*, check valve* and pressure reducing valve* feeding the boiler, hot taps and other tap outlets. *See below for instances when these items may be required, e.g. where the mains water supply incorporates a non-return valve.)

If a check valve, loose jumpered stop cock, water meter or water treatment device is fitted (or may be in the future) to the wholesome water supply connected to the boiler domestic hot water (DHW) inlet supply, then a suitable expansion device may be required.

Where low flow taps or fittings are intended to be used in the DHW system connected to a Baxi EcoBlue Combi, it is strongly recommended that the DHW flow rate DOES NOT fall below 2.5 l/min. This will ensure reliable operation of the DHW function.

6 Installation

6.1 General

1. Remove the sealing caps from the boiler connections including the condensate trap. A small amount of water may drain from the boiler once the caps are removed.
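The two DHW provisos in Section 5.6.2 - an expansion device may be needed when certain fittings are on the incoming supply, and the flow rate should not fall below 2.5 l/min where low flow taps are used - reduce to simple rules. A hypothetical sketch (the function names are ours, not part of any Baxi tooling):

```python
# Sketch of the DHW guidance in Section 5.6.2. An expansion device may be
# required if any of these fittings is (or may later be) on the wholesome
# water supply to the DHW inlet; and the DHW flow rate should not fall
# below 2.5 l/min where low flow taps or fittings are used.

FITTINGS_NEEDING_EXPANSION = {
    "check valve",
    "loose jumpered stop cock",
    "water meter",
    "water treatment device",
}

MIN_DHW_FLOW_LPM = 2.5

def expansion_device_may_be_required(fitted_or_planned):
    """True if any listed fitting is present or planned on the supply."""
    return any(f in FITTINGS_NEEDING_EXPANSION for f in fitted_or_planned)

def dhw_flow_ok(flow_lpm):
    """True if the DHW flow rate meets the recommended minimum."""
    return flow_lpm >= MIN_DHW_FLOW_LPM

print(expansion_device_may_be_required(["water meter"]))
print(dhw_flow_ok(2.0))
```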
2. Lift the boiler as indicated by the shaded areas ("LIFT HERE BOTH SIDES"). The boiler should be lifted by TWO PEOPLE (see 'Handling', Section 2.3.1). Engage the mounting bracket at the top rear of the boiler into the retaining lugs on the wall jig, using the aligning lugs for position (Fig. 21). Once the mounting bracket is in position on the retaining lugs, the bottom polystyrene may be discarded, allowing the boiler to swing into position.

3. Insert the sealing washers (x 5) between the valves and pipes on the wall jig and the boiler connections.

4. Tighten all the connections.

6.2 Assembly

6.2.1 Fitting the Pressure Relief Discharge Pipe

1. Remove the discharge pipe from the kit.

2. Determine the routing of the discharge pipe in the vicinity of the boiler. Make up as much of the pipework as is practical, including the discharge pipe supplied. Make all soldered joints before connecting to the pressure relief valve. Do not adjust the position of the valve. The discharge pipe must be installed before pressurising the system.

3. The pipework must be at least 15mm diameter and run continuously downwards to a discharge point outside the building. See Section 5.2.9 for further details.

4. Utilising one of the sealing washers, connect the discharge pipe to the adaptor and tighten the nut hand tight, plus 1/4 turn to seal.

5. Complete the discharge pipework and route it to the outside discharge point (Fig. 22 - front panel and control box removed for clarity).

6.2.2 Connecting the Condensate Drain

1. Remove the blanking cap, and using the elbow supplied, connect the condensate drain pipework to the boiler condensate trap outlet pipe.
Ensure the discharge of condensate complies with any national or local regulations in force (see British Gas "Guidance Notes for the Installation of Domestic Gas Condensing Boilers" & HHIC recommendations).

2. The elbow will accept 21.5mm (3/4in) plastic overflow pipe, which should generally discharge internally into the household drainage system. If this is not possible, discharge into an outside drain is acceptable. See Section 5.3.5 for further details.

3. The boiler condensate trap should be primed by pouring approximately 300ml of water into the flue spigot (Fig. 23 - front panel and control box removed for clarity). Do not allow any water to fall into the air inlet.

6.3 Preparation

6.3.1 Panel Removal

1. Remove the securing screws from the bottom of the case front panel.

2. Lift the panel slightly to disengage it from the locating lugs on top of the case and remove it.

6.4 Air Supply / Flue Gas Connections

6.4.1 Connecting the Flue/Chimney

HORIZONTAL TELESCOPIC FLUE (concentric 60/100)

1. There are two telescopic sections, the terminal assembly and the connection assembly, a roll of sealing tape and two self tapping screws. A 93° elbow is also supplied.

2. The two sections can be adjusted to provide a length between 315mm and 500mm (Fig. 24) when measured from the flue elbow (there is 40mm engagement into the elbow).

3. Locate the flue elbow on the adaptor at the top of the boiler. Set the elbow to the required orientation (Fig. 25).

6. In instances where the wall thickness dimension 'X' (Fig. 25) is between 250mm and 315mm, it will be necessary to shorten the terminal assembly by careful cutting to accommodate walls of these thicknesses.

7. To dimension 'X' add 40mm.
This dimension is to be known as 'Y'. (Figs. 27 & 28: dimension 'Y', the sealing tape - 'peak' to be uppermost, 'TOP' label as marked - and the securing screws supplied with the telescopic flue.)

10. Remove the flue elbow and insert the flue through the hole in the wall. Fit the flue trims if required, and refit the elbow to the boiler adaptor, ensuring that it is pushed fully in. Secure the elbow with the screws supplied in the boiler fitting kit (Fig. 29).

11. Draw the flue back through the wall and engage it in the elbow, ensuring the flue is fully engaged into the elbow. It may be necessary to lubricate to ease assembly of the elbow and flue (Fig. 30). Apply the lubricant supplied for ease of assembly (do not use any other type).

12. Ensure that the terminal is positioned with the slots to the bottom (Fig. 31). Secure to the elbow with the screws supplied with the telescopic flue (Fig. 30). It is essential that the flue terminal is fitted as shown, to ensure correct boiler operation and prevent water entering the flue.

13. Make good between the wall and air duct outside the building, appropriate to the wall construction and fire rating. There must be no part of the air duct (white tube) visible outside the property.

14. If necessary fit a terminal guard (see Section 5.3.11).

6.5 Electrical Connections

6.5.1 Electrical Connections of the Appliance

The boiler must be connected to the mains fused 3A 230V 50Hz supply & control system using 3 core 0.75mm² 3183Y multi-strand flexible cable (see IMPORTANT note opposite). These points must be considered when initially wiring the boiler to the installation, and if replacing any wiring during the service life of the boiler.

1. See Section 5.2.2.
for details of the electrical supply. Undo the securing screws and lift the case front panel off.

2. Hinge the control box downwards. Disengage the securing tabs and open the terminal block cover (Fig. 34).

3. If the mains cable fitted is not long enough, slacken the gland nut in the right of the boiler lower panel and pass the new mains cable through it. Remove the grommet adjacent to the gland nut, pierce the diaphragm and insert the cable from the external control system.

4. Leave sufficient slack in the cables to allow the control box to be hinged fully open. Tighten the gland nut and refit the grommet.

5. Connect the Earth, Permanent Live and Neutral wires to the terminal strip. Both the Permanent Live and Neutral connections are fused.

6. Refer to the instructions supplied with the external control(s). Any thermostat must be suitable for 230V switching.

7. Remove the link between connections 1 & 2 on terminal M1. The 230V supply at connection 2 must be connected to the thermostat. The switched output from the thermostat must be connected to connection 1 (Figs. 35 & 36). If the room thermostat being used incorporates an anticipator it MUST be wired as shown in Figs. 35 & 36.

8. Replace the terminal block cover.

9. Engage the front panel onto the locating lugs on top of the case & secure with the securing screws at the bottom of the case.

IMPORTANT: The 230V switched signal for external controls (Frost Stat - Room Stat - Timer) must always be taken from terminal 2 at the boiler. Live, Neutral and Earth to power these controls must be taken from the fused spur. For the Frost Stat to operate, the boiler MUST BE IN CENTRAL HEATING MODE, i.e. the CH symbol shown.

(Fig. 35: fused spur to room thermostat and terminal M1. Fig. 36: fused spur, external clock, frost thermostat and pipe thermostat to terminal M1. Wire colour codes in the diagrams: br brown, b blue, bk black, g/y green/yellow.)

6.5.2 Connecting External Devices

1. See Section 6.7.2.
for details of fitting the optional outdoor sensor accessory.

6.6 Filling the Installation

1. See Sections 5.2.6 and 5.6.1 for details of flushing and filling the installation.

6.7 External Controls

6.7.1 Installation of External Sensors

1. Various sensors are available.

6.7.2 Optional Outdoor Sensor

Full instructions are provided with the Outdoor Sensor Kit.

Positioning the Sensor

1. The sensor must be fixed to an external wall surface of the property it is serving. The wall must face north or west. DO NOT position it on a south facing wall in direct sunlight.

2. The sensor should be positioned at approximately half the height of the living space of the property, and a minimum of 2.5m above ground level.

3. It must be positioned away from any sources of heat or cooling (e.g. flue terminal) to ensure accurate operation. Siting the sensor above doors and windows, adjacent to vents and close to eaves should be avoided.

Connecting the Sensor

1. Ensure the electrical supply to the boiler is isolated. Undo the securing screws and lift the case front panel off.

2. Hinge the control box downwards. Disengage the securing tabs and open the terminal block cover.

3. Remove one of the grommets in the boiler lower panel, pierce the diaphragm and insert the wires from the outdoor sensor.

4. Leave sufficient slack in the wires to allow the control box to be hinged fully open. Refit the grommet.

5. Connect the wires from the outdoor sensor to positions 4 & 5 on the low voltage terminal block M2 as shown. Refit the cover.

Setting the Sensor Curve

1. With the outdoor sensor fitted, the boiler central heating flow temperature is adjusted automatically to accommodate the change in heat required, optimising the efficient performance of the boiler whilst maintaining a comfortable room temperature.
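The automatic flow-temperature adjustment described above follows a heating curve whose three parameters are set on the boiler (see the Outdoor Sensor Kit instructions). This manual does not give the curve formula, so the following is purely a generic illustration of the weather-compensation principle - the slope value and limits are invented for the example, not Baxi's:

```python
# Generic weather-compensation illustration, NOT the boiler's actual
# curve: colder outside -> higher CH flow temperature, clamped to a
# plausible operating range. All parameter values here are assumptions.

def ch_flow_setpoint(outside_c, room_set_c=20.0, slope=1.5,
                     min_flow_c=25.0, max_flow_c=80.0):
    """Illustrative heating curve: flow rises as outside temperature falls."""
    flow = room_set_c + slope * (room_set_c - outside_c)
    return max(min_flow_c, min(max_flow_c, flow))

print(ch_flow_setpoint(-5))   # colder day -> higher flow temperature
print(ch_flow_setpoint(15))   # milder day -> lower flow temperature
```

The clamping mirrors the idea that the boiler still respects its own minimum and maximum flow temperatures whatever the outdoor reading.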
The central heating buttons on the boiler adjust a "simulated room temperature" used for this optimisation.

2. This functionality requires the setting of three parameters on the boiler to suit the heating system, and the optimisation can be adjusted by the user with the central heating control buttons on the boiler control panel. Full instructions are provided with the Outdoor Sensor Kit.

Continue with the installation and commissioning of the boiler as described in this manual.

7 Commissioning

7.1 General

1. Reference should be made to BS EN 12828, 12831 & 14336 when commissioning the boiler. Ensure that the condensate drain trap has been primed - see Section 6.2.2, paragraph 3.

2. At the time of commissioning, complete all relevant sections of the Benchmark Checklist at the rear of this publication.

3. Open the mains water supply to the boiler and all hot water taps to purge the DHW system.

4. Ensure that the filling loop is connected and open, then open the heating flow and return valves on the boiler. Ensure that the cap on the automatic air vent on the pump body is opened (Fig. 37 - control box removed for clarity).

5. The system must be flushed in accordance with BS 7593 (see Section 5.2.6).

7.2 Checklist before Commissioning

7.2.1 Preliminary Electrical Checks

1. Prior to commissioning the boiler, preliminary electrical system checks should be carried out.

2. These should be performed using a suitable meter, and include checks for Earth Continuity, Resistance to Earth, Short Circuit and Polarity.

7.2.2 Checks

NOTE: A digital pressure reading can be accessed via the information button by scrolling to setting '5'. There may be a slight difference between the digital reading and the heating pressure gauge (Fig. 38) depending on boiler operating mode.

1. Checked:
That the boiler has been installed in accordance with these instructions.
The integrity of the flue system and the flue seals.
The integrity of the boiler combustion circuit and the relevant seals.

7.3 Commissioning Procedure

7.3.1 De-Aeration Function

This procedure MUST be carried out.

The display backlight remains lit for approx. 10 minutes. If the backlight goes out during commissioning it does not mean that the process has been completed.

1. Ensure the gas is turned OFF. Turn the power to the boiler ON. The software version will be displayed, followed by the display symbols flashing briefly, before the 'Standby' symbol is displayed.

2. Press and hold the two de-aeration buttons (Fig. 39) together for at least 6 seconds until the de-aeration symbol is briefly displayed.

3. The De-Aeration Function is now activated. The boiler pump will run for approx. 10 minutes. During this time the pump will alternate on and off and the diverter valve will switch between heating & hot water to purge air from the system.

4. At the end of the process the boiler will return to the 'Standby' position.

FUNCTION INTERRUPTION
• If the de-aeration is interrupted due to a fault, the pump will cease to circulate but the function timer (approx. 10 minutes) will continue to run. For this reason it is recommended to monitor the boiler display during de-aeration. If the fault cannot be rectified quickly, the function must be restarted to ensure complete de-aeration.
• In the event of a loss of power, the de-aeration function must be restarted once the power is re-established.
• In the event of low water pressure the fault code E118 will be displayed, along with flashing warning symbols. This error can be rectified by repressurising the system to at least 1.0 bar. The pump will restart automatically once the water pressure is successfully re-established and the function symbol will reappear in the display.
• The de-aeration function can be repeated as necessary until all air is expelled from the system.
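The de-aeration interruption notes in Section 7.3.1 amount to a small decision table. A sketch for illustration only (the event names and helper function are ours, not boiler firmware):

```python
# Sketch of the de-aeration notes in Section 7.3.1: a fault stops the
# pump but not the ~10 minute timer, so the function must be restarted;
# a power loss also requires a restart; the low-pressure fault E118
# clears automatically once the system is repressurised to >= 1.0 bar.

MIN_PRESSURE_BAR = 1.0

def deaeration_action(event, pressure_bar=None):
    """Map a de-aeration event to the installer action described in 7.3.1."""
    if event == "fault":
        return "rectify fault, then restart de-aeration"
    if event == "power loss":
        return "restart de-aeration once power is re-established"
    if event == "E118":
        if pressure_bar is not None and pressure_bar >= MIN_PRESSURE_BAR:
            return "pump restarts automatically"
        return "repressurise system to at least 1.0 bar"
    return "no action - function completes after approx. 10 minutes"

print(deaeration_action("E118", pressure_bar=0.6))
```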
7.4 Gas Settings

7.4.1 Checking Combustion - 'Chimney Sweep' Mode

The case front panel must be fitted when checking combustion. Ensure the system is cold and the gas supply turned on and purged.

The person carrying out a combustion measurement should have been assessed as competent in the use of a flue gas analyser and the interpretation of the results. See Section 10.1.3.

IMPORTANT: Allow the combustion to stabilise before inserting the combustion analyser probe into the test point. This will prevent saturation of the analyser.

1. Press and hold the two 'Chimney Sweep' buttons together for at least 6 seconds. The function symbol is displayed briefly, followed by the current setting point flashing alternately. IMPORTANT: There may be a delay before the boiler fires.

2. To adjust the boiler input setting, press the + or - button and the current setting point will flash. Press the + or - button again to alter the boiler input setting. The setting points are: MAX. HEATING input, MAX. DHW input and MIN. input.

4. The combustion (CO level & CO/CO2 ratio) must be measured and recorded at MAXIMUM DHW input & MINIMUM input. After refitting the sampling point plug, ensure there is no leakage of products.

5. Follow the flow chart on the next page to comply with the requirement to check combustion on commissioning.

6. Press the R button once to bring the boiler out of 'Standby' mode.

7. Press and hold the two 'Chimney Sweep' buttons again for at least 6 seconds to exit.

8. Use the mode button to toggle through the CH and DHW settings to activate the required modes.

9. Follow the flow chart opposite.

Set the boiler to maximum rate (see 7.4.1). Allow the combustion to stabilise; do not insert the probe yet, to avoid 'flooding' the analyser. The appliance MUST NOT be commissioned until all problems are identified and resolved - call 0344 871 1545 for advice.
Perform the flue integrity combustion check: insert the analyser probe into the air inlet test point, allowing the reading to stabilise. Is O2 >= 20.6% and CO2 < 0.2%?
• No - call 0344 871 1545 for advice.
• Yes - check the CO & combustion ratio at maximum rate: whilst the boiler is still operating at maximum, insert the analyser probe into the flue gas test point, allowing the reading to stabilise (see 7.4.1). If the readings are not satisfactory, call 0344 871 1545 for advice.

7.5 Configuring the System

7.5.1 Check the Operational (Working) Gas Inlet Pressure & Gas Rate

1. Press and hold the two buttons together for at least 6 seconds. The function symbol is displayed briefly, followed by the setting point flashing alternately. '3' represents MAXIMUM HEATING input.

2. Press + or - to adjust the input to MAXIMUM DHW.

3. With the boiler operating in the maximum rate condition, check that the operational (working) gas pressure at the inlet gas pressure test point is in accordance with B.S. 6798 & B.S. 6891. This must be AT LEAST 17mb.

4. Ensure that this inlet pressure can be obtained with all other gas appliances in the property working. The pressure should be measured at the test point on the gas cock (Fig. 41).

Measure the Gas Rate

5. With any other appliances & pilot lights turned OFF, the gas rate can be measured. It should be (natural gas):
24 model - 2.61 m3/h
28 model - 3.05 m3/h
33 model - 3.59 m3/h
40 model - 4.35 m3/h

6. Press and hold the two buttons together for at least 6 seconds to exit the function.

Working gas pressures (Fig. 41a): 17-21 mbar at the gas valve, 18-22 mbar at the gas cock, 19-23 mbar at the gas meter. If the pressure drops are greater than shown in Fig. 41a, a problem with the pipework or connections is indicated. The permissible pressure drop across the system pipework is < 1 mbar.

7.6 Final Instructions

7.6.1 Handover

1.
Carefully read and complete all sections of the Benchmark Commissioning Checklist at the rear of this publication that are relevant to the boiler and installation. These details will be required in the event of any warranty work. The warranty will be invalidated if the Benchmark section is incomplete.

2. The publication must be handed to the user for safe keeping, and each subsequent regular service visit recorded.

3. Hand over the User's Operating, Installation and Servicing Instructions, giving advice on the necessity of regular servicing.

4. For IE, it is necessary to complete a "Declaration of Conformity" to indicate compliance with I.S. 813. An example of this is given in I.S. 813 "Domestic Gas Installations". This is in addition to the Benchmark Commissioning Checklist.

5. Set the central heating and hot water temperatures to the requirements of the user. Instruct the user in the operation of the boiler and system.

6. Instruct the user in the operation of the boiler controls.

7. Demonstrate to the user the action required if a gas leak occurs or is suspected. Show them how to turn off the gas supply at the meter control, and advise them not to operate electric light or power switches, and to ventilate the property.

8. Show the user the location of the system control isolation switch, and demonstrate its operation.

9. Advise the user that they may observe a plume of vapour from the flue terminal, and that it is part of the normal operation of the boiler.

Information Display

The information button can be pressed so that the display shows the following:
'00' alternates with sub-code (only when fault on boiler) or '000'
'01' alternates with CH flow temperature
'02' alternates with outside temperature (where sensor fitted)
'03' alternates with DHW temperature
'04' alternates with DHW temperature
'05' alternates with system water pressure
'06' alternates with CH return temperature
'07' alternates with flue temperature
'08' - not used
'20' alternates with manufacturer information

Depending upon boiler model and any system controls connected to the appliance, not all information codes will be displayed, and some that are will not have a value. Press the R button to return to the normal display.

7.6.2 System Draining

1. If at any time after installation it is necessary to drain & refill the central heating system (e.g. when replacing a radiator), the De-Aeration Function should be activated. Re-pressurise the system to 1.5 bar.

2. On refilling the system, ensure that there is no heating or hot water demand, but that there is power to the boiler. It is also recommended that the gas supply is turned off to prevent inadvertent ignition of the burner. Recommission the appliance and check that the inhibitor concentration is sufficient. See Section 7.3.1 for more detail.

8 Operation

8.1 General

1. It is the responsibility of the installer to instruct the user in the day to day operation of the boiler and controls, and to hand over the completed Benchmark Checklist at the back of this manual.

2. Set the central heating and hot water temperatures to the requirements of the user. Instruct the user in the operation of the boiler and system.

3. The temperature on the boiler must be set to a higher temperature than the cylinder thermostat to achieve the required hot water demand.

4. Instruct the user in the operation of the boiler and system controls.

5. Demonstrate to the user the action required if a gas leak occurs or is suspected. Show them how to turn off the gas supply at the meter control, and advise them not to operate electric light or power switches, and to ventilate the property.

6. Show the user the location of the system control isolation switch, and demonstrate its operation.

7. Advise the user that they may observe a plume of vapour from the flue terminal, and that it is part of the normal operation of the boiler.

8.
The method of repressurising the primary system should be demonstrated.

9. If at any time after installation it is necessary to drain & refill the central heating system (e.g. when replacing a radiator), the De-Aeration Function should be activated (see 7.6.2).

8.2 To Start-up

1. Switch on the boiler at the fused spur unit and ensure that the time control is in the on position and any other controls (e.g. room thermostat) are calling for heat. Press the R button once to bring the boiler out of Standby mode. The boiler will begin its start sequence.

8.3 To Shutdown

1. Isolate the mains power supply at the fused spur unit. Isolate the gas supply at the boiler valve.

8.4 Use of the Control Panel

Key to controls:
R - Standby / Reset / Esc button
Boiler information view button
+ / - Increase / decrease CH temperature buttons (central heating temperature adjustment)
+ / - Increase / decrease DHW temperature buttons (domestic hot water temperature adjustment)
Summer / Winter / Heating Only mode button
Display screen

Summer - Winter - Heating Only Mode

1. Press the mode button until the required mode appears:
Summer - DHW only mode
Winter - DHW & CH mode
Heating Only - CH only mode

To increase or decrease the boiler temperature:
1. Press + to increase the central heating temperature.
2. Press - to decrease the central heating temperature.

An overheat thermostat (NTC) is positioned in the heat exchanger which shuts down the appliance if the boiler temperature exceeds 100°C. Press the R button to re-establish normal operating conditions.

To adjust the domestic hot water temperature:
1. Press + to increase the domestic hot water temperature.
2. Press - to decrease the domestic hot water temperature.

8.5 Frost Protection

1.
The boiler incorporates an integral frost protection feature that will operate in both Central Heating and Domestic Hot Water modes, and also when in standby - see Section 4.2.3 'Boiler Frost Protection Mode'.

9 Settings

9.1 Parameters

The operating parameters of the boiler have been factory set to suit most systems.

10 Maintenance

10.1 General

During routine servicing, and after any maintenance or change of part of the combustion circuit, the following must be checked:
• The integrity of the complete flue system and the flue seals, by checking the air inlet sample to eliminate the possibility of recirculation: O2 >= 20.6% & CO2 < 0.2%.
• The integrity of the boiler combustion circuit and relevant seals.
• The operational gas inlet pressure and the gas rate, as described in Section 7.5.1.
• The combustion performance, as described in 'Check the Combustion Performance' below.

3. Competence to carry out Checking Combustion Performance
B.S. 6798 'Specification for Installation & Maintenance of Gas Fired Boilers not exceeding 70kW' applies. Competence can be demonstrated by satisfactory completion of the CPA1 ACS assessment, which covers the use of electronic portable combustion gas analysers in accordance with BS 7967, Parts 1 to 4. (Flue and air sampling points: after refitting the sampling point plug, ensure there is no leakage of products.)

4. Check the Combustion Performance (CO/CO2 ratio)
Set the boiler to operate at maximum rate as described in Section 7.4.

5. Remove the plug from the flue sampling point, insert the analyser probe and obtain the CO/CO2 ratio. This must be less than 0.004.
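The pass/fail figures in this section and in Section 7.5.1 can be collected into one servicing check. A sketch with the thresholds transcribed from the text; the unit handling for the CO/CO2 ratio (CO in ppm, CO2 in %) is our assumption, since most analysers report the ratio directly:

```python
# Sketch of the servicing checks in Sections 10.1 and 7.5.1. Thresholds
# transcribed from the text: air inlet sample O2 >= 20.6% and CO2 < 0.2%
# (no recirculation), flue CO/CO2 ratio < 0.004, working gas inlet
# pressure at least 17 mbar. Nominal natural gas rates per model are
# those listed in Section 7.5.1.

GAS_RATE_M3H = {"24": 2.61, "28": 3.05, "33": 3.59, "40": 4.35}

def air_inlet_ok(o2_pct, co2_pct):
    """Flue integrity: the inlet air sample must be essentially fresh air."""
    return o2_pct >= 20.6 and co2_pct < 0.2

def combustion_ok(co_ppm, co2_pct):
    """CO/CO2 ratio must be less than 0.004 (CO in ppm, CO2 in %)."""
    return (co_ppm / 10000.0) / co2_pct < 0.004

def inlet_pressure_ok(mbar):
    """Working gas inlet pressure must be at least 17 mbar."""
    return mbar >= 17.0

print(air_inlet_ok(20.8, 0.1))
print(combustion_ok(80, 9.0))    # ratio = 0.00089, well under 0.004
print(GAS_RATE_M3H["33"])
```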
If the combustion reading (CO/CO2 ratio) is greater than this, and the integrity of the complete flue system and combustion circuit seals has been verified, and the inlet gas pressure and gas rate are satisfactory, either:
• Perform the 'Standard Inspection and Maintenance' (Section 10.2) & re-check.
• Perform 'Setting the Gas Valve' (Section 10.3.25) & re-check.
• Replace and set the gas valve (Sections 10.3.24 & 25) & re-check.

10.2 Standard Inspection and Maintenance

When performing any inspection or maintenance, personal protective equipment must be used where appropriate.

1. Ensure that the boiler is cool and that both the gas and electrical supplies to the boiler are isolated.

2. Remove the screws securing the case front panel. Lift the panel slightly to disengage it from the tabs on top of the case (Fig. 43). Hinge down the control box.

3. To aid access, disconnect the igniter plug, disconnect the two pipes from the top of the condensate trap, and disconnect the drain pipe from the trap outlet. Undo the screw & washer securing the trap to the boiler lower panel.

4. Disengage the lip on the trap from the slotted bracket and remove the trap. Take care not to spill any residual condensate on the controls and P.C.B. Thoroughly rinse the trap and examine the gasket on the trap base, replacing if necessary.

5. Remove the clip securing the gas feed pipe to the air/gas venturi. Disconnect the pipe. Do not break the joint between the pipe and gas valve unless necessary.

6. Note their position and disconnect the electrode leads and the fan electrical plugs (Fig. 46).

7. Undo the four 10mm nuts retaining the combustion box cover to the heat exchanger.

8. Carefully draw the fan, collector and cover assembly forward (Fig. 46).

9.
Clean any debris from the heat exchanger and check that the gaps between the coils are clear.

10. Inspect the burner, electrode positions and insulation, cleaning or replacing if necessary (Fig. 45 shows the spark ignition and flame sensing electrode positions, with gaps of 7.5 ±1, 4 ±0.5 and 10 ±1 mm as marked). Clean any dirt or dust from the boiler.

11. Carefully examine all seals, insulation & gaskets, replacing as necessary. Look for any evidence of leaks or corrosion, and if found determine & rectify the cause.

12. Prime the trap and reconnect the pipes to the top. Reassemble in reverse order.

13. Check the expansion vessel charge - 1.0 bar - at the expansion vessel valve. A right angled valve extension will aid checking and repressurising.

DHW Filter (Fig. 48)

14. If the flow of domestic hot water is diminished, it may be necessary to clean the filter.

15. Turn the cold mains isolation cock (Fig. 47) off and draw off from a hot tap.

16. Disconnect the pump cable, remove the retaining clip, extract the filter cartridge from the hydraulic inlet assembly and rinse it thoroughly in clean water. Reassemble and check the flow. (Fig. 48, viewed from underneath the appliance, shows the filter and the restrictor; the restrictor is not fitted on 40 models.)

17. Check the operation of the Safety Pressure Relief Valve. Simulate a 'Flame Failure' fault by isolating the supply at the gas cock and operating the boiler: 133 should be displayed.

18. Reassemble the appliance in reverse order, ensuring the front case panel is securely fitted. Recommission the boiler.

19. Complete the relevant Service Interval Record section of the Benchmark Commissioning Checklist at the rear of this publication and then hand it back to the user.

10.3 Specific Maintenance Operations - Changing Components
See Section 10.2 paragraph 2 for removal of the case panel, door etc.

10.3.1 Spark Ignition & Flame Sensing Electrodes (Fig. 49)

1. Note their position and disconnect the electrode leads. Remove the retaining screws securing each of the electrodes to the combustion box cover and remove the electrodes, noting their orientation.

2. Check the condition of the sealing gaskets and replace if necessary. Reassemble in reverse order (Fig. 49).

3. If satisfactory combustion readings are not obtained, ensure the electrode position is correct and perform the combustion check again.

(Fig. 49: spark ignition electrode and flame sensing electrode, each with a sealing gasket.)

10.3.2 Fan (Figs. 50 & 51)

1. Remove the clip securing the gas feed pipe to the air/gas venturi. Disconnect the pipe.

2. Undo the screws securing the air/gas collector to the extension piece and disconnect the fan electrical plugs (Fig. 50).

3. Remove the collector and fan assembly, being careful to retain the gasket.

4. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Undo the screws securing the fan to the collector. Retain the gasket.

5. Undo the screws securing the venturi to the fan (noting its position) and transfer it to the new fan, replacing the seal if necessary. Examine the gasket(s) and replace if necessary.

10.3.3 Air/Gas Venturi (Figs. 50 & 51)

1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the venturi.

2. Undo the screws securing the collector to the extension piece and disconnect the fan electrical plugs.

3. Remove the collector and fan assembly, being careful to retain the gasket.

4. Undo the screws securing the venturi to the fan (noting its position) and fit the new venturi, replacing the seal if necessary. Examine the gasket and replace if necessary.

5. After changing the venturi check the combustion - see Section 7.4.1.

(Fig. 50: cover gasket, air/gas collector, air box, fan, air/gas venturi, clip, gas feed pipe; control box removed for clarity. Fig. 51: seal, fan venturi.)

10.3.4 Burner (Fig. 52)

1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the air/gas venturi and disconnect the fan electrical plugs.

2. Undo the screws securing the air/gas collector to the extension piece. Note its position and remove the extension piece (where fitted) from the cover.

3. Undo the screws securing the burner. Withdraw the burner from the cover and replace with the new one.

4. Examine the gasket(s), replacing if necessary. Note that the gaskets are not the same!

5. After changing the burner check the combustion.

(Fig. 52: burner, burner gasket, extension piece (note orientation), gasket, air box, air/gas collector.)

10.3.5 Insulation (Fig. 53)

1. Undo the securing screw and remove the airbox, disengaging it from the fan venturi. Remove the clip securing the gas feed pipe to the air/gas venturi and disconnect the fan electrical plugs.

2. Remove the electrodes as described in Section 10.3.1.

3. Undo the nuts holding the cover to the heat exchanger. Draw the air/gas collector, fan and cover assembly away.

4. Remove the cover insulation piece.

5. Fit the new insulation carefully over the burner and align it with the slots for the electrodes.

6. If the rear insulation requires replacement, remove it and all debris from the heat exchanger. It may also be necessary to separately remove the spring clip from the pin in the centre of the heat exchanger and the ‘L’ shaped clips embedded in the insulation.

7.
Do not remove the shrink-wrapped coating from the replacement rear insulation. Keep the insulation vertical and press firmly into position.

8. Examine the cover seal and replace if necessary. Reassemble in reverse order.

(Fig. 53: heat exchanger rear insulation, seal, air/gas collector, spark ignition electrode, cover insulation, air box, electrode leads, flame sensing electrode; control box removed for clarity.)

10.3.6 Flue Sensor (Fig. 54)

1. Ease the retaining tab on the sensor away and disconnect the electrical plug.

3. Turn the sensor 90° anticlockwise to remove - it is a bayonet connection.

4. Reassemble in reverse order.

10.3.7 Igniter (Fig. 54)

1. Note the position of the ignition & sensing leads and disconnect them. Also disconnect the igniter feed plug.

2. Undo the screw securing the igniter mounting bracket to the left hand side panel. Remove the igniter and bracket and transfer the bracket to the new igniter.

3. Reassemble in reverse order, reconnecting the plug and leads to the igniter.

(Fig. 54: flue sensor and electrical plug; igniter, mounting bracket, spark connection (‘A’), earth connection (‘B’); control box removed for clarity.)

10.3.8 Heating Flow & Return Sensors (Fig. 55)

1. There is one sensor on the flow (red wires) and one sensor on the return (blue wires). Note: for access to the return sensor, first remove the fan and air/gas collector (see 10.3.2).

2. After noting its position, prise the sensor clip off the pipe and disconnect the plug.

3. Connect the plug to the new sensor and ease the clip onto the pipe as close to the heat exchanger as possible.

10.3.9 Safety Thermostat (Fig. 56)

1. Pull the two spade connections off the safety thermostat.

2. Remove the screws securing the thermostat to the mounting plate on the flow pipe.

3. Reassemble in reverse order, ensuring that the connections are pushed fully on.

10.3.10 DHW NTC Sensor (Fig. 56)

1. Turn off the mains cold water supply tap and draw off the residual domestic hot water.

2. Ease the retaining tab on the sensor away and disconnect the electrical plug.

3. Unscrew the sensor from the hydraulic outlet assembly. Examine the sealing washer, replacing if necessary.

4. Reassemble in reverse order. The plug will only fit one way.

(Fig. 55: heating flow sensor, retaining clip. Fig. 56: safety thermostat, DHW NTC sensor, hydraulic pressure sensor, plug; pump, gas valve assemblies and pipework removed for clarity.)

10.3.11 Pump - Head Only (Fig. 57)

1. Drain the boiler primary circuit and disconnect the electrical plug from the pump motor.

2. Remove the socket head screws securing the pump head to the body and draw the head away.

3. Reassemble in reverse order.

10.3.12 Pump - Complete (Fig. 58)

1. Drain the boiler primary circuit and disconnect the electrical plug from the pump motor.

2. Undo the two screws securing the body to the pipe and manifold and draw the pump forwards.

3. Unscrew the automatic air vent from the pump body.

4. Examine the ‘O’ ring seals on the return pipe and manifold, replacing if necessary.

5. Fit the air vent to the pump body and reassemble in reverse order.

10.3.13 Automatic Air Vent (Fig. 58)

1. Drain the boiler primary circuit and unscrew the automatic air vent from the pump body.

2. Examine the ‘O’ ring seal, replacing if necessary, and fit it to the new automatic air vent.

3. Reassemble in reverse order.

(Fig. 57: socket headed screw, pump body, pump head; control box removed for clarity. Fig. 58: automatic air vent, pump, flow pipe.)

10.3.14 Safety Pressure Relief Valve (Fig. 59)

1. Close the flow and return isolation taps and drain the primary circuit.

2. Disconnect the discharge pipework from the valve. Remove the sealing grommet.

3. Slacken the grub screw securing the pressure relief valve and remove it from the inlet assembly.

4.
On reassembly ensure that the ‘O’ ring is in place and the sealing grommet is correctly refitted to maintain the integrity of the case seal.

(Fig. 59: safety pressure relief valve, ‘O’ ring seal, grub screw, discharge pipe.)

10.3.15 Heating Pressure Gauge (Figs. 60 & 61)

1. Close the flow and return isolation taps and drain the primary circuit.

2. Hinge the control box downwards. Remove the clip securing the pressure gauge capillary to the hydraulic assembly.

3. Disengage the securing tabs and open the terminal block cover. Prise apart the clips that hold the gauge cap.

4. Remove the gauge, cap and gasket.

5. Fit the new gauge, ensuring that the capillary is routed to prevent any sharp bends. Locate the ridge on the gauge body in the slot in the control box.

6. Reassemble in reverse order and ensure the gasket is in position to maintain the integrity of the case seal.

(Fig. 60: clip, heating pressure gauge capillary; control box removed for clarity. Fig. 61: heating pressure gauge.)

10.3.16 Plate Heat Exchanger (Figs. 62 & 63)

1. Close the flow & return isolation taps and the cold mains inlet. Drain the primary circuit and draw off any residual DHW.

2. Refer to Section 10.2 paragraphs 5 to 9 and remove the fan etc.

3. Undo the screws securing the plate heat exchanger to the hydraulic assembly.

4. Withdraw the plate heat exchanger by manoeuvring it to the rear of the boiler, then upwards and to the left to remove.

5. There are four rubber seals between the hydraulic assembly and heat exchanger which may need replacement.

6. Ease the seals out of the hydraulic assembly. Replace carefully, ensuring that each seal is inserted parallel and pushed fully in.

7. When fitting the new heat exchanger note that the right hand location stud is offset towards the centre (Fig. 62).

8. Reassemble in reverse order.

(Fig. 62: plate heat exchanger seals, rubber seal, R.H. stud (note offset); control box removed for clarity.)

10.3.17 Hydraulic Pressure Sensor (Fig. 63)

1. Close the flow and return isolation taps and drain the primary circuit. For ease of access remove the fan and collector assembly.

2. Remove the plug from the sensor and pull the retaining clip upwards.

3. Reassemble in reverse order.

(Fig. 63: retaining clip, plug, hydraulic pressure sensor; pump, gas valve assemblies and pipework removed for clarity.)

10.3.18 DHW Flow Regulator & Filter (Fig. 64)

1. Close the cold mains inlet and draw off any residual DHW.

2. Pull off the hall effect sensor. Undo the filter assembly from the inlet/return manifold.

10.3.19 DHW Flow Sensor (‘Hall Effect’ Sensor) (Fig. 65)

1. Pull the sensor off the DHW inlet manifold.

2. Disconnect the plug from the sensor and connect it to the new component.

3. Fit the new sensor, ensuring it is correctly oriented and fully engaged over the manifold.

(Fig. 64: DHW flow regulator & filter, hydraulic inlet assembly; control box removed for clarity. Fig. 65: DHW flow sensor.)

10.3.20 Diverter Valve Motor (Fig. 66)

1. Disconnect the multi-pin plug. Pull off the retaining clip and remove the motor.

2. The motor can now be replaced.

3. When fitting the new motor it will be necessary to hold the unit firmly while depressing the valve return spring.

(Fig. 66: diverter valve motor, multi-pin plug, retaining clip, valve assembly; pump, gas valve assemblies and pipework removed for clarity.)

10.3.21 Main P.C.B. (Fig. 67)

4. Undo the 5 securing screws and remove the P.C.B. It is retained at the left by two spring latches and the right hand edge locates in a slot.

5. Reassemble in reverse order, ensuring that the harnesses to the Control P.C.B. and terminal M2 are routed under the Main P.C.B. Check the operation of the boiler.

10.3.22 Boiler Control P.C.B. (Fig. 67)

4. Undo the 5 securing screws and remove the P.C.B.
It is retained at the left by two spring latches and the right hand edge locates in a slot.

5. Disconnect the link harness between the Main & Control P.C.B.s and undo the 4 screws securing the Control P.C.B.

6. Remove the Control P.C.B. and fit the new component. Reassemble in reverse order, ensuring that the harnesses to the Control P.C.B. and terminal M2 are routed under the Main P.C.B. Check the operation of the boiler.

(Fig. 67: Main P.C.B., Boiler Control P.C.B., control box cover.)

10.3.23 Expansion Vessel (Fig. 69)

1. Close the flow and return isolation taps and drain the boiler primary circuit.

2. Undo the nut on the pipe connection at the bottom of the vessel, and slacken the nut on the hydraulic inlet assembly.

3. Remove the screws securing the support bracket, and withdraw the bracket.

4. Whilst supporting the vessel, undo and remove the locknut securing the vessel spigot to the boiler top panel.

5. Manoeuvre the vessel out of the boiler.

6. Reassemble in reverse order.

(Fig. 69: expansion vessel, lock nut, support bracket.)

10.3.24 Gas Valve (Fig. 68)

After replacing the valve the CO2 must be checked and adjusted as detailed in Section 10.3.25, Setting the Gas Valve. Only change the valve if a suitable calibrated combustion analyser is available, operated by a competent person - see Section 10.1.3.

1. Undo the screw and disconnect the electrical plug.

2. Turn the gas cock off and undo the nut on the gas valve inlet underneath the boiler.

3. Undo the nut on the gas valve outlet. Ease the pipe aside. NOTE: the gas nozzle injector is inserted in the gas valve outlet.

4. Remove the screws securing the gas valve to the boiler bottom panel. Remove the valve.

5. Transfer the gas nozzle injector to the new valve, ensuring it sits in the valve outlet. Examine the sealing washers, replacing if necessary.

6. Reassemble in reverse order. Check gas tightness & CO2!

(Fig. 68: gas valve, gas feed pipe, washers, gas nozzle injector, electrical plug, gas cock.)

10.3.25 Setting the Gas Valve (CO2 Check)

The CO2 must only be checked, and the valve adjusted, if a suitable calibrated combustion analyser is available, operated by a competent person - see Section 10.1.3.

1. The combustion (CO2) may be checked after running the boiler for several minutes. To do this it is necessary to operate the boiler in ‘Chimney Sweep Mode’:
• This function must not be activated whilst the burner is lit.
• The case front panel must be fitted when checking combustion.
• Ensure the system is cold.
To enter the mode, press and hold the indicated button combination for at least 6 seconds. The chimney sweep symbol is displayed briefly, then flashes alternately with the symbol for MAXIMUM HEATING input, MAXIMUM DHW input or MINIMUM input; press the adjustment buttons to change the input. The valve must be checked and set at MAXIMUM DHW input & MINIMUM input.

2. The CO2 should be 8.7% ± 0.2 at MAXIMUM. It is possible to alter the CO2 by adjustment of the gas valve: at maximum rate the Throttle Adjustment Screw should be turned, using a suitable 2.5 mm hexagon key, until the correct reading is obtained (Fig. 70). Turning clockwise will reduce the CO2; anticlockwise will increase the CO2.

3. The CO2 must then be checked at minimum rate. The CO2 should be 8.4% ± 0.2 at MINIMUM.

4. With the boiler on minimum, the Offset Adjustment Screw must be altered, using a suitable 4 mm hexagon key, after removing the cap (Fig. 70). Turning anticlockwise will reduce the CO2; clockwise will increase the CO2.

5. Check the Combustion Performance (CO/CO2 ratio). This must be less than 0.004. Refit the flue sampling point plug and ensure there is no leakage of products.
(Fig. 70: gas valve - Throttle Adjustment Screw (cover removed), Offset Adjustment Screw (cap fitted), EV1, EV2, P. OUT, VENT, P.R. ADJ., RQ ADJ. Fig. 71: adjustment directions - throttle screw: reduce/increase CO2 at max. rate; offset screw: reduce/increase CO2 at min. rate.)

If the CO2 is reset at minimum rate it must be rechecked at maximum rate and adjusted if required. If the CO2 is reset at maximum rate it must be rechecked at minimum rate and adjusted if required.

Do not turn the adjustment screws more than 1/8 of a turn at a time. Allow the analyser reading to settle before any further adjustment.

11 Troubleshooting

11.1 Error Codes

Table of Error Codes:
20 - Central Heating NTC Fault
28 - Flue NTC Fault
40 - Central Heating Return NTC Fault
109 (R) - Possible Circulation Fault
110 (R) - Safety Thermostat Operated (pump fault)
111 (R) - Safety Thermostat Operated (over temperature)
117 - Primary System Water Pressure Too High
118 - Primary System Water Pressure Too Low
125 (R) - Circulation Fault (Primary)
128 - Flame Failure (no lock-out)
130 - Flue NTC Operated
133 (R) - Interruption Of Gas Supply or Flame Failure
151 (R) - Flame Failure
160 (R) - Fan or Fan Wiring Fault
321 - Hot Water NTC Fault
384 - False Flame

The button can be pressed so that the display shows the following information:
‘00’ alternates with Sub-Code (only when a fault is present on the boiler) or ‘000’
‘01’ alternates with CH Flow Temperature
‘02’ alternates with Outside Temperature (where sensor fitted)
‘03’ alternates with stored DHW Temperature
‘04’ alternates with DHW Temperature
‘05’ alternates with System Water Pressure
‘06’ alternates with CH Return Temperature
‘07’ alternates with Flue Temperature
‘08’ - not used
‘09’ alternates with Collector Temperature
‘20’ alternates with Manufacturer information

1. If a fault occurs on the boiler an error code may be shown by the facia display.

2. The codes are a flashing two- or three-digit number, preceded by the fault symbol. A code of 20, 28, 40, 160 or 321 indicates a possible faulty component.
• 110 and 111 indicate overheat of the primary system water.
• 117 is displayed when the primary water pressure is greater than 2.7 bar. Restoring the correct pressure will reset the error.
• 118 is displayed when the primary water pressure is less than 0.5 bar. Restoring the correct pressure will reset the error.
• 133 indicates that the gas supply has been interrupted, ignition has failed or the flame has not been detected.
• 128 is displayed if there has been a flame failure during normal operation.
• 125 is displayed in either of two situations: (i) if within 15 seconds of the burner lighting the boiler temperature has not changed by 1°C; (ii) if within 10 minutes of the burner lighting the boiler actual temperature twice exceeds the selected temperature by 30°. In these instances poor primary circulation is indicated.

3. By pressing the ‘Reset’ button for 1 to 3 seconds when 110, 125 & 133 are displayed it is possible to relight the boiler.

4. If this does not have any effect, or the codes are displayed regularly, further investigation is required.

11.2 Fault Finding

1. Check that gas, water and electrical supplies are available at the boiler.
2. Electrical supply: 230 V ~ 50 Hz.
3. The preferred minimum gas pressure is 20 mbar (NG).
4. Carry out electrical system checks, i.e. Earth Continuity, Resistance to Earth, Short Circuit and Polarity, with a suitable meter. NOTE: these checks must be repeated after any servicing or fault finding.
5. Ensure all external controls are calling for heat and check all external and internal fuses.

Before any servicing or replacement of parts, ensure the gas and electrical supplies are isolated.

Refer to the “Illustrated Wiring Diagram” for the position of terminals and components.

Central Heating / Domestic Hot Water - Follow operational sequence

(The printed manual presents these pages as two flowcharts, one for Central Heating and one for Domestic Hot Water; only the recoverable decision steps are summarised here.)
• Turn on mains power: the display should illuminate; if not, check the supply (Section A).
• Set the Central Heating temperature to maximum (or, for DHW, set the Hot Water temperature to maximum & fully open a hot tap): the demand symbol should flash and the pump run. If not, ensure all controls and programmers are calling for heat, check that the DHW flow rate is greater than 2 litres/min (Section L), or go to Section B.
• Fan runs at correct speed; if not (160 flashing), go to Section C.
• Spark at the ignition electrodes for up to 5 seconds & for 4 attempts; if not (133 and R flashing), go to Section F, then press the reset button for 1 to 3 seconds.
• Burner lights; if not (133 flashing), go to Section G, otherwise Section E.
• Burner stays lit after 5 seconds with the flame displayed; if not, check polarity.
• If 109 flashes, or 125 flashes after 1 minute, go to Section J; if 110 or 111 flash, go to Section H.
• Diverter valve opens to the Central Heating (or Domestic Hot Water) circuit; if not, go to Section K.
• Burner modulates to maintain the set temperature; if not, check the Heating Flow sensor (CH) or the CH NTC sensor (DHW) and go to Section D. If 130 flashes, go to Section M.
• Burner goes out and the fan stops after 15 seconds: boiler operation correct.

Fault Finding Solutions

Section A
Is there 230 V at:
1. Main terminals L and N? If not, check the electrical supply.
2. The main terminal fuse (connection OK at X40)? If not, replace the fuse.
3. The PCB X10 connector? If not, check the wiring.
If 230 V is present at main terminals L and N but the display is not illuminated, suspect a display or Main PCB fault.

Section B
Switch to DHW mode at maximum flow & press reset. During the next three minutes check:
• 230 V at the PCB X11 connector, terminals 3 to 4? If not, replace the PCB.
• 230 V at the pump? If not, check the wiring; if present and the pump does not run, replace the pump.

Section C
• Fan connections correct at the fan & PCB X11 and X23 connectors (see Wiring Diagram)? If not, make the connections.
• 230 V at the PCB X11 connector (between blue & brown - see Wiring Diagram)? If not, replace the PCB.
• Otherwise the fan is jammed or the wiring faulty: replace the fan or wiring.

Section D
Temperature sensor faulty. Check correct location and wiring. Cold resistance is approximately 10k @ 25°C (CH & DHW sensors) or 20k @ 25°C (flue sensor); resistance reduces with increase in temperature. If the reading is wrong, replace the sensor & reset the boiler; otherwise check the wiring and the PCB X14 connector (see Wiring Diagram).

Section E
Is there gas at the burner? If not, ensure the gas supply is on and purged; then replace the gas valve & check combustion, or replace the PCB.

Section F
Check and correct if necessary:
1. Ignition electrode and lead.
2. Electrode connection.
3. Spark gap and position (Fig. 72: spark ignition electrode 4 ±0.5, flame sensing electrode 7.5 ±1, electrode leads 10 ±1, burner viewing window).
Then check the wiring (see Diagram) and for 230 V at PCB X14 (between blue & brown from the igniter): if present, replace the igniter; if not, replace the PCB.

Section G
1. Check the supply pressure at the gas cock test point (Fig. 41): Natural Gas - minimum 17 mbar.
2. Check and correct if necessary: the setting of the gas valve (CO2 values - see Section 10.3.25); the flame sensing electrode and lead connections; the flame sensing electrode position.
Then replace the sensing electrode or PCB.

Section H
Safety Thermostat operated or faulty. Check for and correct any system faults. Allow to cool. Is the continuity across the thermostat terminals more than 1.5 ohm? If so, replace the safety thermostat; if not, check the Flow & Return Sensors (see Section D). Is 110 or 111 still flashing? If so, replace the PCB.

Section I
Is the CH system pressure less than 0.5 bar or greater than 2.7 bar on the digital display? If so, correct the system fault, restore the correct system pressure and ensure that the boiler and system are fully vented. If not, check the wiring and the PCB X22 connector for approx. 5 V DC between green & black (see Wiring Diagram): if absent, replace the PCB; otherwise replace the hydraulic pressure sensor.

Section J
Check the flow temperature sensor connections and position. Cold resistance approximately 10k @ 25°C (CH sensors); resistance reduces with increase in temperature. If the reading is wrong, replace the sensor; otherwise go to Section B.

Section K
Is there 230 V at:
1. The PCB X13 connector terminals, between blue & black (central heating mode) or blue & brown (domestic hot water mode) - see Wiring Diagram? If not, replace the PCB.
2. The diverter valve motor (check the motor cable)? If so, replace the diverter valve motor.

Section L
Is the mains water filter & assembly clean, and the rotor moving freely? If not, clean or replace. If so, check the wiring and the PCB X22 connector for approx. 5 V DC between red & blue from the Hall effect sensor (see Wiring Diagram): if present, replace the Hall Effect Sensor; if not, replace the PCB.

Section M
1. Temperature sensors faulty: cold resistance approximately 10k @ 25°C (CH sensor) or 20k @ 25°C (flue sensor); resistance reduces with increase in temperature. If the reading is wrong, replace the sensor.
2. If the pump is running, the heat exchanger could be obstructed: replace the heat exchanger.

12 Decommissioning

12.1 Decommissioning Procedure

1. Disconnect the gas & electric supplies and isolate them.
2. Drain the primary circuit and disconnect the filling device.
3. Dismantle the chimney system and remove the boiler from the wall mounting frame.

13 Spare Parts

13.1 General

1. If, following the annual inspection or maintenance, any part of the boiler is found to need replacing, use Genuine Baxi Spare Parts only.

13.2 Spare Parts List (Key / Description / Manufacturer’s Part No.)
A - Fan: 720768101; Fan (40 only): 7211861
B - Burner (24 & 28): 7212447; Burner (33): 7212449; Burner (40): 7212448
C - Spark Ignition Electrode: 720767301
D - Flame Sensing Electrode: 7211855
E - Gas Valve: 7214341
F - Safety Thermostat: 720765301
G - Hall Effect Sensor: 720788201
I - Plate Heat Exchanger: 720852401
J - Diverter Valve Motor: 720788601
K - Pump: 7220533
M - Heating Flow/Return Sensor: 720747101
N - DHW NTC Sensor: 720789201
O - Pump Automatic Air Vent: 720787601
P - Hydraulic Pressure Switch: 720789001
Q - Heating Pressure Gauge: 7212896
R - Flue Sensor: 720851401
S - PCB (24 ErP): 7222707; PCB (28 ErP): 7222709; PCB (33 ErP): 7222711; PCB (40 ErP): 7222712
U - Ø5.0 Gas Nozzle Injector (24): 7211862; Ø5.6 (28): 7214344; Ø6.6 (33): 7211864; Ø6.8 (40): 7214346
V - Air/Gas Venturi (24): 7211858; Air/Gas Venturi (28): 7211859; Air/Gas Venturi (33 & 40): 7211860
W - Boiler Control HMI PCB: 7211868

14 Notes

(Benchmark Commissioning Checklist residue: heating zone valves fitted / not required; gas inlet pressure at maximum rate (mbar) OR gas rate (ft³/hr) OR burner operating pressure at maximum rate (mbar); CO/CO2 ratio; the heating and hot water system complies with the appropriate Building Regulations. *All installations in England and Wales must be to Local Authority Building Control (LABC) either directly or through a Competent Persons Scheme; a Building Regulations Compliance Certificate will then be issued to the customer.)
Kinds of types

User-defined types

Each class is also a type. Any instance of a subclass is also compatible with all superclasses. All values are compatible with the object type (and also the Any type).

```python
class A:
    def f(self) -> int:  # Type of self inferred (A)
        return 2

class B(A):
    def f(self) -> int:
        return 3
    def g(self) -> int:
        return 4

a = B()  # type: A  # OK (explicit type for a; override type inference)
print(a.f())  # 3
a.g()  # Type check error: A has no method g
```

The Any type

A value with the Any type is dynamically typed. Mypy doesn't know anything about the possible runtime types of such a value. Any operations are permitted on the value, and the operations are checked at runtime, similar to normal Python code without type annotations.

Any is compatible with every other type, and vice versa. No implicit type check is inserted when assigning a value of type Any to a variable with a more precise type:

```python
from typing import Any

a = None  # type: Any
s = ''    # type: str
a = 2  # OK
s = a  # OK
```

Declared (and inferred) types are erased at runtime. They are basically treated as comments, and thus the above code does not generate a runtime error, even though s gets an int value when the program is run. Note that the declared type of s is actually str!

If you do not define a function return value or argument types, these default to Any:

```python
def show_heading(s) -> None:
    print('=== ' + s + ' ===')  # No static type checking, as s has type Any

show_heading(1)  # OK (runtime error only; mypy won't generate an error)
```

You should give a statically typed function an explicit None return type even if it doesn't return a value, as this lets mypy catch additional type errors:

```python
import time

def wait(t: float):  # Implicit Any return value
    print('Waiting...')
    time.sleep(t)

if wait(2) > 1:  # Mypy doesn't catch this error!
    ...
```

If we had used an explicit None return type, mypy would have caught the error:

```python
import time

def wait(t: float) -> None:
    print('Waiting...')
    time.sleep(t)

if wait(2) > 1:  # Error: can't compare None and int
    ...
```
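As a further illustration of how Any propagates (a minimal sketch, not from the original text; the parse_fields helper is hypothetical), a function with an Any return type lets its result flow into any annotated variable without a static check:

```python
from typing import Any

def parse_fields(line: str) -> Any:
    # Hypothetical helper: the Any return type tells mypy nothing
    # about the real runtime type (here, a list of strings).
    return line.split(',')

fields: str = parse_fields('a,b,c')  # Accepted by mypy: Any is compatible with str
print(fields)  # At runtime fields is actually ['a', 'b', 'c'] - annotations are erased
```

This is the same erasure behaviour described above for s: the mismatch is silent both statically (because of Any) and at runtime (because declared types are not checked).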
The Any type is discussed in more detail in section Dynamically typed code.

Note: A function without any types in the signature is dynamically typed. The body of a dynamically typed function is not checked statically, and local variables have implicit Any types. This makes it easier to migrate legacy Python code to mypy, as mypy won't complain about dynamically typed functions.

Tuple types

The type Tuple[T1, ..., Tn] represents a tuple with the item types T1, ..., Tn:

```python
from typing import Tuple

def f(t: Tuple[int, str]) -> None:
    t = 1, 'foo'    # OK
    t = 'foo', 1    # Type check error
```

A tuple type of this kind has exactly a specific number of items (2 in the above example). Tuples can also be used as immutable, varying-length sequences. You can use the type Tuple[T, ...] (with a literal ...; it's part of the syntax) for this purpose. Example:

```python
from typing import Tuple

def print_squared(t: Tuple[int, ...]) -> None:
    for n in t:
        print(n, n ** 2)

print_squared(())         # OK
print_squared((1, 3, 5))  # OK
print_squared([1, 2])     # Error: only a tuple is valid
```

Note: Usually it's a better idea to use Sequence[T] instead of Tuple[T, ...], as Sequence is also compatible with lists and other non-tuple sequences.

Note: Tuple[...] is not valid as a base class outside stub files. This is a limitation of the typing module. One way to work around this is to use a named tuple as a base class (see section Named tuples).

Callable types (and lambdas)

You can pass around function objects and bound methods in statically typed code. The type of a function that accepts arguments A1, ..., An and returns Rt is Callable[[A1, ..., An], Rt]. Example:

```python
from typing import Callable

def twice(i: int, next: Callable[[int], int]) -> int:
    return next(next(i))

def add(i: int) -> int:
    return i + 1

print(twice(3, add))  # 5
```

You can only have positional arguments, and only ones without default values, in callable types. These cover the vast majority of uses of callable types, but sometimes this isn't quite enough.
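The section above also mentions bound methods; a minimal sketch (the Counter class and apply function are made-up examples, not from the original text) showing that a bound method fits where a Callable[[int], int] is expected:

```python
from typing import Callable

class Counter:
    def __init__(self) -> None:
        self.n = 0

    def add(self, i: int) -> int:
        self.n += i
        return self.n

def apply(f: Callable[[int], int], x: int) -> int:
    # f can be a plain function, a lambda, or a bound method:
    # the self argument of c.add is already bound, so its
    # remaining signature is (int) -> int.
    return f(x)

c = Counter()
print(apply(c.add, 5))  # the bound method c.add matches Callable[[int], int]
```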
Mypy recognizes a special form Callable[..., T] (with a literal ...) which can be used in less typical cases. It is compatible with arbitrary callable objects that return a type compatible with T, independent of the number, types or kinds of arguments. Mypy lets you call such callable values with arbitrary arguments, without any checking; in this respect they are treated similar to a (*args: Any, **kwargs: Any) function signature. Example:

```python
from typing import Callable

def arbitrary_call(f: Callable[..., int]) -> int:
    return f('x') + f(y=2)  # OK

arbitrary_call(ord)   # No static error, but fails at runtime
arbitrary_call(open)  # Error: does not return an int
arbitrary_call(1)     # Error: 'int' is not callable
```

Lambdas are also supported. The lambda argument and return value types cannot be given explicitly; they are always inferred based on context using bidirectional type inference:

```python
l = map(lambda x: x + 1, [1, 2, 3])  # Infer x as int and l as List[int]
```

If you want to give the argument or return value types explicitly, use an ordinary, perhaps nested function definition.

Extended Callable types

Using the argument specifiers from the mypy_extensions module, you can specify Callable types with named, optional, keyword-only and variadic arguments. This allows one to more closely emulate the full range of possibilities given by the def statement in Python. (The def func example below is reconstructed from the surviving fragment of the original listing.)

```python
from typing import Callable
from mypy_extensions import (Arg, DefaultArg, NamedArg,
                             DefaultNamedArg, VarArg, KwArg)

def func(__a: int,          # a nameless positional argument
         b: int,
         c: int = 0,
         *args: int,
         d: int,
         e: int = 0,
         **kwargs: int) -> int:
    ...

F = Callable[[int,          # or Arg(int)
              Arg(int, 'b'),
              DefaultArg(int, 'c'),
              VarArg(int),
              NamedArg(int, 'd'),
              DefaultNamedArg(int, 'e'),
              KwArg(int)],
             int]

f: F = func
```

Argument specifiers are special function calls that can specify the following aspects of an argument:

- its type (the only thing that the basic format supports)
- its name (if it has one)
- whether it may be omitted
- whether it may or must be passed using a keyword
- whether it is a *args argument (representing the remaining positional arguments)
- whether it is a **kwargs argument (representing the remaining keyword arguments)

The following functions are available in mypy_extensions for this purpose:

```python
def Arg(type=Any, name=None):
    # A normal, mandatory, positional argument.
    # If the name is specified it may be passed as a keyword.

def DefaultArg(type=Any, name=None):
    # An optional positional argument (i.e. with a default value).
    # If the name is specified it may be passed as a keyword.

def NamedArg(type=Any, name=None):
    # A mandatory keyword-only argument.

def DefaultNamedArg(type=Any, name=None):
    # An optional keyword-only argument (i.e. with a default value).

def VarArg(type=Any):
    # A *args-style variadic positional argument.
    # A single VarArg() specifier represents all remaining
    # positional arguments.

def KwArg(type=Any):
    # A **kwargs-style variadic keyword argument.
    # A single KwArg() specifier represents all remaining
    # keyword arguments.
```

In all cases, the type argument defaults to Any, and if the name argument is omitted the argument has no name (the name is required for NamedArg and DefaultNamedArg).

A basic Callable such as

```python
MyFunc = Callable[[int, str, int], float]
```

is equivalent to the following:

```python
MyFunc = Callable[[Arg(int), Arg(str), Arg(int)], float]
```

A Callable with unspecified argument types, such as

```python
MyOtherFunc = Callable[..., int]
```

is (roughly) equivalent to

```python
MyOtherFunc = Callable[[VarArg(), KwArg()], int]
```

Note: This feature is experimental. Details of the implementation may change and there may be unknown limitations. IMPORTANT: Each of the functions above currently just returns its type argument, so the information contained in the argument specifiers is not available at runtime. This limitation is necessary for backwards compatibility with the existing typing.py module as present in the Python 3.5+ standard library and distributed via PyPI.

Union types

Python functions often accept values of two or more different types. You can use overloading to model this in statically typed code, but union types can make code like this easier to write.

Use the Union[T1, ..., Tn] type constructor to construct a union type. For example, the type Union[int, str] is compatible with both integers and strings.
You can use an isinstance() check to narrow down the type to a specific type: from typing import Union def f(x: Union[int, str]) -> None: x + 1 # Error: str + int is not valid if isinstance(x, int): # Here type of x is int. x + 1 # OK else: # Here type of x is str. x + 'a' # OK f(1) # OK f('x') # OK f(1.1) # Error The type of None and optional types¶ Mypy treats the type of None as special. None is a valid value for every type, which resembles null in Java. Unlike Java, mypy doesn’t treat primitive types specially: None is also valid for primitive types such as int and float. Note See Experimental strict optional type and None checking for an experimental mode which allows mypy to check None values precisely. When initializing a variable as None, None is usually an empty placeholder value, and the actual value has a different type. This is why you need to annotate an attribute in a case like this: class A: def __init__(self) -> None: self.count = None # type: int Mypy will complain if you omit the type annotation, as it wouldn’t be able to infer a non-trivial type for the count attribute otherwise. Mypy generally uses the first assignment to a variable to infer the type of the variable. However, if you assign both a None value and a non-None value in the same scope, mypy can often do the right thing: def f(i: int) -> None: n = None # Inferred type int because of the assignment below if i > 0: n = i ... Often it’s useful to know whether a variable can be None. For example, this function accepts a None argument, but it’s not obvious from its signature: def greeting(name: str) -> str: if name: return 'Hello, {}'.format(name) else: return 'Hello, stranger' print(greeting('Python')) # Okay! print(greeting(None)) # Also okay!
Mypy lets you use Optional[t] to document that None is a valid argument type: from typing import Optional def greeting(name: Optional[str]) -> str: if name: return 'Hello, {}'.format(name) else: return 'Hello, stranger' Mypy treats this as semantically equivalent to the previous example, since None is implicitly valid for any type, but it’s much more useful for a programmer who is reading the code. You can equivalently use Union[str, None], but Optional is shorter and more idiomatic. Note None is also used as the return type for functions that don’t return a value, i.e. that implicitly return None. Mypy doesn’t use NoneType for this, since it would look awkward, even though that is the real name of the type of None (try type(None) in the interactive interpreter to see for yourself). Experimental strict optional type and None checking¶ Currently, None is a valid value for each type, similar to null or NULL in many languages. However, you can use the experimental --strict-optional command line option to tell mypy that types should not include None by default. The Optional type modifier is then used to define a type variant that includes None, such as Optional[int]: from typing import Optional def f() -> Optional[int]: return None # OK def g() -> int: ... return None # Error: None not compatible with int Also, most operations will not be allowed on unguarded None or Optional values: def f(x: Optional[int]) -> int: return x + 1 # Error: Cannot add None and int Instead, an explicit None check is required. Mypy has powerful type inference that lets you use regular Python idioms to guard against None values. For example, mypy recognizes is None checks: def f(x: Optional[int]) -> int: if x is None: return 0 else: # The inferred type of x is just int here. return x + 1 Mypy will infer the type of x to be int in the else block due to the check against None in the if condition. Note --strict-optional is experimental and still has known issues. 
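Besides explicit is None branches, mypy's narrowing also understands assert statements, which is often the most convenient idiom under --strict-optional. A minimal sketch (not part of the original docs; the function name is illustrative):

```python
from typing import Optional

def first_char(s: Optional[str]) -> str:
    # Under --strict-optional, mypy narrows Optional[str] to str
    # after this assert, just as it does after an `is None` check.
    assert s is not None
    return s[0]

print(first_char("mypy"))  # prints "m"
```

At runtime the assert also acts as a guard: passing None raises AssertionError instead of failing later with a confusing TypeError.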
The NoReturn type¶ Mypy provides support for functions that never return. For example, a function that unconditionally raises an exception: from mypy_extensions import NoReturn def stop() -> NoReturn: raise Exception('no way') Mypy will ensure that functions annotated as returning NoReturn truly never return, either implicitly or explicitly. Mypy will also recognize that the code after calls to such functions is unreachable and will behave accordingly: def f(x: int) -> int: if x == 0: return x stop() return 'whatever works' # No error in an unreachable block Install mypy_extensions using pip to use NoReturn in your code. Python 3 command line: python3 -m pip install --upgrade mypy-extensions This works for Python 2: pip install --upgrade mypy-extensions Class name forward references¶ Python does not allow references to a class object before the class is defined. Thus this code does not work as expected: def f(x: A) -> None: # Error: Name A not defined .... class A: ... In cases like these you can enter the type as a string literal — this is a forward reference: def f(x: 'A') -> None: # OK ... class A: ... Of course, instead of using a string literal type, you could move the function definition after the class definition. This is not always desirable or even possible, though. Any type can be entered as a string literal, and you can combine string-literal types with non-string-literal types freely: def f(a: List['A']) -> None: ... # OK def g(n: 'int') -> None: ... # OK, though not useful class A: pass String literal types are never needed in # type: comments. String literal types must be defined (or imported) later in the same module. They cannot be used to leave cross-module references unresolved. (For dealing with import cycles, see Import cycles.) Type aliases¶ In certain situations, type names may end up being long and painful to type: def f() -> Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]]: ... 
When cases like this arise, you can define a type alias by simply assigning the type to a variable: AliasType = Union[List[Dict[Tuple[int, str], Set[int]]], Tuple[str, List[str]]] # Now we can use AliasType in place of the full name: def f() -> AliasType: ... Type aliases can be generic; in this case they can be used in two variants: Subscripted aliases are equivalent to the original types with the type variables substituted; the number of type arguments must match the number of free type variables in the generic type alias. Unsubscripted aliases are treated as the original types with the free variables replaced with Any. Examples (following PEP 484): from typing import TypeVar, Iterable, Tuple, Union, Callable S = TypeVar('S') TInt = Tuple[int, S] UInt = Union[S, int] CBack = Callable[..., S] def response(query: str) -> UInt[str]: # Same as Union[str, int] ... def activate(cb: CBack[S]) -> S: # Same as Callable[..., S] ... table_entry: TInt # Same as Tuple[int, Any] T = TypeVar('T', int, float, complex) Vec = Iterable[Tuple[T, T]] def inproduct(v: Vec[T]) -> T: return sum(x*y for x, y in v) def dilate(v: Vec[T], scale: T) -> Vec[T]: return ((x * scale, y * scale) for x, y in v) v1: Vec[int] = [] # Same as Iterable[Tuple[int, int]] v2: Vec = [] # Same as Iterable[Tuple[Any, Any]] v3: Vec[int, int] = [] # Error: Invalid alias, too many type arguments! Type aliases can be imported from modules like any other names. Aliases can target other aliases (although building complex chains of aliases is not recommended: it impedes code readability, thus defeating the purpose of using aliases). Following the previous examples: from typing import TypeVar, Generic, Optional from first_example import AliasType from second_example import Vec def fun() -> AliasType: ... T = TypeVar('T') class NewVec(Generic[T], Vec[T]): ... for i, j in NewVec[int](): ... OIntVec = Optional[Vec[int]] Note A type alias does not create a new type.
It’s just a shorthand notation for another type – it’s equivalent to the target type. For generic type aliases this means that the variance of the type variables used in the alias definition does not apply to the alias. A parameterized generic alias is treated simply as the original type with the corresponding type variables substituted. NewTypes¶ There are also situations where a programmer might want to avoid logical errors by creating simple derived classes. For example: class UserId(int): pass def get_by_user_id(user_id: UserId): ... However, this approach introduces some runtime overhead. To avoid this, the typing module provides a helper function NewType that creates simple unique types with almost zero runtime overhead. Mypy will treat the statement Derived = NewType('Derived', Base) as being roughly equivalent to the following definition: class Derived(Base): def __init__(self, _x: Base) -> None: ... However, at runtime, NewType('Derived', Base) will return a dummy function that simply returns its argument: def Derived(_x): return _x Mypy will require explicit casts from int where UserId is expected, while implicitly casting from UserId where int is expected. Examples: from typing import NewType UserId = NewType('UserId', int) def name_by_id(user_id: UserId) -> str: ... UserId('user') # Fails type check name_by_id(42) # Fails type check name_by_id(UserId(42)) # OK num = UserId(5) + 1 # type: int NewType accepts exactly two arguments. The first argument must be a string literal containing the name of the new type and must equal the name of the variable to which the new type is assigned. The second argument must be a properly subclassable class, i.e., not a type construct like Union, etc. The function returned by NewType accepts only one argument; this is equivalent to supporting only one constructor accepting an instance of the base class (see above). Example: from typing import NewType.
Note Note that unlike type aliases, NewType will create an entirely new and unique type when used. The intended purpose of NewType is to help you detect cases where you accidentally mixed together the old base type and the new derived type. For example, the following will successfully typecheck when using type aliases: UserId = int def name_by_id(user_id: UserId) -> str: ... name_by_id(3) # ints and UserId are synonymous But a similar example using NewType will not typecheck: from typing import NewType UserId = NewType('UserId', int) def name_by_id(user_id: UserId) -> str: ... name_by_id(3) # int is not the same as UserId Named tuples¶ Mypy recognizes named tuples and can type check code that defines or uses them. In this example, we can detect code trying to access a missing attribute: from collections import namedtuple Point = namedtuple('Point', ['x', 'y']) p = Point(x=1, y=2) print(p.z) # Error: Point has no attribute 'z' If you use namedtuple to define your named tuple, all the items are assumed to have Any types. That is, mypy doesn’t know anything about item types. You can use typing.NamedTuple to also define item types: from typing import NamedTuple Point = NamedTuple('Point', [('x', int), ('y', int)]) p = Point(x=1, y='x') # Argument has incompatible type "str"; expected "int" Python 3.6 will have an alternative, class-based syntax for named tuples with types. Mypy supports it already: from typing import NamedTuple class Point(NamedTuple): x: int y: int p = Point(x=1, y='x') # Argument has incompatible type "str"; expected "int" The type of class objects¶ Sometimes you want to talk about class objects that inherit from a given class. This can be spelled as Type[C] where C is a class. In other words, when C is the name of a class, using C to annotate an argument declares that the argument is an instance of C (or of a subclass of C), but using Type[C] as an argument annotation declares that the argument is a class object deriving from C (or C itself).
For example, assume the following classes: class User: # Defines fields like name, email class BasicUser(User): def upgrade(self): """Upgrade to Pro""" class ProUser(User): def pay(self): """Pay bill""" Note that ProUser doesn’t inherit from BasicUser. Here’s a function that creates an instance of one of these classes if you pass it the right class object: def new_user(user_class): user = user_class() # (Here we could write the user object to a database) return user How would we annotate this function? Without Type[] the best we could do would be: def new_user(user_class: type) -> User: # Same implementation as before This seems reasonable, except that in the following example, mypy doesn’t see that the buyer variable has type ProUser: buyer = new_user(ProUser) buyer.pay() # Rejected, not a method on User However, using Type[] and a type variable with an upper bound (see Type variables with upper bounds) we can do better: U = TypeVar('U', bound=User) def new_user(user_class: Type[U]) -> U: # Same implementation as before Now mypy will infer the correct type of the result when we call new_user() with a specific subclass of User: beginner = new_user(BasicUser) # Inferred type is BasicUser beginner.upgrade() # OK Note The value corresponding to Type[C] must be an actual class object that’s a subtype of C. Its constructor must be compatible with the constructor of C. If C is a type variable, its upper bound must be a class object. For more details about Type[] see PEP 484. Text and AnyStr¶ Sometimes you may want to write a function which will accept only unicode strings. This can be challenging to do in a codebase intended to run in both Python 2 and Python 3 since str means something different in both versions and unicode is not a keyword in Python 3. To help solve this issue, use typing.Text which is aliased to unicode in Python 2 and to str in Python 3. 
This allows you to indicate that a function should accept only unicode strings in a cross-compatible way: from typing import Text def unicode_only(s: Text) -> Text: return s + u'\u2713' In other cases, you may want to write a function that will work with any kind of string but will not let you mix two different string types. To do so use typing.AnyStr: from typing import AnyStr def concat(x: AnyStr, y: AnyStr) -> AnyStr: return x + y concat('a', 'b') # Okay concat(b'a', b'b') # Okay concat('a', b'b') # Error: cannot mix bytes and unicode For more details, see Type variables with value restriction. Note How bytes, str, and unicode are handled between Python 2 and Python 3 may change in future versions of mypy. Generators¶ A basic generator that only yields values can be annotated as having a return type of either Iterator[YieldType] or Iterable[YieldType]. For example: from typing import Iterator def squares(n: int) -> Iterator[int]: for i in range(n): yield i * i If you want your generator to accept values via the send method or return a value, you should use the Generator[YieldType, SendType, ReturnType] generic type instead. For example: from typing import Generator def echo_round() -> Generator[int, float, str]: sent = yield 0 while sent >= 0: sent = yield round(sent) return 'Done' Note that unlike many other generics in the typing module, the SendType of Generator behaves contravariantly, not covariantly or invariantly. If you do not plan on receiving or returning values, then set the SendType or ReturnType to None, as appropriate. For example, we could have annotated the first example as the following: def squares(n: int) -> Generator[int, None, None]: for i in range(n): yield i * i Typing async/await¶ Mypy supports the ability to type coroutines that use the async/await syntax introduced in Python 3.5. For more information regarding coroutines and this new syntax, see PEP 492. Functions defined using async def are typed just like normal functions.
The return type annotation should be the same as the type of the value you expect to get back when await-ing the coroutine. For example: import asyncio async def countdown_1(tag: str, count: int) -> str: while count > 0: print('T-minus {} ({})'.format(count, tag)) await asyncio.sleep(0.1) count -= 1 return "Blastoff!" loop = asyncio.get_event_loop() loop.run_until_complete(countdown_1("Millennium Falcon", 5)) loop.close() The result of calling an async def function without awaiting will be a value of type Awaitable[T]: my_coroutine = countdown_1("Millennium Falcon", 5) reveal_type(my_coroutine) # has type 'Awaitable[str]' If you want to use coroutines in older versions of Python that do not support the async def syntax, you can instead use the @asyncio.coroutine decorator to convert a generator into a coroutine. Note that we set the YieldType of the generator to be Any in the following example. This is because the exact yield type is an implementation detail of the coroutine runner (e.g. the asyncio event loop) and your coroutine shouldn’t have to know or care about what precisely that type is. from typing import Any, Generator import asyncio @asyncio.coroutine def countdown_2(tag: str, count: int) -> Generator[Any, None, str]: while count > 0: print('T-minus {} ({})'.format(count, tag)) yield from asyncio.sleep(0.1) count -= 1 return "Blastoff!" loop = asyncio.get_event_loop() loop.run_until_complete(countdown_2("USS Enterprise", 5)) loop.close() As before, the result of calling a generator decorated with @asyncio.coroutine will be a value of type Awaitable[T]. Note At runtime, you are allowed to add the @asyncio.coroutine decorator to both functions and generators. This is useful when you want to mark a work-in-progress function as a coroutine, but have not yet added yield or yield from statements: import asyncio @asyncio.coroutine def serialize(obj: object) -> str: # todo: add yield/yield from to turn this into a generator return "placeholder" However, mypy currently does not support converting functions into coroutines.
Support for this feature will be added in a future version, but for now, you can manually force the function to be a generator by doing something like this: from typing import Generator import asyncio @asyncio.coroutine def serialize(obj: object) -> Generator[None, None, str]: # todo: add yield/yield from to turn this into a generator if False: yield return "placeholder" You may also choose to create a subclass of Awaitable instead: from typing import Any, Awaitable, Generator import asyncio class MyAwaitable(Awaitable[str]): def __init__(self, tag: str, count: int) -> None: self.tag = tag self.count = count def __await__(self) -> Generator[Any, None, str]: for i in range(self.count, 0, -1): print('T-minus {} ({})'.format(i, self.tag)) yield from asyncio.sleep(0.1) return "Blastoff!" def countdown_3(tag: str, count: int) -> Awaitable[str]: return MyAwaitable(tag, count) loop = asyncio.get_event_loop() loop.run_until_complete(countdown_3("Heart of Gold", 5)) loop.close() To create an iterable coroutine, subclass AsyncIterator: from typing import Optional, AsyncIterator import asyncio class arange(AsyncIterator[int]): def __init__(self, start: int, stop: int, step: int) -> None: self.start = start self.stop = stop self.step = step self.count = start - step def __aiter__(self) -> AsyncIterator[int]: return self async def __anext__(self) -> int: self.count += self.step if self.count == self.stop: raise StopAsyncIteration else: return self.count async def countdown_4(tag: str, n: int) -> str: async for i in arange(n, 0, -1): print('T-minus {} ({})'.format(i, tag)) await asyncio.sleep(0.1) return "Blastoff!" loop = asyncio.get_event_loop() loop.run_until_complete(countdown_4("Serenity", 5)) loop.close() For a more concrete example, the mypy repo has a toy webcrawler that demonstrates how to work with coroutines. One version uses async/await and one uses yield from.
Bjarne Stroustrup’s The C++ Programming Language has a chapter titled “A Tour of C++: The Basics”. That chapter, in section 2.2, covers in half a page the compilation and linking process in C++. Compilation and linking are two very basic processes that happen all the time during C++ software development, but oddly enough, they aren’t well understood by many C++ developers. Why is C++ source code split into header and source files? How is each part seen by the compiler? How does that affect compilation and linking? There are many more questions like these that you may have thought about but have come to accept as convention. Whether you are designing a C++ application, implementing new features for it, trying to address bugs (especially certain strange bugs), or trying to make C and C++ code work together, knowing how compilation and linking work will save you a lot of time and make those tasks much more pleasant. In this article, you will learn exactly that. The article will explain how a C++ compiler works with some of the basic language constructs, answer some common questions related to these processes, and help you work around some related mistakes that developers often make in C++ development. Note: This article has some example source code that can be downloaded. The examples were compiled on a CentOS Linux machine: $ uname -sr Linux 3.10.0-327.36.3.el7.x86_64 Using g++ version: $ g++ --version g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11) The source files provided should be portable to other operating systems, although the Makefiles accompanying them for the automated build process should be portable only to Unix-like systems. The Build Pipeline: Preprocess, Compile, and Link Each C++ source file needs to be compiled into an object file.
The object files resulting from the compilation of multiple source files are then linked into an executable, a shared library, or a static library (the last of these being just an archive of object files). C++ source files generally have the .cpp, .cxx or .cc extension suffixes. A C++ source file can include other files, known as header files, with the #include directive. Header files have extensions like .h, .hpp, or .hxx, or have no extension at all like in the C++ standard library and other libraries’ header files (like Qt). The extension doesn’t matter for the C++ preprocessor, which will literally replace the line containing the #include directive with the entire content of the included file. The first step that the compiler will do on a source file is run the preprocessor on it. Only source files are passed to the compiler (to preprocess and compile it). Header files aren’t passed to the compiler. Instead, they are included from source files. Each header file can be opened multiple times during the preprocessing phase of all source files, depending on how many source files include them, or how many other header files that are included from source files also include them (there can be many levels of indirection). Source files, on the other hand, are opened only once by the compiler (and preprocessor), when they are passed to it. For each C++ source file, the preprocessor will build a translation unit by inserting content in it when it finds an #include directive at the same time that it’ll be stripping code out of the source file and of the headers when it finds conditional compilation blocks whose directive evaluates to false. It’ll also do some other tasks like macro replacements. Once the preprocessor finishes creating that (sometimes huge) translation unit, the compiler starts the compilation phase and produces the object file. 
To obtain that translation unit (the preprocessed source code), the -E option can be passed to the g++ compiler, along with the -o option to specify the desired name of the preprocessed source file. In the cpp-article/hello-world directory, there is a “hello-world.cpp” example file: #include <iostream> int main(int argc, char* argv[]) { std::cout << "Hello world" << std::endl; return 0; } Create the preprocessed file by: $ g++ -E hello-world.cpp -o hello-world.ii And see the number of lines: $ wc -l hello-world.ii 17558 hello-world.ii It has 17,558 lines on my machine. You can also just run make on that directory and it’ll do those steps for you. We can see that the compiler must compile a much larger file than the simple source file that we see. This is because of the included headers. And in our example, we have included just one header. The translation unit becomes bigger and bigger as we keep including headers. This preprocess-and-compile process is similar for the C language. It follows the C rules for compiling, and the way it includes header files and produces object code is nearly the same. How Source Files Import and Export Symbols Let’s see now the files in the cpp-article/symbols/c-vs-cpp-names directory. There is a simple C (not C++) source file named sum.c that exports two functions, one for adding two integers and one for adding two floats: int sumI(int a, int b) { return a + b; } float sumF(float a, float b) { return a + b; } Compile it (or run make and all the steps to create the two example apps to be executed) to create the sum.o object file: $ gcc -c sum.c Now look at the symbols exported and imported by this object file: $ nm sum.o 0000000000000014 T sumF 0000000000000000 T sumI No symbols are imported and two symbols are exported: sumF and sumI. Those symbols are exported as part of the .text segment (T), so they are function names, executable code.
If other (either C or C++) source files want to call those functions, they need to declare them before calling. The standard way to do it is to create a header file that declares them and include it from whatever source file wants to call them. The header can have any name and extension. I chose sum.h: #ifdef __cplusplus extern "C" { #endif int sumI(int a, int b); float sumF(float a, float b); #ifdef __cplusplus } // end extern "C" #endif What are those ifdef/endif conditional compilation blocks? If I include this header from a C source file, I want it to become: int sumI(int a, int b); float sumF(float a, float b); But if I include it from a C++ source file, I want it to become: extern "C" { int sumI(int a, int b); float sumF(float a, float b); } // end extern "C" The C language doesn’t know anything about the extern "C" directive, but C++ does, and it needs this directive applied to C function declarations. This is because C++ mangles function (and method) names because it supports function/method overloading, while C doesn’t. This can be seen in the C++ source file named print.cpp: #include <iostream> // std::cout, std::endl #include "sum.h" // sumI, sumF void printSum(int a, int b) { std::cout << a << " + " << b << " = " << sumI(a, b) << std::endl; } void printSum(float a, float b) { std::cout << a << " + " << b << " = " << sumF(a, b) << std::endl; } extern "C" void printSumInt(int a, int b) { printSum(a, b); } extern "C" void printSumFloat(float a, float b) { printSum(a, b); } There are two functions with the same name (printSum) that only differ in their parameters’ type: int or float. Function overloading is a C++ feature which isn’t present in C.
To implement this feature and differentiate those functions, C++ mangles the function name, as we can see in their exported symbol name (I’ll only pick what’s relevant from nm’s output): $ g++ -c print.cpp $ nm print.o 0000000000000132 T printSumFloat 0000000000000113 T printSumInt U sumF U sumI 0000000000000074 T _Z8printSumff 0000000000000000 T _Z8printSumii U _ZSt4cout Those functions are exported (in my system) as _Z8printSumff for the float version and _Z8printSumii for the int version. Every function name in C++ is mangled unless declared as extern "C". There are two functions that were declared with C linkage in print.cpp: printSumInt and printSumFloat. Therefore, they cannot be overloaded, or their exported names would be the same since they aren’t mangled. I had to differentiate them from each other by postfixing an Int or a Float to the end of their names. Since they are not mangled they can be called from C code, as we’ll soon see. To see the mangled names like we would see them in C++ source code, we can use the -C (demangle) option in the nm command. Again, I’ll only copy the same relevant part of the output: $ nm -C print.o 0000000000000132 T printSumFloat 0000000000000113 T printSumInt U sumF U sumI 0000000000000074 T printSum(float, float) 0000000000000000 T printSum(int, int) U std::cout With this option, instead of _Z8printSumff we see printSum(float, float), and instead of _ZSt4cout we see std::cout, which are more human-friendly names. We also see that our C++ code is calling C code: print.cpp is calling sumI and sumF, which are C functions declared as having C linkage in sum.h. This can be seen in the nm output of print.o above, that informs of some undefined (U) symbols: sumF, sumI and std::cout. Those undefined symbols are supposed to be provided in one of the object files (or libraries) that will be linked together with this object file output in the link phase. 
So far we have just compiled source code into object code, we haven’t yet linked. If we don’t link the object file that contains the definitions for those imported symbols together with this object file, the linker will stop with a “missing symbol” error. Note also that since print.cpp is a C++ source file, compiled with a C++ compiler (g++), all the code in it is compiled as C++ code. Functions with C linkage like printSumInt and printSumFloat are also C++ functions that can use C++ features. Only the names of the symbols are compatible with C, but the code is C++, which can be seen by the fact that both functions are calling an overloaded function ( printSum), which couldn’t happen if printSumInt or printSumFloat were compiled in C. Let’s see now print.hpp, a header file that can be included both from C or C++ source files, which will allow printSumInt and printSumFloat to be called both from C and from C++, and printSum to be called from C++: #ifdef __cplusplus void printSum(int a, int b); void printSum(float a, float b); extern "C" { #endif void printSumInt(int a, int b); void printSumFloat(float a, float b); #ifdef __cplusplus } // end extern "C" #endif If we are including it from a C source file, we just want to see: void printSumInt(int a, int b); void printSumFloat(float a, float b); printSum can’t be seen from C code since its name is mangled, so we don’t have a (standard and portable) way to declare it for C code. Yes, I can declare them as: void _Z8printSumii(int a, int b); void _Z8printSumff(float a, float b); And the linker won’t complain since that’s the exact name that my currently installed compiler invented for it, but I don’t know if it’ll work for your linker (if your compiler generates a different mangled name), or even for the next version of my linker. 
I don’t even know if the call will work as expected because of the existence of different calling conventions (how parameters are passed and return values are returned) that are compiler specific and may be different for C and C++ calls (especially for C++ functions that are member functions and receive the this pointer as a parameter). Your compiler can potentially use one calling convention for regular C++ functions and a different one if they are declared as having extern “C” linkage. So, cheating the compiler by saying that one function uses C calling convention while it actually uses C++ for it can deliver unexpected results if the conventions used for each happen to be different in your compiling toolchain. There are standard ways to mix C and C++ code and a standard way to call C++ overloaded functions from C is to wrap them in functions with C linkage as we did by wrapping printSum with printSumInt and printSumFloat. If we include print.hpp from a C++ source file, the __cplusplus preprocessor macro will be defined and the file will be seen as: void printSum(int a, int b); void printSum(float a, float b); extern "C" { void printSumInt(int a, int b); void printSumFloat(float a, float b); } // end extern "C" This will allow C++ code to call the overloaded function printSum or its wrappers printSumInt and printSumFloat. Now let’s create a C source file containing the main function, which is the entry point for a program. This C main function will call printSumInt and printSumFloat, that is, will call both C++ functions with C linkage. Remember, those are C++ functions (their function bodies execute C++ code) that only don’t have C++ mangled names. 
The file is named c-main.c:

#include "print.hpp"

int main(int argc, char* argv[]) {
    printSumInt(1, 2);
    printSumFloat(1.5f, 2.5f);
    return 0;
}

Compile it to generate the object file:

$ gcc -c c-main.c

And see the imported/exported symbols:

$ nm c-main.o
0000000000000000 T main
                 U printSumFloat
                 U printSumInt

It exports main and imports printSumFloat and printSumInt, as expected. To link it all together into an executable file, we need to use the C++ linker (g++), since at least one file that we'll link, print.o, was compiled as C++:

$ g++ -o c-app sum.o print.o c-main.o

The execution produces the expected result:

$ ./c-app
1 + 2 = 3
1.5 + 2.5 = 4

Now let's try with a C++ main file, named cpp-main.cpp:

#include "print.hpp"

int main(int argc, char* argv[]) {
    printSum(1, 2);
    printSum(1.5f, 2.5f);
    printSumInt(3, 4);
    printSumFloat(3.5f, 4.5f);
    return 0;
}

Compile and see the imported/exported symbols of the cpp-main.o object file:

$ g++ -c cpp-main.cpp
$ nm -C cpp-main.o
0000000000000000 T main
                 U printSumFloat
                 U printSumInt
                 U printSum(float, float)
                 U printSum(int, int)

It exports main and imports the C-linkage printSumFloat and printSumInt, plus both mangled versions of printSum. You may be wondering why the main symbol isn't exported as a mangled symbol like main(int, char**), since this is a C++ source file and main isn't declared as extern "C". Well, main is a special, implementation-defined function, and my implementation seems to have chosen C linkage for it no matter whether it's defined in a C or a C++ source file. Linking and running the program gives the expected result:

$ g++ -o cpp-app sum.o print.o cpp-main.o
$ ./cpp-app
1 + 2 = 3
1.5 + 2.5 = 4
3 + 4 = 7
3.5 + 4.5 = 8

How Header Guards Work

So far, I've been careful not to include my headers twice, directly or indirectly, from the same source file. But since one header can include other headers, the same header can indirectly be included multiple times.
And since header content is just inserted at the place from which it was included, it's easy to end up with duplicated declarations. See the example files in cpp-article/header-guards.

// unguarded.hpp
class A {
public:
    A(int a) : m_a(a) {}
    void setA(int a) { m_a = a; }
    int getA() const { return m_a; }
private:
    int m_a;
};

// guarded.hpp
#ifndef __GUARDED_HPP
#define __GUARDED_HPP
class A {
public:
    A(int a) : m_a(a) {}
    void setA(int a) { m_a = a; }
    int getA() const { return m_a; }
private:
    int m_a;
};
#endif // __GUARDED_HPP

The difference is that, in guarded.hpp, we surround the entire header in a conditional that will only be included if the __GUARDED_HPP preprocessor macro isn't defined. The first time the preprocessor includes this file, the macro won't be defined. But since the macro is defined inside that guarded code, the next time the file is included (from the same source file, directly or indirectly), the preprocessor will see the lines between the #ifndef and the #endif and will discard all the code between them. Note that this process happens for every source file that we compile, so the header can be included once and only once per source file. The fact that it was included from one source file won't prevent it from being included from a different source file when that file is compiled; it'll just prevent it from being included more than once from the same source file.
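The guard mechanism can be demonstrated inside a single translation unit by writing the guarded content twice, which is exactly what the preprocessor sees when a header is included twice (the macro name GUARDED_A_HPP and the helper readA are hypothetical):

```cpp
// First "inclusion": the guard macro isn't defined yet, so the class
// definition is kept and the macro gets defined.
#ifndef GUARDED_A_HPP
#define GUARDED_A_HPP
class A {
public:
    A(int a) : m_a(a) {}
    int getA() const { return m_a; }
private:
    int m_a;
};
#endif

// Second "inclusion": GUARDED_A_HPP is already defined, so the
// preprocessor discards this copy and no redefinition error occurs.
#ifndef GUARDED_A_HPP
#define GUARDED_A_HPP
class A {
public:
    A(int a) : m_a(a) {}
    int getA() const { return m_a; }
private:
    int m_a;
};
#endif

int readA() { return A(5).getA(); }
```

Without the #ifndef/#define pair, the second copy of class A would trigger the same "redefinition of 'class A'" error shown below.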
The example file main-guarded.cpp includes guarded.hpp twice:

#include "guarded.hpp"
#include "guarded.hpp"

int main(int argc, char* argv[]) {
    A a(5);
    a.setA(0);
    return a.getA();
}

But the preprocessed output only shows one definition of class A:

$ g++ -E main-guarded.cpp
# 1 "main-guarded.cpp"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "<command-line>" 2
# 1 "main-guarded.cpp"
# 1 "guarded.hpp" 1
class A {
public:
    A(int a) : m_a(a) {}
    void setA(int a) { m_a = a; }
    int getA() const { return m_a; }
private:
    int m_a;
};
# 2 "main-guarded.cpp" 2
int main(int argc, char* argv[]) {
    A a(5);
    a.setA(0);
    return a.getA();
}

Therefore, it can be compiled without problems:

$ g++ -o guarded main-guarded.cpp

But the main-unguarded.cpp file includes unguarded.hpp twice:

#include "unguarded.hpp"
#include "unguarded.hpp"

int main(int argc, char* argv[]) {
    A a(5);
    a.setA(0);
    return a.getA();
}

And the preprocessed output shows two definitions of class A:

$ g++ -E main-unguarded.cpp
# 1 "main-unguarded.cpp"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "<command-line>" 2
# 1 "main-unguarded.cpp"
# 1 "unguarded.hpp" 1
class A {
public:
    A(int a) : m_a(a) {}
    void setA(int a) { m_a = a; }
    int getA() const { return m_a; }
private:
    int m_a;
};
# 2 "main-unguarded.cpp" 2
# 1 "unguarded.hpp" 1
class A {
public:
    A(int a) : m_a(a) {}
    void setA(int a) { m_a = a; }
    int getA() const { return m_a; }
private:
    int m_a;
};
# 3 "main-unguarded.cpp" 2
int main(int argc, char* argv[]) {
    A a(5);
    a.setA(0);
    return a.getA();
}

This will cause problems when compiling:

$ g++ -o unguarded main-unguarded.cpp
In file included from main-unguarded.cpp:2:0:
unguarded.hpp:1:7: error: redefinition of 'class A'
 class A {
       ^
In file included from main-unguarded.cpp:1:0:
unguarded.hpp:1:7: error: previous definition of 'class A'
 class A {
       ^

For the sake of brevity, I won't use guarded headers in this article when it isn't necessary, since most are
short examples. But always guard your header files. Not your source files, which won't be included from anywhere; just header files.

Pass by Value and Constness of Parameters

Look at the by-value.cpp file in cpp-article/symbols/pass-by:

#include <vector>
#include <numeric>
#include <iostream>
// std::vector, std::accumulate, std::cout, std::endl

using namespace std;

int sum(int a, const int b) {
    cout << "sum(int, const int)" << endl;
    const int c = a + b;
    ++a; // Possible, not const
    // ++b; // Not possible, this would result in a compilation error
    return c;
}

float sum(const float a, float b) {
    cout << "sum(const float, float)" << endl;
    return a + b;
}

int sum(vector<int> v) {
    cout << "sum(vector<int>)" << endl;
    return accumulate(v.begin(), v.end(), 0);
}

float sum(const vector<float> v) {
    cout << "sum(const vector<float>)" << endl;
    return accumulate(v.begin(), v.end(), 0.0f);
}

Since I use the using namespace std directive, I don't have to qualify the names of symbols (functions or classes) inside the std namespace in the rest of the translation unit, which in my case is the rest of the source file. If this were a header file, I shouldn't have inserted this directive, because a header file is supposed to be included from multiple source files: the directive would bring the entire std namespace into the global scope of each source file from the point where it includes my header. Even headers included after mine in those files would have those symbols in scope, which can produce name clashes they weren't expecting. Therefore, don't use this directive in headers. Use it in source files only, if you want, and only after you have included all headers.

Note how some parameters are const. This means they can't be changed in the body of the function; if we try to, we get a compilation error. Also, note that all the parameters in this source file are passed by value, not by reference (&) or by pointer (*).
This means the caller makes a copy of each argument and passes the copy to the function. So it doesn't matter to the caller whether the parameters are const or not: if we modify them in the function body, we only modify the copy, not the original value the caller passed in. Since the constness of a parameter passed by value (copy) doesn't matter to the caller, it isn't mangled into the function signature, as can be seen after compiling and inspecting the object code (only the relevant output):

$ g++ -c by-value.cpp
$ nm -C by-value.o
000000000000001e T sum(float, float)
0000000000000000 T sum(int, int)
0000000000000087 T sum(std::vector<float, std::allocator<float> >)
0000000000000048 T sum(std::vector<int, std::allocator<int> >)

The signatures don't express whether the copied parameters are const inside the bodies of the functions. It doesn't matter there; it mattered in the function definition only, to show at a glance to the reader of the function body whether those values will ever change. In the example, only half of the parameters are declared const so we can see the contrast, but to be const-correct they should all have been declared const, since none of them are modified in the function body (and they shouldn't be). Since it doesn't matter for the function declaration, which is what the caller sees, we can create the by-value.hpp header like this:

#include <vector>

int sum(int a, int b);
float sum(float a, float b);
int sum(std::vector<int> v);
float sum(std::vector<float> v);

Adding the const qualifiers here is allowed (you can even qualify as const variables that aren't const in the definition, and it'll work), but it isn't necessary and only makes the declarations unnecessarily verbose.
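That rule can be checked with a small sketch (sumBoth is a hypothetical name, not from the article's files): a declaration without const and a definition with top-level const refer to the same function, because top-level const on a by-value parameter is not part of the signature.

```cpp
// Declaration, as a caller would see it in a header: no const.
int sumBoth(int a, int b);

// Definition adds top-level const; this still matches the declaration
// above, since the constness of a local copy isn't mangled into the
// symbol name.
int sumBoth(const int a, const int b) {
    return a + b;
}
```

The linker sees a single symbol for sumBoth either way.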
Pass by Reference

Let's see by-reference.cpp:

#include <vector>
#include <iostream>
#include <numeric>

using namespace std;

int sum(const int& a, int& b) {
    cout << "sum(const int&, int&)" << endl;
    const int c = a + b;
    ++b; // Will modify caller variable
    // ++a; // Not allowed, but would also modify caller variable
    return c;
}

float sum(float& a, const float& b) {
    cout << "sum(float&, const float&)" << endl;
    return a + b;
}

int sum(const std::vector<int>& v) {
    cout << "sum(const std::vector<int>&)" << endl;
    return accumulate(v.begin(), v.end(), 0);
}

float sum(const std::vector<float>& v) {
    cout << "sum(const std::vector<float>&)" << endl;
    return accumulate(v.begin(), v.end(), 0.0f);
}

Constness matters to the caller when passing by reference, because it tells the caller whether its argument will be modified by the callee. Therefore, the symbols are exported with their constness:

$ g++ -c by-reference.cpp
$ nm -C by-reference.o
0000000000000051 T sum(float&, float const&)
0000000000000000 T sum(int const&, int&)
00000000000000fe T sum(std::vector<float, std::allocator<float> > const&)
00000000000000a3 T sum(std::vector<int, std::allocator<int> > const&)

That should also be reflected in the header that callers will use:

#include <vector>

int sum(const int&, int&);
float sum(float&, const float&);
int sum(const std::vector<int>&);
float sum(const std::vector<float>&);

Note that I didn't write the names of the parameters in the declarations (in the header) as I'd been doing so far. This is also legal, for this example and for the previous ones: parameter names aren't required in a declaration, since the caller doesn't need to know how you want to name your variables. But parameter names are generally desirable in declarations, so the user can see at a glance what each parameter means and, therefore, what to pass in the call. Surprisingly, parameter names aren't needed in the definition of a function either.
Parameter names are only needed if you actually use the parameter in the function; if you never use it, you can leave the parameter with a type but no name. Why would a function declare a parameter it never uses? Sometimes functions (or methods) are just part of an interface, like a callback interface, which defines certain parameters that are passed to the observer. The observer must declare the callback with all the parameters the interface specifies, since the caller will pass all of them. But the observer may not be interested in all of them, so instead of receiving a compiler warning about an "unused parameter," the function definition can just leave the parameter unnamed.

Pass by Pointer

// by-pointer.cpp
#include <iostream>
#include <vector>
#include <numeric>

using namespace std;

int sum(int const * a, int const * const b) {
    cout << "sum(int const *, int const * const)" << endl;
    const int c = *a + *b;
    // *a = 4; // Can't change. The value pointed to is const.
    // *b = 4; // Can't change. The value pointed to is const.
    a = b; // I can make a point to another const int
    // b = a; // Can't change where b points because the pointer itself is const.
    return c;
}

float sum(float * const a, float * b) {
    cout << "sum(float * const, float *)" << endl;
    return *a + *b;
}

int sum(const std::vector<int>* v) {
    cout << "sum(std::vector<int> const *)" << endl;
    // v->clear(); // I can't modify the const object pointed to by v
    const int c = accumulate(v->begin(), v->end(), 0);
    v = NULL; // I can make v point somewhere else
    return c;
}

float sum(const std::vector<float> * const v) {
    cout << "sum(std::vector<float> const * const)" << endl;
    // v->clear(); // I can't modify the const object pointed to by v
    // v = NULL; // I can't modify where the pointer points
    return accumulate(v->begin(), v->end(), 0.0f);
}

To declare a pointer to a const element (int in the example), you can declare the type as either of:

int const *
const int *

If you also want the pointer itself to be const, that is, the pointer cannot be changed to point to something else, you add a const after the star:

int const * const
const int * const

If you want the pointer itself to be const, but not the element pointed to by it:

int * const

Compare the function signatures with the demangled inspection of the object file:

$ g++ -c by-pointer.cpp
$ nm -C by-pointer.o
000000000000004a T sum(float*, float*)
0000000000000000 T sum(int const*, int const*)
0000000000000105 T sum(std::vector<float, std::allocator<float> > const*)
000000000000009c T sum(std::vector<int, std::allocator<int> > const*)

As you see, the nm tool uses the first notation (const after the type). Also, note that the only constness that is exported, and that matters to the caller, is whether the function will modify the element pointed to by the pointer. The constness of the pointer itself is irrelevant to the caller, since the pointer itself is always passed as a copy: the function can only make its own copy of the pointer point somewhere else, which is irrelevant to the caller.
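The const placements can be checked with a compact sketch (the helper names readOnly, writeThrough, and readFixed are hypothetical):

```cpp
// Pointee const: the int can't be modified through p,
// but p itself could be repointed inside the function.
int readOnly(const int* p) { return *p; }

// Pointer const, pointee mutable: p can't be repointed, but *p can change.
void writeThrough(int* const p) { *p = 42; }

// Both const: neither the pointer nor the pointee can be changed.
int readFixed(const int* const p) { return *p; }
```

Only the pointee's constness would show up in the mangled symbols; the pointer's own constness is local to each function body.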
So, a header file can be created as:

#include <vector>

int sum(int const* a, int const* b);
float sum(float* a, float* b);
int sum(std::vector<int> const* v);
float sum(std::vector<float> const* v);

Passing by pointer is like passing by reference. One difference is that when you pass by reference, the caller is expected and assumed to pass a reference to a valid element, not one pointing to NULL or another invalid address, while a pointer could be NULL, for example. Pointers can be used instead of references when passing NULL has a special meaning.

Since C++11, values can also be passed with move semantics. That topic won't be treated in this article, but it can be studied in other articles, like Argument Passing in C++.

Another related topic that won't be covered here is how to call all those functions. If all those headers are included from a source file but the functions are not called, compilation and linking will succeed. But if you try to call all the functions, there will be errors, because some calls will be ambiguous: the compiler will be able to choose more than one version of sum for certain arguments, especially when choosing between passing by copy or by reference (or const reference). That analysis is out of the scope of this article.

Compiling with Different Flags

Let's see now a real-life situation related to this subject, where hard-to-find bugs can show up.
Go to the directory cpp-article/diff-flags and look at Counters.hpp:

class Counters {
public:
    Counters() :
#ifndef NDEBUG // Enabled in debug builds
        m_debugAllCounters(0),
#endif
        m_counter1(0),
        m_counter2(0)
    { }

    void inc1() {
#ifndef NDEBUG // Enabled in debug builds
        ++m_debugAllCounters;
#endif
        ++m_counter1;
    }

    void inc2() {
#ifndef NDEBUG // Enabled in debug builds
        ++m_debugAllCounters;
#endif
        ++m_counter2;
    }

#ifndef NDEBUG // Enabled in debug builds
    int getDebugAllCounters() { return m_debugAllCounters; }
#endif

    int get1() const { return m_counter1; }
    int get2() const { return m_counter2; }

private:
#ifndef NDEBUG // Enabled in debug builds
    int m_debugAllCounters;
#endif
    int m_counter1;
    int m_counter2;
};

This class has two counters, which start at zero and can be incremented or read. For debug builds, which is how I'll refer to builds where the NDEBUG macro isn't defined, I also add a third counter, which is incremented every time either of the other two counters is incremented. It's a kind of debug helper for this class. Many third-party library classes, and even built-in C++ headers (depending on the compiler), use tricks like this to allow different levels of debugging. This allows debug builds to detect iterators going out of range and other interesting things the library author could think of.
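A stripped-down sketch of the same trick (the Counter struct below is hypothetical, not the article's Counters class): the extra member and the extra code exist only when NDEBUG is not defined, so the class layout itself differs between the two build modes.

```cpp
struct Counter {
#ifndef NDEBUG
    int debugTotal = 0;   // present only in debug builds: changes sizeof(Counter)
#endif
    int value = 0;

    void inc() {
#ifndef NDEBUG
        ++debugTotal;     // extra bookkeeping compiled only into debug builds
#endif
        ++value;
    }
};
```

Compiled with -DNDEBUG, Counter holds a single int; without it, two. Every translation unit that uses Counter must agree on which of the two layouts is in effect, and that agreement is exactly what breaks in the scenario below.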
I'll call release builds "builds where the NDEBUG macro is defined." For release builds, the preprocessed header looks like this (I use grep to remove blank lines):

$ g++ -E -DNDEBUG Counters.hpp | grep -v -e '^$'
# 1 "Counters.hpp"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "<command-line>" 2
# 1 "Counters.hpp"
class Counters {
public:
    Counters() : m_counter1(0), m_counter2(0) { }
    void inc1() { ++m_counter1; }
    void inc2() { ++m_counter2; }
    int get1() const { return m_counter1; }
    int get2() const { return m_counter2; }
private:
    int m_counter1;
    int m_counter2;
};

While for debug builds, it looks like this:

$ g++ -E Counters.hpp | grep -v -e '^$'
# 1 "Counters.hpp"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "<command-line>" 2
# 1 "Counters.hpp"
class Counters {
public:
    Counters() : m_debugAllCounters(0), m_counter1(0), m_counter2(0) { }
    void inc1() { ++m_debugAllCounters; ++m_counter1; }
    void inc2() { ++m_debugAllCounters; ++m_counter2; }
    int getDebugAllCounters() { return m_debugAllCounters; }
    int get1() const { return m_counter1; }
    int get2() const { return m_counter2; }
private:
    int m_debugAllCounters;
    int m_counter1;
    int m_counter2;
};

There is one more counter in debug builds, as I explained earlier. I also created some helper files.
// increment1.hpp
// Forward declaration so I don't have to include the entire header here
class Counters;
void increment1(Counters&);

// increment1.cpp
#include "Counters.hpp"
void increment1(Counters& c) {
    c.inc1();
}

// increment2.hpp
// Forward declaration so I don't have to include the entire header here
class Counters;
void increment2(Counters&);

// increment2.cpp
#include "Counters.hpp"
void increment2(Counters& c) {
    c.inc2();
}

// main.cpp
#include <iostream>
#include "Counters.hpp"
#include "increment1.hpp"
#include "increment2.hpp"

using namespace std;

int main(int argc, char* argv[]) {
    Counters c;
    increment1(c); // 3 times
    increment1(c);
    increment1(c);
    increment2(c); // 4 times
    increment2(c);
    increment2(c);
    increment2(c);
    cout << "c.get1(): " << c.get1() << endl; // Should be 3
    cout << "c.get2(): " << c.get2() << endl; // Should be 4
#ifndef NDEBUG // For debug builds
    cout << "c.getDebugAllCounters(): " << c.getDebugAllCounters() << endl; // Should be 3 + 4 = 7
#endif
    return 0;
}

And a Makefile that can customize the compiler flags for increment1.cpp only:

all: main.o increment1.o increment2.o
	g++ -o diff-flags main.o increment1.o increment2.o

main.o: main.cpp increment1.hpp increment2.hpp Counters.hpp
	g++ -c -O2 main.cpp

increment1.o: increment1.cpp Counters.hpp
	g++ -c $(CFLAGS) -O2 increment1.cpp

increment2.o: increment2.cpp Counters.hpp
	g++ -c -O2 increment2.cpp

clean:
	rm -f *.o diff-flags

So, let's compile it all in debug mode, without defining NDEBUG:

$ CFLAGS='' make
g++ -c -O2 main.cpp
g++ -c -O2 increment1.cpp
g++ -c -O2 increment2.cpp
g++ -o diff-flags main.o increment1.o increment2.o

Now run:

$ ./diff-flags
c.get1(): 3
c.get2(): 4
c.getDebugAllCounters(): 7

The output is just as expected.
Now let's compile just one of the files with NDEBUG defined, which would be release mode, and see what happens:

$ make clean
rm -f *.o diff-flags
$ CFLAGS='-DNDEBUG' make
g++ -c -O2 main.cpp
g++ -c -DNDEBUG -O2 increment1.cpp
g++ -c -O2 increment2.cpp
g++ -o diff-flags main.o increment1.o increment2.o
$ ./diff-flags
c.get1(): 0
c.get2(): 4
c.getDebugAllCounters(): 7

The output isn't as expected. The increment1 function saw a release version of the Counters class, in which there are only two int member fields. So it incremented the first field, thinking it was m_counter1, and didn't increment anything else, since it knows nothing about the m_debugAllCounters field. I say that increment1 incremented the counter because the inc1 method in Counters is inline, so it was inlined into the increment1 function body, not called from it; the compiler probably decided to inline it because the -O2 optimization level flag was used. So, m_counter1 was never incremented, and m_debugAllCounters was incremented in its place by mistake in increment1. That's why we see 0 for m_counter1 but still see 7 for m_debugAllCounters.

I once saw this very kind of bug in a real project, where different libraries of the same application were being compiled with different sets of flags (they weren't default flags, they had been added by hand). We used an IDE to compile, so to see the flags for each library you had to dig into tabs and windows, with different (and multiple) flags for different compilation modes (release, debug, profile...), so it was even harder to notice that the flags weren't consistent. This meant that on the rare occasions when an object file compiled with one set of flags passed a std::vector to an object file compiled with a different set of flags, which did certain operations on that vector, the application crashed. As you can imagine, it wasn't easy to debug, since the crash was reported to happen in the release version and it didn't happen in the debug version (at least not in the same situations that were reported). The debugger also did crazy things, because it was debugging heavily optimized code.
The crashes were happening in correct and trivial code.

The Compiler Does a Lot More Than You May Think

In this article, you have learned about some of the basic language constructs of C++ and how the compiler works with them, from the preprocessing stage to the linking stage. Knowing how this works can help you look at the whole process differently and gives you more insight into steps we take for granted in C++ development. From the three-step compilation process to the mangling of function names and the production of different function signatures in different situations, the compiler does a lot of work to offer the power of C++ as a compiled programming language. I hope you will find the knowledge from this article useful in your C++ projects.
https://www.toptal.com/c-plus-plus/c-plus-plus-understanding-compilation
#include "core/or/or.h"
#include "app/config/config.h"
#include "core/mainloop/connection.h"
#include "core/mainloop/mainloop.h"
#include "core/mainloop/netstatus.h"
#include "core/or/circuitlist.h"
#include "core/or/circuituse.h"
#include "core/or/connection_edge.h"
#include "core/or/policies.h"
#include "core/or/relay.h"
#include "feature/control/control_events.h"
#include "feature/relay/dns.h"
#include "feature/relay/router.h"
#include "feature/relay/routermode.h"
#include "lib/crypt_ops/crypto_rand.h"
#include "lib/evloop/compat_libevent.h"
#include "lib/sandbox/sandbox.h"
#include "core/or/edge_connection_st.h"
#include "core/or/or_circuit_st.h"
#include "ht.h"
#include <event2/event.h>
#include <event2/dns.h>

Go to the source code of this file.

Implements a local cache for DNS results for Tor servers. Definition in file dns.c.

Note that a single test address (one believed to be good) seems to be getting redirected to the same IP as failures are. Definition at line 1863 of file dns.c. References dns_wildcarded_test_address_list, and smartlist_contains_string_case().

Return true iff we have noticed that the dotted-quad ip has been returned in response to requests for nonexistent hostnames. Definition at line 2106 of file dns.c. References dns_wildcard_list, and smartlist_contains_string().

Log an error and abort if any connection waiting for a DNS resolve is corrupted. Definition at line 993 of file dns.c. References assert_connection_ok(), connection_in_array(), connection_t::s, SOCKET_OK, TO_CONN, and tor_assert().

Log an error and abort if conn is waiting for a DNS resolve. Definition at line 966 of file dns.c. Referenced by connection_unlink().

Exit with an assertion if resolve is corrupt. Definition at line 2113 of file dns.c.
References cached_resolve_t::addr_ipv4, cached_resolve_t::address, CACHE_STATE_DONE, CACHE_STATE_PENDING, CACHED_RESOLVE_MAGIC, cached_resolve_t::magic, MAX_ADDRESSLEN, cached_resolve_t::pending_connections, cached_resolve_t::result_ipv4, cached_resolve_t::state, tor_assert(), and tor_strisnonupper().

Hash function for cached_resolve objects. Definition at line 143 of file dns.c. References cached_resolve_t::address.

Return true iff there are no in-flight requests for resolve. Definition at line 376 of file dns.c. References RES_STATUS_INFLIGHT.

Compare two cached_resolve_t pointers by expiry time, and return less-than-zero, zero, or greater-than-zero as appropriate. Used for the priority queue implementation. Definition at line 308 of file dns.c. References cached_resolve_t::expire. Referenced by set_expiry().

Configure eventdns nameservers if force is true, or if the configuration has changed since the last time we called this function, or if we failed on our last attempt. On Unix, this reads from /etc/resolv.conf or options->ServerDNSResolvConfFile; on Windows, this reads from options->ServerDNSResolvConfFile or the registry. Return 0 on success or -1 on failure. Definition at line 1406 of file dns.c. References or_options_t::ServerDNSResolvConfFile, and the_evdns_base. Referenced by dns_init().

Remove conn from the list of connections waiting for conn->address. Definition at line 1012 of file dns.c. References CONN_TYPE_EXIT, EXIT_CONN_STATE_RESOLVING, connection_t::state, tor_assert(), and connection_t::type. Referenced by connection_exit_about_to_close().

Return the number of DNS cache entries as an int. Definition at line 2137 of file dns.c. Referenced by dump_dns_mem_usage().

Helper: Given a TTL from a DNS response, determine what TTL to give the OP that asked us to resolve it, and how long to cache that record ourselves. Definition at line 275 of file dns.c. References MAX_DNS_TTL_AT_EXIT, and MIN_DNS_TTL_AT_EXIT.
Called on the OR side when the eventdns library tells us the outcome of a single DNS resolve: remember the answer, and tell all pending connections about the result of the lookup if the lookup is now done. (address is a NUL-terminated string containing the address to look up; query_type is one of DNS_{IPv4_A,IPv6_AAAA,PTR}; dns_answer is DNS_OK or one of DNS_ERR_*; addr is an IPv4 or IPv6 address if we got one; hostname is a hostname for a PTR request if we got one; and ttl is the time-to-live of this answer, in seconds.) Definition at line 1157 of file dns.c.

Initialize the DNS subsystem; called by the OR process. Definition at line 224 of file dns.c. References configure_nameservers(), and dns_randfn_(). Referenced by retry_dns_callback().

If appropriate, start testing whether our DNS servers tend to lie to us. Definition at line 2044 of file dns.c. References dns_launch_wildcard_checks().

Launch DNS requests for a few nonexistent hostnames and a few well-known hostnames, and see if we can catch our nameserver trying to hijack them and map them to a stupid "I couldn't find ggoogle.com but maybe you'd like to buy these lovely encyclopedias" page. Definition at line 2013 of file dns.c. Referenced by dns_launch_correctness_checks().

Helper: passed to eventdns.c as a callback so it can generate random numbers for transaction IDs and 0x20-hack coding. Definition at line 217 of file dns.c. Referenced by dns_init().

Called when DNS-related options change (or may have changed). Returns -1 on failure, 0 on success. Definition at line 238 of file dns.c. References the_evdns_base.

Forget what we've previously learned about our DNS servers' correctness. Definition at line 2080 of file dns.c. Referenced by dns_servers_relaunch_checks().

See if we have a cache entry for exitconn->address. If so, and the resolve is valid, put it into exitconn->addr and return 1. If the resolve failed, free exitconn and return -1.
(For EXIT_PURPOSE_RESOLVE connections, send back a RESOLVED error cell on returning -1. For EXIT_PURPOSE_CONNECT connections, there's no need to send back an END cell, since connection_exit_begin_conn will do that for us.)

If we have a cached answer, send the answer back along exitconn's circuit. Else, if seen before and pending, add conn to the pending list, and return 0. Else, if not seen before, add conn to pending list, hand to dns farm, and return 0.

Exitconn's on_circuit field must be set, but exitconn should not yet be linked onto the n_streams/resolving_streams list of that circuit. On success, link the connection to n_streams if it's an exit connection. On "pending", link the connection to resolving streams. Otherwise, clear its on_circuit field. Definition at line 634 of file dns.c. References EXIT_PURPOSE_RESOLVE, edge_connection_t::on_circuit, connection_t::purpose, and TO_OR_CIRCUIT().

Return true iff our DNS servers lie to us too much to be trusted. Definition at line 2066 of file dns.c. References dns_is_completely_invalid.

Log memory information about our internal DNS cache at level 'severity'. Definition at line 2152 of file dns.c. References dns_cache_entry_count(). Referenced by dumpmemusage().

Helper: called by eventdns when eventdns wants to log something. Definition at line 162 of file dns.c. References LOG_INFO, LOG_WARN, and strcmpstart().

Helper: free storage held by an entry in the DNS cache. Definition at line 289 of file dns.c. References cached_resolve_t::magic, cached_resolve_t::pending_connections, RES_STATUS_DONE_OK, and tor_free.

Return true iff the most recent attempt to initialize the DNS subsystem failed. Definition at line 266 of file dns.c. References nameserver_config_failed. Referenced by retry_dns_callback().

Hash table of cached_resolve objects. Definition at line 122 of file dns.c. References cached_resolve_t::address, assert_resolve_ok(), and MAX_ADDRESSLEN.
Given a pending cached_resolve_t that we just finished resolving, inform every connection that was waiting for the outcome of that resolution. Do this by sending a RELAY_RESOLVED cell (if the pending stream had sent us a RELAY_RESOLVE cell), or by launching an exit connection (if the pending stream had sent us a RELAY_BEGIN cell). Definition at line 1210 of file dns.c. References assert_connection_ok(), connection_edge_end(), EXIT_CONN_STATE_RESOLVEFAILED, EXIT_PURPOSE_CONNECT, connection_t::marked_for_close, cached_resolve_t::pending_connections, connection_t::purpose, connection_t::state, TO_CONN, and tor_free.

Return true iff address is one of the addresses we use to verify that well-known sites aren't being hijacked by our DNS servers. Definition at line 1140 of file dns.c. References or_options_t::ServerDNSTestAddresses, and smartlist_contains_string_case().

Launch a single request for a nonexistent hostname consisting of between min_len and max_len random (plausible) characters followed by suffix. Definition at line 1939 of file dns.c. References crypto_random_hostname().

Remove a pending cached_resolve_t from the hashtable, and add a corresponding cached cached_resolve_t. This function is only necessary because of the perversity of our cache timeout code; see inline comment for ideas on eliminating it. Definition at line 1294 of file dns.c. References CACHE_STATE_DONE, and cached_resolve_t::state.

Send a response to the RESOLVE request of a connection. answer_type must be one of RESOLVED_TYPE_(AUTO|ERROR|ERROR_TRANSIENT|). If circ is provided, and we have a cached answer, send the answer back along circ; otherwise, send the answer back along conn's attached circuit. Definition at line 515 of file dns.c.

Send a response to the RESOLVE request of a connection for an in-addr.arpa address on connection conn which yielded the result hostname. The answer type will be RESOLVED_HOSTNAME.
If circ is provided, and we have a cached answer, send the answer back along circ; otherwise, send the answer back along conn's attached circuit.

Definition at line 582 of file dns.c.

Helper function for dns_resolve: same functionality, but does not handle: Return -2 on a transient error. If it's a reverse resolve and it's successful, sets *hostname_out to a newly allocated string holding the cached reverse DNS value. Set *made_connection_pending_out to true if we have placed exitconn on the list of pending connections for some resolve; set it to false otherwise. Set *resolve_out to a cached resolve, if we found one.

Definition at line 715 of file dns.c.

Given an exit connection exitconn, and a cached_resolve_t resolve whose DNS lookups have all either succeeded or failed, update the appropriate fields (address_ttl and addr) of exitconn. If this is a reverse lookup, set *hostname_out to a newly allocated copy of the resulting hostname. Return -2 on a transient error, -1 on a permanent error, and 1 on a successful lookup.

Definition at line 868 of file dns.c.

Return the number of configured nameservers in the_evdns_base.

Definition at line 1367 of file dns.c.

References the_evdns_base.

Set an expiry time for a cached_resolve_t, and add it to the expiry priority queue.

Definition at line 386 of file dns.c.

References cached_resolve_pqueue, compare_cached_resolves_by_expiry_(), cached_resolve_t::expire, smartlist_pqueue_add(), and tor_assert().

Called when we see id (a dotted quad or IPv6 address) in response to a request for a hopefully bogus address.

Definition at line 1831 of file dns.c.

References dns_wildcard_response_count.

Priority queue of cached_resolve_t objects to let us know when they will expire.

Definition at line 321 of file dns.c.

Referenced by set_expiry().

True iff all addresses seem to be getting wildcarded.

Definition at line 1826 of file dns.c.

Referenced by dns_seems_to_be_broken().
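The expiry priority queue that set_expiry() feeds can be modeled with a min-heap. This is a hedged Python sketch, not the smartlist_pqueue_add()-based C code; ExpiryQueue and its method names are invented:

```python
import heapq
import time

class ExpiryQueue:
    """Min-heap of (expire_time, entry) pairs, mirroring the cached_resolve
    priority queue: set_expiry pushes an entry, and a periodic sweep pops
    every entry whose expiry has passed."""
    def __init__(self):
        self._heap = []

    def set_expiry(self, resolve, ttl, now=None):
        now = time.time() if now is None else now
        resolve["expire"] = now + ttl
        # id() breaks ties so dicts are never compared directly.
        heapq.heappush(self._heap, (resolve["expire"], id(resolve), resolve))

    def expire_due(self, now):
        """Pop and return every entry whose expiry time is <= now."""
        expired = []
        while self._heap and self._heap[0][0] <= now:
            expired.append(heapq.heappop(self._heap)[2])
        return expired
```

The heap keeps the soonest-to-expire entry at the front, so the sweep touches only entries that are actually due.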
If present, a list of dotted-quad IP addresses that we are pretty sure our nameserver wants to return in response to requests for nonexistent domains.

Definition at line 1812 of file dns.c.

Referenced by answer_is_wildcarded().

Map from a dotted-quad IP address in a response to an int holding how many times we've seen it for a randomly generated (hopefully bogus) address. It would be easier to use definitely-invalid addresses (as specified by RFC 2606), but see the comment in dns_launch_wildcard_checks().

Definition at line 1807 of file dns.c.

Referenced by wildcard_increment_answer().

List of supposedly good addresses that are getting wildcarded to the same addresses as nonexistent addresses.

Definition at line 1822 of file dns.c.

Referenced by add_wildcarded_test_address().

Did our most recent attempt to configure nameservers with eventdns fail?

Definition at line 92 of file dns.c.

Referenced by has_dns_init_failed().

Our evdns_base; this structure handles all our name lookups.

Definition at line 87 of file dns.c.

Referenced by configure_nameservers(), dns_reset(), and number_of_configured_nameservers().
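The wildcard-detection bookkeeping (dns_wildcard_response_count maps each answer IP to how often it came back for bogus probes) might be modeled roughly as follows; the class name and the threshold logic are illustrative, not Tor's actual heuristic:

```python
from collections import Counter

class WildcardDetector:
    """Counts how often each IP address is returned for randomly generated,
    hopefully bogus hostnames. If one IP keeps coming back, the nameserver
    is probably wildcarding nonexistent domains."""
    def __init__(self, threshold):
        self.counts = Counter()   # ip -> times seen for a bogus name
        self.threshold = threshold

    def record_bogus_answer(self, ip):
        self.counts[ip] += 1

    def seems_wildcarded(self):
        if not self.counts:
            return False
        _, hits = self.counts.most_common(1)[0]
        return hits >= self.threshold
```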
international tax issues and reporting requirements

EisnerAmper 2012 personal tax guide

Foreign income exclusions and foreign tax credits can significantly reduce the taxes you pay on foreign sourced income and help you avoid double taxation. Complex reporting is required for U.S. persons to disclose foreign holdings and bank accounts, and to avoid severe penalties for non-compliance.

FOREIGN TAX ISSUES

With globalization, multinational clients with cross-border income from employment and investments are in today's mainstream. Many taxpayers are discovering that they are subject to taxation in both U.S. and foreign tax jurisdictions. Not all U.S. citizens and resident aliens are aware of their obligation to report their worldwide income to the Internal Revenue Service. As a result, the United States continues to pursue U.S. persons who fail to report income and file certain tax forms. These complex issues not only affect you if you are on an overseas assignment or retired abroad, but have broad-reaching implications even if you have never left the United States. For instance, these issues arise if you invest in hedge funds, private equity funds, and other entities that own interests in foreign operating businesses or invest in foreign securities.

Significant legislation enacted in 2010 imposes a new U.S. withholding regime for income earned by non-U.S. persons (effective beginning in 2013) and tightens the reporting requirements for offshore accounts and entities set up in foreign jurisdictions, including increased disclosure of beneficial owners, reporting of the transfer of assets, and imposition of a punitive penalty regime for not reporting transactions with foreign trusts.

This chapter is intended to provide an overview of the income exclusions, foreign tax credits, reporting requirements, and elections involving foreign employment and investments.
A section dedicated to U.S. taxation of non-resident individuals is featured in this year's guide. However, it does not consider the special tax elections associated with a foreigner's move to the United States or foreign currency transactions.

FOREIGN EARNED INCOME EXCLUSION AND FOREIGN HOUSING EXCLUSION/DEDUCTION

In general, the worldwide income of a U.S. citizen or resident who is working abroad is subject to the same income tax and return filing requirements that apply to U.S. citizens or residents living in the United States. However, if you are working abroad, you may qualify for one or more special tax benefits:

- Exclude up to $92,900 in 2011 and $95,100 in 2012 in foreign earned income.
- Exclude part, or all, of any housing income reimbursements you receive, or deduct part, or all, of any housing costs paid (i.e., for taxpayers having salary or self-employment earnings).
- Claim a foreign tax credit against your U.S. tax liability for income taxes you pay to a foreign country, or alternatively, take an itemized deduction for the taxes paid if more beneficial.
- Reduce your overall tax liability under tax treaties that the U.S. has with foreign countries.

Tax Tip 26: tax benefits of the foreign earned income and housing exclusions

For example, your company sends you to work in Dubai in 2011 for several years, so you qualify as a bona fide resident of the UAE based on your time spent in Dubai. Assume you earn $500,000 per year and your company reimburses you for $125,000 of housing costs which are taxable to you. You would be able to exclude the following income from your U.S. income tax return:

- $92,900 of your salary.
- $42,310 of the housing expense reimbursements.

The 2011 foreign earned income maximum is $92,900, regardless of which foreign country you are working in. The housing exclusion is based on which country and city you are living in (see Chart 13 for some of the more common foreign cities).
Dubai is considered to be an expensive city to live in, so the annual housing exclusion amount is $57,174. Of this amount, you are not eligible to exclude $40.72 per day, or $14,864 for a full year. Therefore, your 2011 housing exclusion will be $42,310 ($57,174 - $14,864). When added to your foreign earned income exclusion of $92,900, you can exclude a total of $135,210. Therefore, you will be taxed in the United States on $489,790 related to your employment in Dubai ($500,000 compensation plus $125,000 housing cost reimbursements, less the exclusions of $135,210).

Note: To the extent you pay income taxes to the foreign country, you may also be eligible to receive a foreign tax credit against the U.S. tax imposed on the remaining income. However, only 78.36% of these taxes will be allowable as a foreign tax credit that can offset your U.S. income tax (i.e., only $489,790 of the total $625,000 of income will be subject to tax: $489,790 divided by $625,000 is 78.36%).

As you can see in Tax Tip 26, your foreign housing exclusion might be limited depending on where you live. To see the differences in limits for housing deductions in 2011, see Chart 13 below.

Chart 13: foreign housing exclusions

The amount of foreign housing costs that you can exclude from your 2011 U.S. income tax return depends on both the country and city you are living in. Listed below are the maximum amounts you can exclude for some common foreign cities, before the adjustment for the daily living cost of $40.72 per day, or $14,864 for a full year.
Country                  City         Maximum Annual Housing Exclusion
Canada                   Toronto      $ 51,100
China                    Hong Kong    $114,300
China                    Beijing      $ 71,200
France                   Paris        $ 84,800
Germany                  Berlin       $ 50,800
India                    New Delhi    $ 30,252
Italy                    Rome         $ 56,500
Japan                    Tokyo        $118,500
Russia                   Moscow       $108,000
Switzerland              Zurich       $ 39,219
United Arab Emirates     Dubai        $ 57,174
United Kingdom           London       $ 83,400

A further benefit: you may take an exemption from paying Social Security tax in the foreign country, based on a Totalization Agreement the U.S. has with some foreign countries to eliminate dual coverage for the same work. You will still be required to pay U.S. Social Security and Medicare tax on such income.

To qualify for the foreign earned income exclusion and the foreign housing exclusion, you must establish a tax home in a foreign country and meet either the bona fide residence or physical presence test, defined below:

Bona fide residence test. To qualify under this test, you must establish residency in a foreign country for an uninterrupted period that includes an entire calendar year. Brief trips outside the foreign country will not risk your status as a bona fide resident, as long as the trips are brief and there is an intent to return to the foreign country.

Physical presence test. This test requires you to be physically present in a foreign country for at least 330 full days in a consecutive 12-month period, but not necessarily a calendar year period.

Planning Tip. In certain circumstances it may be more beneficial to forego the exclusion in favor of claiming only a foreign tax credit. If you pay no foreign tax, or the effective tax rate in the foreign jurisdiction is lower than the U.S. tax rate, claiming the exclusion will generally lower the U.S. income tax liability. On the other hand, if the foreign jurisdiction imposes tax at a higher effective rate than the U.S., it is likely that the U.S.
tax on the foreign earned income will be completely offset by the foreign tax credit regardless of whether the exclusion is claimed. You should consider whether foregoing the exclusion may result in a lower utilization of foreign tax credits in the current year, so that a larger amount of foreign tax credits can be carried back or forward for utilization in other years. You should also consider whether the foreign earned income exclusion and housing exclusion election will mitigate your state tax burden, to the extent that you have ceased to be a state resident and you remain taxable on worldwide income in the state of residency.

Claiming the exclusion is a binding election. Once you have claimed the benefit of the exclusion in a tax year, you will be required to continue to claim it in all future years. You will be able to revoke the election, but having done so, you will not be allowed to claim the exclusion again until the sixth tax year after the year of revocation, unless you receive permission from the IRS. If you have claimed the exclusion in the past, the benefit of revoking the exclusion must be weighed against the possible ramifications of being unable to re-elect the exclusion for five years. There is no downside to forgoing the exclusion if you have never claimed it in the past.

FOREIGN TAX CREDIT

A foreign tax credit may be claimed by U.S. citizens, resident aliens, and in certain cases by nonresident aliens. Typically, states do not allow foreign tax to offset state income tax liability. Unlike the exclusions discussed above, you do not need to live or work in a foreign country in order to be eligible to claim the foreign tax credit. If you pay, or accrue, foreign taxes on foreign sourced income, you may be eligible for the credit.
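The exclusion and credit-scaling arithmetic from Tax Tip 26 earlier can be reproduced with a few lines of Python, as a worked check of the quoted 2011 figures:

```python
# Worked check of the Tax Tip 26 numbers (2011 amounts from the guide).
salary = 500_000
housing_reimbursement = 125_000
earned_income_exclusion = 92_900   # 2011 foreign earned income maximum
housing_limit_dubai = 57_174       # Dubai limit from Chart 13
base_housing_amount = 14_864       # $40.72/day for a full year

housing_exclusion = housing_limit_dubai - base_housing_amount    # 42,310
total_exclusion = earned_income_exclusion + housing_exclusion    # 135,210
taxable = salary + housing_reimbursement - total_exclusion       # 489,790

# Fraction of foreign taxes allowable as a credit: taxable income over
# total income. Computes to about 0.7837; the guide states 78.36%.
creditable_fraction = taxable / (salary + housing_reimbursement)
```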
Common examples of foreign sourced income that may generate foreign taxes include dividends paid by foreign corporations, including those paid on your behalf through mutual funds, and foreign business income earned by a flow-through entity.

You are entitled to claim either a tax credit or an itemized deduction for taxes paid to foreign countries. Though not always the case, the tax credit is generally more beneficial since it reduces your U.S. federal tax liability on a dollar-for-dollar basis. Generally, only foreign income taxes qualify for the foreign tax credit. Other taxes, such as foreign real and personal property taxes, do not qualify. However, these other taxes may still be deductible as itemized deductions on your U.S. income tax return. There are other situations which may prevent you from taking a foreign tax credit:

- Taxes paid on income excluded from U.S. gross income (e.g., under the foreign earned income exclusion).
- Taxes paid to countries involved in international boycott operations.
- Taxes of U.S. persons controlling foreign corporations and partnerships if certain annual international returns are not filed.
- Certain taxes paid on foreign oil-related, mineral, and oil and gas extraction income.

Your ability to claim a credit for the full amount of foreign taxes paid or accrued is limited based on a ratio of your foreign source taxable income to your total taxable income. This ratio is applied to your actual tax before the credit to determine the maximum amount of the credit that you can claim. If you are not able to claim the full amount of the credit in the current year, you can carry the excess back to the immediately preceding tax year, or forward for the next 10 tax years, subject to a similar limitation in those years. The credit calculation is done for each separate type of foreign source income. In other words, foreign taxes paid on dividends are subject to a separate limitation from foreign taxes paid on income from an active trade or business.
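The limitation just described, a credit capped at U.S. tax times the ratio of foreign source taxable income to total taxable income, with any excess carried back one year or forward up to ten, can be sketched as follows. The function name and the illustrative numbers are hypothetical:

```python
def foreign_tax_credit_allowed(us_tax_before_credit, foreign_taxable,
                               total_taxable, foreign_tax_paid):
    """Per-basket foreign tax credit limitation (simplified sketch).

    The credit is capped at U.S. tax times foreign/total taxable income;
    anything above the cap becomes a carryback/carryforward amount."""
    limit = us_tax_before_credit * foreign_taxable / total_taxable
    allowed = min(foreign_tax_paid, limit)
    carryover = foreign_tax_paid - allowed
    return allowed, carryover
```

For example, with $100,000 of U.S. tax, $40,000 of foreign taxable income out of $200,000 total, and $30,000 of foreign tax paid, the cap is $20,000 and $10,000 carries over.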
Foreign sourced income is classified into two different baskets for determining the allowable credit:

- Passive income: This category includes dividends, interest, rents, royalties, and annuities.
- General limitation income: This category includes income from foreign sources which does not fall into the passive separate limitation category, and generally is income earned from salary, pensions or an active trade or business.

Beginning in 2012, you will be required to maintain a separate foreign tax credit limitation basket for each country in which income is resourced under an income tax treaty. This provision will apply to income classified as U.S. sourced income under U.S. tax law, but treated as foreign sourced income under an income tax treaty resourcing article (an example would be the United Kingdom).

EXPATRIATION EXIT TAX

If you plan on giving up your U.S. citizenship or relinquishing your U.S. legal permanent residency status ("green card") and are considered a covered expatriate, you will pay an income tax at the capital gains rate as though you had sold all of your assets at their fair market value on the day before the expatriation date, and any gain on the deemed sale in excess of a floor, $636,000 for 2011 and $651,000 for 2012, is immediately taxed (the "mark-to-market" tax).
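The deemed-sale mechanics can be illustrated with a small sketch using the 2011 floor. The function name and inputs are hypothetical, and real covered-expatriate calculations involve asset-by-asset rules not modeled here:

```python
def exit_tax_gain(deemed_gains, floor=636_000):
    """Mark-to-market sketch: net gain on the deemed sale of all assets
    on the day before expatriation, taxed only above the floor.
    Losses are taken into account (the wash sale rules do not apply)."""
    net_gain = sum(deemed_gains)        # gains and losses net together
    return max(0, net_gain - floor)     # only the excess over the floor is taxed
```

So deemed gains of $500,000 and $300,000 leave $164,000 subject to the tax in 2011, while net gains below the floor produce no immediate tax.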
The tax applies to an expatriate or former long-term resident (i.e., holder of a U.S. green card for eight out of the last 15 years) who: Had average annual net income tax liability for the five years ending before the date of expatriation or termination of residency in excess of an annual ceiling, which for 2011 is $147,000 and $151,000 for 2012; Had a net worth of $2 million or more when citizenship or residency ended; or Lawful permanent residence (green card test); or Substantial presence test. If you are physically present in the United States for at least 31 days during 2011 and have spent 183 days during the period of 2011, 2010, and 2009 counting all of the days of physical presence in 2011, but only 1 /3 of the days of presence in 2010, and only 1 /6 of the number of days in 2009 you will be deemed a resident for U.S. tax purposes. You are treated as being present in the U.S. on any day that you are physically present in the country at any time during the day, though time spent in the U.S for the following circumstances do not count: 1. Days you regularly commute to work in the United States from a residence in Canada or Mexico. 2. Days you were in the United States for less than 24 hours when you were traveling between two places outside the United States. international tax issues and reporting requirements 91 Fails to certify compliance under penalties of perjury on Form 8854, Initial and Annual Expatriation Statement, with all U.S. federal tax obligations for the five tax years preceding the date of expatriation. A U.S. citizen or resident will have to pay tax on a gift or bequest received from an individual who had expatriated after June 17, The tax does not apply to the extent that the gift or bequest during the year is within the annual gift tax exclusion ($13,000 for 2011 and 2012). 
The tax does not apply if the transfer is reported on a timely filed gift tax return or estate tax return or to transfers that qualify for the marital or charitable deductions. The value of a transfer not covered by an exception is taxable to the recipient at the highest rate on taxable gifts, which is 35% for U.S. INCOME TAXATION OF NONRESIDENT INDIVIDUALS Residents and non-residents are taxed differently for U.S. tax purposes. Resident aliens are taxed on worldwide income at graduated tax rates much the same as a U.S. citizen. A non-resident alien, however, is taxed at graduated rates only on income that is effectively connected with a U.S. trade or business or at a 30% rate on U.S. source income that is not effectively connected with a U.S. trade or business (unless a lower income tax treaty rate applies). A foreign national is deemed a resident alien of the U.S. if one of the two following tests are met: (e.g., foreign government related individual, teacher or trainee, student or a professional athlete competing in a charitable sporting event). Note: If you qualify to exclude days of presence in the United States because you were an exempt individual (other than a foreign government-related individual) or because of a medical condition or medical problem, you must file Form 8843, Statement for Exempt Individuals and Individuals With a Medical Condition. Even though you would otherwise meet the substantial presence test, you will not be treated as a U.S. resident for 2011 if: You were present in the United States for fewer than 183 days during the calendar year in question, You establish that during that calendar year, you had a tax home in a foreign country, and You establish that during the calendar year, you had a closer 6 92 EisnerAmper 2012 personal tax guide connection to one foreign country in which you had a tax home than to the United States, unless you had a closer connection to two foreign countries. 
You will be considered to have a closer connection to a foreign country other than to the United States if you or the IRS establishes that you have maintained more significant contacts with the foreign country than with the United States. IRS Form 8840, Closer Connection Exception Statement for Aliens, will need to be submitted with your U.S. non-resident income tax return for the year in which you meet the physical presence test and you are exempt from it because you also meet the closer connection test. Alternatively, you may be considered a non-resident if you also would qualify as a resident of your home jurisdiction under the Tie Breaker Clause of an income tax treaty with the U.S. There are certain elections available to non-residents who move to the United States that when considered could minimize global taxation. These elections are beyond the scope of this chapter. U.S. REPORTING REQUIREMENTS FOR NON-RESIDENT ALIENS Form 1040NR / 1040NR-EZ This tax form is used by non-residents of the U.S. to report on an annual basis the income received from U.S. sources and the payments of U.S. tax, made either through withholding by the payor or through estimated tax payments. The U.S. tax liability for the year is computed and any tax due in excess of payments made during the year is remitted to the U.S. Treasury. A U.S. non-resident may be subject to state income tax on the income earned in that jurisdiction. Form 1042-S If you are a foreign national, classified as a nonresident of the U.S. and receive payments from U.S. sources, you will receive Form 1042-S. This is the annual information return prepared by the payor to report to you and the IRS each foreign recipient s name, address, amount and type of income paid and taxes withheld, if any. This form is normally distributed no later than March 15 of the following year. If the recipient of the income is a U.S. person, a 1099 form would be issued instead; Forms 1099 are due to be received by U.S. 
persons no later than January 31 of the following year.

Form W-8

This form is provided by a foreign recipient to a payor to certify the recipient's tax residency and status as beneficial owner of the income paid. If applicable, this form should be completed to claim the benefits of an income tax treaty.

FOREIGN REPORTING REQUIREMENTS FOR U.S. CITIZENS AND RESIDENTS

There are many IRS tax forms that must be completed and attached to your tax return to disclose foreign holdings and to make elections that could prove valuable to you in the future. As more and more of your investments include foreign holdings, whether held directly by you or through a pass-through entity such as an investment partnership or hedge fund, your reporting requirements increase. These requirements place an additional burden on the amount of information that you must include with your income tax return. Failure to do so could result in substantial penalties and the loss of beneficial tax elections. The most common of these forms are:

- Form TDF, Report of Foreign Bank and Financial Accounts.
- Form 8621, Return by a Shareholder of a Passive Foreign Investment Company (PFIC) or Qualified Electing Fund (QEF).
- Form 926, Return by a U.S. Transferor of Property to a Foreign Corporation.
- Form 8865, Return of U.S. Persons With Respect to Certain Foreign Partnerships.
- Form 5471, Information Return of U.S. Persons With Respect To Certain Foreign Corporations.
- Form 3520, Annual Return To Report Transactions With Foreign Trusts and Receipt of Certain Foreign Gifts.
- Form 3520-A, Annual Information Return of Foreign Trust With a U.S. Owner.

FORM TDF, REPORT OF FOREIGN BANK AND FINANCIAL ACCOUNTS

If you are a United States person (including a corporation, partnership, exempt organization, trust or estate) and have a financial interest in or signature authority over a foreign financial account, you are subject to a reporting requirement on Form TDF, Report of Foreign Bank and Financial Accounts ("FBAR").
The FBAR must be filed on an annual basis if you have a financial interest in or signature authority over a financial account(s) in a foreign country with an aggregate value of the financial account(s) exceeding $10,000 at any time during the year. The report is due on June 30 of each succeeding year. There are no extensions available for filing the form.

Financial accounts include any bank, securities, derivatives, foreign mutual funds or other financial accounts (including any savings, demand, checking, deposit, annuity, or life insurance contract or other account maintained with a financial institution). The IRS issued final regulations suspending the reporting of offshore commingled funds, such as hedge funds and private equity funds.

A financial interest in an account includes being the owner of record or having legal title, even if acting as an agent, nominee, or in some other capacity on behalf of a United States person. A financial interest also includes an account held by a corporation in which you own, directly or indirectly, more than 50% of the total voting power or value of shares; a partnership in which you own an interest of more than 50% in the capital or profits; or a trust as to which you or any other United States person has a present beneficial interest in more than 50% of the assets or receives more than 50% of the current income.

In the case of a non-willful failure to file the FBAR, the IRS may impose a maximum penalty of $10,000 per account. The maximum penalty imposed where there is willfulness is the greater of $100,000 or 50% of the highest balance in the account during the year. Criminal penalties could also be assessed for willful violations.

In 2011, the IRS offered an Offshore Voluntary Disclosure Initiative ("OVDI") that provided an opportunity for taxpayers to come forward and disclose unreported foreign income and file informational returns while paying a lower penalty and avoiding criminal prosecution.
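The filing threshold and the willful penalty cap described above can be sketched as follows. This is a simplification: the statute measures the aggregate value on any single day, while this sketch conservatively sums each account's peak balance; the function names are invented:

```python
def fbar_required(account_peaks, threshold=10_000):
    """Rough FBAR trigger: filing is required if the aggregate value of all
    foreign accounts exceeded $10,000 at any time during the year.
    Summing per-account peaks overstates the true same-day aggregate."""
    return sum(account_peaks) > threshold

def willful_penalty_cap(highest_balance):
    """Maximum willful-failure penalty: the greater of $100,000 or 50% of
    the highest balance in the account during the year."""
    return max(100_000, highest_balance * 0.5)
```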
Although this initiative has come to an end, the IRS has a longstanding voluntary disclosure program available to taxpayers who wish to voluntarily disclose foreign financial accounts, income or assets that have previously not been reported. As of the date of publishing, there is no indication that a special voluntary disclosure program similar to the 2011 OVDI will be offered by the IRS in the near future.

FORM 8621, RETURN BY A SHAREHOLDER OF A PASSIVE FOREIGN INVESTMENT COMPANY (PFIC) OR QUALIFIED ELECTING FUND (QEF)

Any U.S. person who invests in a foreign corporation which is a passive foreign investment company ("PFIC") must pay tax on gains from the sale of the investment or on certain distributions from the PFIC (a "triggering event"), unless a qualified electing fund ("QEF") election or mark-to-market election is made. If neither of these two elections is made, the PFIC rules require a ratable allocation of any gain over the years during which the shares were held, and that gain is taxed at the highest rate on ordinary income in effect for each of the years involved, rather than the beneficial long-term capital gains rate in the year of disposition. An interest charge is also imposed on the tax, and begins running from the period to which such gain is allocated. In certain situations, this tax can exceed 100% of the gain.

Classification as a PFIC occurs when 75% or more of the corporation's income is passive or when more than 50% of the corporation's assets generate passive income. Passive income includes, but is not limited to, interest, dividends, and capital gains.

A U.S. shareholder who makes the QEF election on Form 8621 is required to annually include in income the pro rata share of the ordinary earnings and net capital gains of the corporation, whether or not distributed, but can avoid the onerous PFIC tax.

Alternatively, a shareholder of a PFIC may make a mark-to-market election on Form 8621 for marketable PFIC stock. If the election is made, the shareholder includes in income each year an amount equal to the excess, if any, of the fair market value of the PFIC stock as of the close of the tax year over the shareholder's adjusted basis in the stock, or deducts the excess of the PFIC stock's adjusted basis over its fair market value at the close of the tax year (the deduction is limited to prior cumulative income pickups). If the election is made, the PFIC rules above do not apply. Amounts included in income or deducted under the mark-to-market election, as well as gain or loss on the actual sale or other disposition of the PFIC stock, are treated as ordinary income or loss.

Taxpayers owning PFICs are now required to file Form 8621 regardless of whether a triggering event has occurred or an election has been made. However, the IRS issued guidance suspending the information reporting requirements for tax years beginning after March 17, 2010 until the IRS releases a revised Form 8621. PFIC shareholders will be required to attach the form for the suspended tax year to the following year's income tax return.

FORM 926, RETURN BY A U.S. TRANSFEROR OF PROPERTY TO A FOREIGN CORPORATION

Form 926 is used to report certain transfers of tangible or intangible property to a foreign corporation. While there are certain exceptions to the filing, generally the following special rules apply to reportable transfers: If the transferor is a partnership, the U.S. partners of the partnership, not the partnership itself, are required to report the transfer on Form 926 based on the partner's proportionate share of the transferred property.
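The PFIC mark-to-market election discussed above (an annual income pickup of fair market value over basis, with losses deductible only up to prior unreversed inclusions) can be sketched as follows. The function is a hypothetical simplification that leaves annual basis adjustments to the caller:

```python
def mark_to_market(fmv, basis, prior_inclusions):
    """One year of the PFIC mark-to-market election (simplified).

    Returns (income_or_loss, updated_prior_inclusions). A positive amount
    is ordinary income; a loss is allowed only up to the cumulative prior
    income pickups. Adjusting basis to FMV each year is the caller's job."""
    diff = fmv - basis
    if diff >= 0:
        return diff, prior_inclusions + diff
    allowed_loss = max(diff, -prior_inclusions)  # capped at prior pickups
    return allowed_loss, prior_inclusions + allowed_loss
```

For example, stock bought at 100 marked to 120 produces 20 of ordinary income; a later drop to 90 yields a deduction of only 20, exhausting the prior inclusions.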
If the transfer includes cash, the transfer is reportable on Form 926 if, immediately after the transfer, the person holds, directly or indirectly, at least 10% of the total voting power or the total value of the foreign corporation, or the amount of cash transferred by the person to the foreign corporation during the 12-month period ending on the date of the transfer exceeds $100,000.

The penalty for failure to comply with the reporting requirements is equal to 10% of the fair market value of the property at the time of the transfer, limited to $100,000 unless the failure to comply was due to intentional disregard.

FORM 8865, RETURN OF U.S. PERSONS WITH RESPECT TO CERTAIN FOREIGN PARTNERSHIPS

Form 8865 is required to report information with respect to controlled foreign partnerships, transfers to foreign partnerships, or acquisitions, dispositions, and changes in foreign partnership ownership. A separate Form 8865, along with the applicable schedules, is required for each foreign partnership. There are four different categories which define who is required to file the form and how much information must be provided. The categories are:

Category 1: A U.S. person who owned more than a 50% interest in a foreign partnership at any time during the partnership's tax year.

Category 2: A U.S. person who at any time during the tax year of the foreign partnership owned a 10% or greater interest in the partnership while the partnership was controlled by U.S. persons each owning at least 10% interests. However, if there was a Category 1 filer at any time during that tax year, no person will be considered a Category 2 filer.

Category 3: A U.S.
person, including a related person, who contributed property during that person's tax year to a foreign partnership in exchange for an interest in the partnership, if that person either owned directly or indirectly at least a 10% interest in the foreign partnership immediately after the contribution, or the value of the property contributed by such person or a related person exceeds $100,000. If a domestic partnership contributes property to a foreign partnership, the partners are considered to have transferred a proportionate share of the contributed property to the foreign partnership. However, if the domestic partnership files Form 8865 and properly reports all the required information with respect to the contribution, its partners will generally not be required to report the transfer.

Category 4: A U.S. person who had one of the following reportable events during the tax year: an acquisition, disposition, or change in proportional interests. There are specific requirements to determine whether any of the events are reportable.

A penalty of $10,000 can be assessed for failure to furnish the required information within the time prescribed. This penalty is applied for each tax year of each foreign partnership. Furthermore, once the IRS has sent out a notification of the failure to report the information, an additional $10,000 penalty can be assessed for each 30-day period that the failure continues, up to a maximum of $50,000 for each failure.

FORM 5471, INFORMATION RETURN OF U.S. PERSONS WITH RESPECT TO CERTAIN FOREIGN CORPORATIONS

Form 5471 is used to satisfy the reporting requirement for U.S. persons who are officers, directors, or shareholders in certain foreign corporations. You will be required to file this form if you meet one of the following tests (Category 1 has been repealed):

Category 2: You are a U.S. person who is an officer or director of a foreign corporation in which a U.S.
person has acquired stock that makes him or her a 10% owner with respect to the foreign corporation, or has acquired an additional 10% or more of the outstanding stock of the foreign corporation.

Category 3: You are a U.S. person who acquires stock in a foreign corporation which, when added to any stock owned on the date of acquisition, or considered without regard to stock already owned, meets the 10% stock ownership requirement with respect to the foreign corporation; you are a person who becomes a U.S. person while meeting the 10% stock ownership requirement with respect to the foreign corporation; or you are a U.S. person who disposes of sufficient stock in the foreign corporation to reduce your interest to less than the 10% stock ownership requirement.

Category 4: You are a U.S. shareholder who owns more than 50% of the total combined voting power of all classes of stock entitled to vote, or more than 50% of the total value of the stock, in a foreign corporation for an uninterrupted period of 30 days or more during any tax year of the foreign corporation.

Category 5: You are a U.S. shareholder who owns stock in a controlled foreign corporation ("CFC") for an uninterrupted period of 30 days or more and who owns the stock on the last day of that year. A CFC is defined as a foreign corporation that has U.S. shareholders (counting only those with a 10% interest) that own, on any day of the tax year of the foreign corporation, more than 50% of the total combined voting power of all classes of its voting stock, or of the total value of the stock of the corporation.

Note: In determining stock ownership for these purposes, certain constructive ownership rules apply.

The same penalties that apply for failure to file Form 8865 also apply to Form 5471 (see the discussion in the previous section). The information required to properly complete Form 5471 can be extensive and difficult to obtain.
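The penalty mechanics shared by Forms 8865 and 5471 (a base amount per entity per year, plus a continuation amount per 30-day period after IRS notification, subject to a cap) can be sketched as simple arithmetic. The function below is an illustrative sketch, not tax advice: the function and parameter names are ours, and details such as the 90-day grace period after notification and reasonable-cause relief are not modeled.

```python
import math

def late_filing_penalty(days_late_after_notice,
                        base_penalty=10_000,
                        continuation_penalty=10_000,
                        continuation_cap=50_000):
    """Sketch of the Form 8865/5471 penalty arithmetic described above:
    a $10,000 base penalty per foreign entity per tax year, plus an
    additional $10,000 for each 30-day period the failure continues
    after IRS notification, with continuation penalties capped at
    $50,000 per failure."""
    if days_late_after_notice > 0:
        # Each started 30-day period counts in full.
        periods = math.ceil(days_late_after_notice / 30)
    else:
        periods = 0
    continuation = min(periods * continuation_penalty, continuation_cap)
    return base_penalty + continuation
```

For example, a failure cured within the notice period costs $10,000, while a failure that runs long enough hits the $10,000 + $50,000 ceiling per entity per year.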
FORM 3520, ANNUAL RETURN TO REPORT TRANSACTIONS WITH FOREIGN TRUSTS AND RECEIPT OF CERTAIN FOREIGN GIFTS, AND FORM 3520-A, ANNUAL INFORMATION RETURN OF FOREIGN TRUST WITH A U.S. OWNER

U.S. persons who either create a foreign trust, receive distributions from a foreign trust, or receive gifts or bequests from foreign persons are required to file Form 3520. The failure to do so may subject you to a penalty of 35% of the gross value of any property transferred to the trust, 35% of the gross value of the distributions received from the trust, or 5% of the amount of certain foreign gifts for each month for which the gift goes unreported (not to exceed 25% of the gift). In addition to the filing requirements of Form 3520, there is also a requirement to file Form 3520-A (Annual Information Return of Foreign Trust With a U.S. Owner), which is an annual information return of a foreign trust that has at least one U.S. owner and that is considered a grantor trust. If you are a U.S. person who directly or indirectly transfers property to a foreign trust, the trust is presumed to have a U.S. beneficiary and is considered a grantor trust unless you can demonstrate that, under the terms of the agreement, no income or corpus of the trust can be paid or accumulated for the benefit of a U.S. person. As the U.S. owner, you are responsible for ensuring that the foreign trust annually furnishes certain information to the Internal Revenue Service and to the other owners and beneficiaries of the trust. The Form 3520-A must be filed by March 15 after the end of the foreign trust's tax year, in the case of a calendar-year trust. A six-month extension can be requested on IRS Form. A foreign trust is defined as a trust in which either a court outside of the United States is able to exercise primary supervision over the administration of the trust, or one or more non-U.S.
persons have the authority to control all substantial decisions of the trust. The information return must be filed in connection with the formation of a foreign trust, the transfer of cash or other assets by the settlor or grantor to a foreign trust, and the receipt of any distributions by a U.S. beneficiary from a foreign trust. Any uncompensated use of foreign trust property (e.g., real estate or personal property) by a U.S. grantor, U.S. beneficiary, or any related person is treated as a distribution to the grantor or beneficiary equal to the fair market value of the use of the property and must be reported. The use or loan of trust property will not be considered a distribution to the extent the loan is repaid with a market rate of interest or the user makes a payment equal to the fair market value of such use within a reasonable time frame. Gifts or bequests that you receive in the form of money or property from a foreign person (including a foreign estate) that are valued, in the aggregate, at more than $100,000 annually are required to be reported. Gifts in 2011 in excess of $14,375 (or $14,723 in 2012) from a foreign corporation or foreign partnership that you treat as a gift must also be disclosed. The Form 3520 must be filed by the due date of your individual income tax return, including extensions.

FORM 8938, STATEMENT OF FOREIGN FINANCIAL ASSETS

For tax years beginning after March 18, 2010, U.S. citizens or resident aliens who hold more than $50,000 (in the aggregate) in certain foreign assets (e.g., a foreign financial account, an interest in a foreign entity, or any financial instrument or contract held for investment that is issued by a foreign person) will be required to report information about those assets on an income tax return using Form 8938, Statement of Foreign Financial Assets. This requirement is in addition to the FBAR reporting.
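The "$50,000 in the aggregate" trigger just described is a simple sum-and-compare test. The sketch below illustrates it under the simplifying assumption of a single flat threshold; the actual Form 8938 rules apply different thresholds by filing status and residence, and measure both year-end and maximum values during the year, none of which is modeled here. The function name is ours, not the IRS's.

```python
def form_8938_required(specified_foreign_assets, threshold=50_000):
    """Return True if the aggregate value of the specified foreign
    financial assets exceeds the reporting threshold described above.

    specified_foreign_assets: iterable of (description, value) pairs,
    e.g. foreign accounts, interests in foreign entities, or foreign-
    issued instruments held for investment."""
    total = sum(value for _description, value in specified_foreign_assets)
    # Filing is triggered only when the aggregate is MORE than the threshold.
    return total > threshold
```

A holder of a $30,000 foreign account and a $25,000 foreign fund interest would cross the line in aggregate even though neither asset does alone.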
Noncompliance with these rules for any tax year will result in a minimum failure-to-file penalty of $10,000 and continuing failure-to-file penalties of up to a maximum of $50,000. In addition, a 40% understatement penalty for an underpayment of tax as a result of a transaction involving an undisclosed specified foreign financial asset, as well as criminal penalties, may apply. For tax returns filed after March 18, 2010, the statute of limitations for assessing tax with regard to cross-border transactions or certain foreign assets will be extended to 3 years from the date the required informational reporting related to the transaction or the asset is submitted, if the failure to report was due to reasonable cause and not willful omission. If an omission in excess of $5,000 relates to a foreign financial asset, the statute of limitations will be extended from 3 years to 6 years and will not begin to run until the taxpayer files the return disclosing the reportable foreign asset.

NOTE: The IRS issued guidance suspending the information reporting requirements for tax years beginning after March 18, 2010 until the IRS releases the Form 8938. U.S. persons will be required to attach the form for the suspended tax year to the following year's income tax return required to be filed. A Form 8938 filed for a suspended taxable year with a timely filed income tax or information return (taking into account extensions) will be treated as having been filed on the date that the income tax or information return for the suspended taxable year was filed.

Observation: The definition of a reportable foreign asset is much broader than under the FBAR rules and includes interests in offshore hedge funds and private equity funds. Individuals not required to file a U.S. income tax return for the tax year are not required to file Form 8938 even if the aggregate value of the specified foreign financial assets is more than the appropriate reporting threshold.
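The statute-of-limitations rule above reduces to a small decision: the normal 3-year assessment window stretches to 6 years when an omission of more than $5,000 relates to a foreign financial asset. The sketch below models only that branch; the companion rule that the clock does not even begin to run until the disclosing return is filed is deliberately left out, and the names are illustrative.

```python
def assessment_limitation_years(omission_amount, relates_to_foreign_asset):
    """Sketch of the limitations rule described above: return the number
    of years the IRS has to assess tax. An omission of more than $5,000
    related to a foreign financial asset extends the normal 3-year
    period to 6 years."""
    if relates_to_foreign_asset and omission_amount > 5_000:
        return 6
    return 3
```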
Beginning in 2014, foreign financial institutions will be required to report directly to the IRS certain information about financial accounts held by U.S. taxpayers, or by foreign entities in which U.S. taxpayers hold a substantial ownership interest. To properly comply, a foreign financial institution will have to enter into a special agreement with the IRS by June 30, 2013. A participating institution will be required to implement certain account-opening procedures, to identify U.S. accounts opened on or after the effective date of the agreement, and to have certain procedures for pre-existing private banking accounts. The U.S. account holder will need to provide the institution with a Form W-9 to establish his or her status as a U.S. account holder, and the institution will report the information to the IRS. Those institutions that do not participate, and account owners unwilling to provide information, will be subject to a 30% withholding tax on certain U.S.-source payments, including interest, dividends, and proceeds from the sale of securities.
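The withholding consequence in the last sentence can be sketched as a two-condition check: withholding applies unless the institution participates and the account owner is documented with a Form W-9. This is a simplified illustration of the FATCA mechanics described above, not an implementation of the regulations; the parameter names are ours.

```python
def fatca_withholding(payment_amount, institution_participating, owner_documented):
    """Sketch of the 30% FATCA withholding described above: certain
    U.S.-source payments (interest, dividends, gross proceeds) to a
    non-participating institution, or on behalf of an account owner
    unwilling to provide a Form W-9, are subject to 30% withholding."""
    if institution_participating and owner_documented:
        return 0.0
    # Either a non-participating institution or an undocumented owner
    # triggers the 30% withholding on the payment.
    return round(payment_amount * 0.30, 2)
```

On a $1,000 dividend, a participating institution with a documented owner withholds nothing; either failing condition costs $300.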
The official blog of the Microsoft SharePoint Product Group

In this post, I'll show you how to customize the Quick Launch menu to display several levels of data in a dynamic way, and use this customized menu for quick access to all Views within a List without consuming space on the Quick Launch.

First, let's add a List and make sure it shows on the Quick Launch. Let's call this list "Navigation Test List", and then add 4 Views to the list.

Next, let's write some OM code that, when run, adds a link to each of the List's Views under the List link on the Quick Launch. Add the following code to a new C# Console Application in Visual Studio .NET or 2005 (and don't forget to add a reference to Microsoft.SharePoint.dll):

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Navigation;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Open the site collection and web; dispose them when done.
            using (SPSite site = new SPSite(""))  // site collection URL goes here
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Navigation Test List"];

                // Find the Quick Launch node that points to the list.
                SPNavigationNode rootListLink = web.Navigation.GetNodeByUrl(list.DefaultViewUrl);

                // Add a child link for each of the list's views.
                foreach (SPView view in list.Views)
                {
                    SPNavigationNode node = new SPNavigationNode(view.Title, view.Url, false);
                    rootListLink.Children.AddAsFirst(node);
                }
            }
        }
    }
}

At this point, we have links to all of the Views under the List, but they cannot be displayed, since the menu control ignores the links after the second level. So, let's modify the menu control to display what we want. Perform the following steps to accomplish this task:

1. Navigate to the master page gallery: from the home page, click on Site Actions, then Site Settings, and then on Master Pages, under the Galleries column.

2. Click on the drop-down menu for the master page you want to modify, and then click on Edit in Microsoft Office SharePoint Designer.

3. Locate the Quick Launch menu control, and modify the StaticDisplayLevels and MaximumDynamicDisplayLevels attributes:

<SharePoint:AspMenu

4.
Save your changes and reload the page from the browser. Hover over the links on the Quick Launch. The end result should look like this:

5. Optional: Modify other properties in the menu control to match the look and feel of your site.

The above steps can also be applied to the Top Link Bar.

Useful Links:
· On MSDN, How to: Customize the Display of Quick Launch.

Luis Angel Mex, SDET

Comments:

Assumes you have Visual Studio .NET or 2005.

Thanks for the post, seems like everyone I work with wants their own navigation.

Do you have any idea how to make this work: I have tried walking through this tutorial and it just doesn't work.

Hi, I have created one document library by C# code but am unable to display it on the Quick Launch. Could you help me out with this, please? Mail me at yvanera@sapient.com

Thanks for the post, it's just what I need... but I don't know where the code or DLL must be placed... can you explain it, please?

Excellent article, just what I needed! One question: how do I modify this sample to add links under subsites in the Sites section? I've tried your sample, added a subsite named 'subsite', and tried to modify the line:

SPNavigationNode rootListLink = web.Navigation.GetNodeByUrl("/subsite/default.aspx");

but the added links appear only in the Top Link Bar, not under the subsite in the Quick Launch.

Hi, this is fine, but what is the way to add javascript in this quick launch thing?

All right, here are some answers to your questions:

1.
To perform these actions, you don't need VS 2005 or SharePoint Designer. Notepad should be enough to modify the master page :)

2. The Object Model code should live on a WFE, and it can be anything from a console or Windows app to a DLL that's called from another app. Just create a new app in VS, add a reference to Microsoft.SharePoint.dll, and add the code above to your .cs file. More info on OM programming can be found here:

3. To add links under the headings in the Quick Launch, replace the following line of code:

SPNavigationNode rootListLink = web.Navigation.GetNodeByUrl(list.DefaultViewUrl);

with:

SPNavigationNode rootListLink = web.Navigation.GetNodeById(1026); // or the Id of the node under which you'll add the links

There's an upcoming post that details the differences in the way we handle some URLs and why sometimes GetNodeByUrl doesn't work as expected...

The SDK article has some mistakes and omissions in the task related to creating a custom master page: step 4 should be split in 2. First, create the new subsite, and upload the new master page there. Then run the code to set the .MasterUrl property. Also, the italic tags are confusing. I'll report these mistakes and get a better document out. Thanks for the catch!

Hi, I am currently trying to have a single image as the background which takes in the ms-leftareacell/ms-titlearealeft area, and to set the Quick Launch menu background to be transparent but the list items to display. This is so that the background image will butt against my ms-bannerframe image to form a seamless top and LHS image. I have tried various permutations in my theme.css, but can't seem to make it work. Has anyone been here before? The top of my ms-leftareacell image is the bottom half of a circle, which needs to butt onto the top half of a circle in my ms-bannerframe image. Thanks in advance for any help I can receive.

Hello! I think this post is exactly what I'm looking for.
But it turns out I'm a total ASP noob, so can someone please explain in more detail what to do with the OM code? Compile it into a DLL and sign it, and then? Where shall I put/install it? (GAC?) And how does the server know that it should use it? (XML config?) (What is the "WFE"?) Would appreciate your help. Thx in advance! Homerhirn.

Hi, I've got a question about the top navigation bar. No matter what I do, the current site is always the first link highlighted on the left side. We would like the top navigation to look the same on every site collection, meaning that there are the four links (Home, Organisations, Projects, Knowledge Base) on every site in that order, without the current site collection highlighted on the left. Any ideas?

I have tried to get that 3rd level of menu to appear in the flyouts with no success. How do you tell it what to display there? Is it possible in the Nav settings GUI, or do you have to do it in the OM?

Hi, I have the dynamic menus going fine (thanks!). The background is white, no matter what theme I use. The theme I want to use has white text, so the menus are unreadable. Any ideas on how to change the background colour?

This flyout menu does not work in Firefox... any workarounds?

I'm trying to incorporate a tree view in my SharePoint site. The look needs to be how an asp:TreeView control would render the tree - with expand/collapse and subnodes appearing below the parent node instead of to the right (as shown above).

I found it DOES work in Firefox, but some of the CSS styling is causing the fly-outs to display strangely. In the master page, below the Quick Launch menu control listed above, is this:

<LevelSubMenuStyles> <asp:SubMenuStyle <asp:SubMenuStyle </LevelSubMenuStyles>

I removed the second entry and the fly-out now displays fine in Firefox and Safari.

Can we implement a role-based Quick Launch, meaning make menu items enabled/disabled based on user role?

I'd be interested in the same thing as Prashant.
Has anyone got any ideas how to approach this "problem"? :)

Note: These instructions apply to MOSS 2007.

I am comparatively new to SharePoint. I want to implement a flyout menu for navigation, which I did successfully. Now I want to change the behaviour of the left navigation, i.e. when a user goes to a menu, its child menus are displayed. This is the default behaviour of the menu. Here I want to highlight the parent menu, i.e. the user should know which menu is the parent of the child menus he is looking into. In short, I want to know how to highlight the parent menu items. I will appreciate any help in this regard. Thanks, Sandeep

Hi Luis, it's a very useful blog... I've tried it on my portal home page. It's working fine, but I don't know why half of the words are hiding behind the body of the site?

I am able to implement fly-out menus successfully, but only when I log in with admin credentials am I able to see them. When I log in with viewer or member rights, the pop-out menus and the arrow don't appear. Any help on this would be very useful. Meghna

I've created a navigation like the one shown at the top of the page, and it is more comfortable for our SharePoint. But I still have one question. When I want to change the style, I cannot find a style element in core.css for the 3rd level. In the picture at the top of the page the items are blue like links, same as my problem. Which CSS element do I have to change to get the same text colour as the items on the higher levels?

Does anyone have a solution for the styling problem of the subsite headers? First of all, the bullet comes as a duplicate, and the width varies depending on how you navigate.
Here is the code that I created to delete the flyouts, if you install them as Luis advised:

namespace FlyOut
{
    class Program
    {
        static void Main(string[] args)
        {
            SPSite site = new SPSite("");
            SPWeb web = site.OpenWeb();
            SPList list = web.Lists["TestList"];
            SPNavigationNode rootListLink = web.Navigation.GetNodeByUrl(list.DefaultViewUrl);

            foreach (SPNavigationNode curnode in rootListLink.Children)
            {
                curnode.Delete();
            }

            SPNavigationNode thisnode = rootListLink.Parent;
            thisnode.Delete();
        }
    }
}

In the previous post, this line of code:

thisnode.Delete();

will delete the parent node (meaning the list link will no longer show up in the Quick Launch menu), so you will have to relink your SharePoint list in the Quick Launch menu by going to List Settings -> "Title, description, and navigation" and making sure the "Display this list on the Quick Launch?" Yes radio button is chosen. This is only needed if you chose to add the line of code above.

I created this console application in Visual Studio 2005 so it would delete the flyouts that linked to custom SharePoint pages instead of only built-in list views. As you can imagine, deleting my tests was a necessity because my first few attempts did not work out as expected. Basically I was creating linked node lists that linked to other node lists; I needed the ability to delete the parent nodes until all of the flyouts were gone. I will also share the code I used to add custom links to my client's SharePoint site soon.

How do I create a link in the Quick Launch to create a new document in a document library?

Well, if you don't care if the link is a fly-out link, you can add custom links to your Quick Launch by going to Site Settings -> Quick Launch, and adding a new link or heading. To get the correct URL, navigate to the document library and click on Upload, and then copy the URL it takes you to. Remember to remove everything after upload.aspx.

Sorry, didn't read your comment carefully.
Instead of going to "Upload" in your document library, you would just take the URL of your document library and concatenate /Forms/template.doc (or whatever file extension you used for your new documents). So if your document library was called "Shared", your URL would look something like.

Please could someone tell me how to automatically get newly created subsites to be under (or fly out from) the Sites heading? All my subsites show as headings on the Quick Launch unless I manually move them under the Sites heading.

WSS 3.0 provides many exciting new navigation features to heighten users' awareness of site context. Two new breadcrumb controls, located at the top left and in the middle area of the page, give users the current site's upper levels and internal...

Thanks, I can get the 2-tier fly-out menus working for views. Is there a way to apply the same principle to folders in a document library instead of views?
http://blogs.msdn.com/sharepoint/archive/2007/04/26/customizing-the-quick-launch-menu-adding-fly-out-menus-to-sharepoint-navigation.aspx
Just something I was trying to do off the top of my head. I've looked it over dozens of times, and I can't seem to find what I did wrong. I think it has something to do with calling on the corn variable inside the class, though.

Code:
#include <stdafx.h>
#include <cmath>
#include <cstdlib>
#include <iostream>
using namespace std;

class vegetable{
public:
    int broccoli;
    int carrot;
    int bean;
    static int corn;
    int cauliflower;
}

int main()
{
    char cornanswer;
    char Y = "Y";
    cout << " Is " << vegetable::corn << "a vegetable?";
    cout << "Say Y for yes and N for no";
    cin >> cornanswer;
    if (cornanswer == Y)
    {
        cout << "YES!! You are very smart, it is a vegetable!";
    }
    else
    {
        cout << "Wow, you are very stupid! That answer is wrong!";
    }
    return 0;
}

Build Errors.

Code:
1>Mess.cpp(24): error C2628: 'vegetable' followed by 'int' is illegal (did you forget a ';'?)
1>Mess.cpp(25): error C3874: return type of 'main' should be 'int' instead of 'vegetable'
1>Mess.cpp(27): error C2440: 'initializing' : cannot convert from 'const char [2]' to 'char'
1>          There is no context in which this conversion is possible
1>Mess.cpp(45): error C2440: 'return' : cannot convert from 'int' to 'vegetable'
1>          No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
http://cboard.cprogramming.com/cplusplus-programming/124354-what%27s-wrong-code.html
Opened 9 years ago
Closed 9 years ago
Last modified 6 years ago

#7488 closed (fixed)

Inline forms break when the foreign model does not inherit directly from models.Model

Description

This sample code demonstrates the problem:

from django.db import models
from django.contrib import admin

class Base(models.Model):
    base_name = models.CharField(
        "Base Name",
        max_length = 40,
    )

class Child(Base):
    child_name = models.CharField(
        "Child Name",
        max_length = 40,
    )

class Inline(models.Model):
    child = models.ForeignKey(
        Child,
    )
    inline_name = models.CharField(
        "Inline Name",
        max_length = 40,
    )

class InlineAdmin(admin.StackedInline):
    model = Inline

class ChildAdmin(admin.ModelAdmin):
    inlines = (InlineAdmin,)

try:
    admin.site.register(Child, ChildAdmin)
except admin.sites.AlreadyRegistered:
    pass

When the admin's add_view for the "Child" class runs, a DoesNotExist exception is raised. If either of the following changes is made, the problem does not present itself:

- removing the "edit_inline" functionality by removing "inlines" from ChildAdmin (not an acceptable workaround)
- making the Child model inherit from models.Model (better, but still very inconvenient)

I traced the problem back to the get_queryset function in django.newforms.models.BaseInlineFormset, which reads as such:

def get_queryset(self):
    """
    Returns this FormSet's queryset, but restricted to children of
    self.instance
    """
    kwargs = {self.fk.name: self.instance}
    return self.model._default_manager.filter(**kwargs)

Here, self.instance == Child(), an unsaved Child model instance (with no primary key), so the resulting call to filter is looking up an Inline object by an undefined primary key on the Child object. I have no idea why this problem does not present itself when Child inherits from models.Model, but I do have a simple patch to fix this bug.
Attachments (2)
Change History (9)

Changed 9 years ago by

comment:1 Changed 9 years ago by

comment:2 Changed 9 years ago by

comment:3 Changed 9 years ago by

Let me do a quick brain dump here. Don't want to forget what I have tracked down thus far.

- The DoesNotExist is coming from source:/branches/newforms-admin/django/db/models/fields/related.py#L231
- The above is being called due to source:/branches/newforms-admin/django/db/models/fields/related.py#L137
- Ultimately, Choice().poll does the right thing and raises the DoesNotExist exception. However, when translating this into a queryset, Choice.objects.filter(poll=Poll()) will evaluate to an empty list when Poll does not subclass anything. But when a parent class (multi-table inheritance) is introduced, it will throw this DoesNotExist exception.
- The patch here does indeed fix the problem, but I begin to wonder if this should be fixed upstream?

comment:4 Changed 9 years ago by

As per IRC conversation it seems this should be fixed up in the stack, so changing to trunk and querysets component.

comment:5 Changed 9 years ago by

So, if it's not a newforms-admin merge blocker bug, milestone should be 1.0 instead of 1.0 alpha.

Changed 9 years ago by

failing tests

comment:6 Changed 9 years ago by

comment:7 Changed 6 years ago by

Milestone 1.0 deleted

Patch to BaseInlineFormset.get_query_set
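The attached patch itself is not reproduced in the ticket text, but the guard it implies amounts to short-circuiting the queryset when the parent instance has no primary key yet. The following standalone sketch (plain Python, with hypothetical stand-in classes rather than Django's real ones) illustrates that logic:

```python
class FakeManager:
    """Stands in for model._default_manager; records filter calls."""
    def filter(self, **kwargs):
        return ["filtered", kwargs]

    def none(self):
        return []


class FakeInstance:
    pk = None  # unsaved parent: no primary key assigned yet


class BaseInlineFormSetSketch:
    def __init__(self, instance, manager, fk_name):
        self.instance = instance
        self.manager = manager
        self.fk_name = fk_name

    def get_queryset(self):
        # Guard: an unsaved parent cannot have related children yet,
        # so return an empty queryset instead of filtering on pk=None.
        if self.instance.pk is None:
            return self.manager.none()
        return self.manager.filter(**{self.fk_name: self.instance})


fs = BaseInlineFormSetSketch(FakeInstance(), FakeManager(), "child")
print(fs.get_queryset())  # → []
```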
https://code.djangoproject.com/ticket/7488
The Entity Framework DbContext class uses a convention over configuration approach to development. When everything is working correctly, you can generate and populate a database just by writing a little bit of code and running "enable-migrations" and "update-database" from the Package Manager Console window in Visual Studio. No XML mapping or configuration files are required; see EF Code-Based Migrations Walkthrough for more details.

When things do not work, the conventions are frustrating because they form an impenetrable fog of mystery. Of course, having everything explicitly configured isn't always clearer or easier to troubleshoot, but here are some common problems I've observed with EF 4.1 – EF 5.0, and some steps you can take to avoid the problems.

A couple of popular errors you might run across include:

“System.Data.ProviderIncompatibleException: An error occurred while getting provider information from the database”

And the timeless message:

“System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server.”

These messages can happen at any stage, from enable-migrations, to update-database, to running the application. The key to getting past one of these messages is figuring out which database the framework is trying to reach, and then making that database available (or pointing the framework somewhere else). One way to find out is to add some debugging code to your DbContext class:

public class DepartmentDb : DbContext
{
    public DepartmentDb()
    {
        Debug.Write(Database.Connection.ConnectionString);
    }

    public DbSet<Person> People { get; set; }
}

Run the application with the debugger and watch the Visual Studio Output window. Or, set a breakpoint and observe the ConnectionString property as you go somewhere in the application that tries to make a database connection.
Chances are you'll see something like the following:

Data Source=.\SQLEXPRESS;Initial Catalog=SomeNamespace.DepartmentDb;Integrated Security=True;

When there is no other configuration or code in place, the Entity Framework will try to connect to the local SQL Server Express database (.\SQLEXPRESS). Visual Studio 2010 will install SQL Server Express by default, so if you are not establishing a connection you might have customized the installation, shut down SQL Server Express, or did one of a thousand other things you might possibly do to make the database unavailable (like change the network protocols).

One way to see what SQL services are available on your machine is to go to the Package Manager Console in Visual Studio and execute the following command (output shown below):

PM> Get-Service | Where-Object {$_.Name -like '*SQL*'}

Status   Name             DisplayName
------   ----             -----------
Stopped  MSSQLFDLauncher  SQL Full-text Filter Daemon Launche...
Stopped  MSSQLSERVER      SQL Server (MSSQLSERVER)
Stopped  SQLBrowser       SQL Server Browser
Stopped  SQLSERVERAGENT   SQL Server Agent (MSSQLSERVER)
Running  SQLWriter        SQL Server VSS Writer

In the above output you can see I do not have a default SQLEXPRESS instance available (it would list itself as MSSQL$SQLEXPRESS), but I do have a default SQL Server instance installed (MSSQLSERVER - but it is not running). I'd have to start the service and give the Entity Framework an explicit connection string for this scenario to work (see Controlling Connections below).

Visual Studio 2012 installs SQL Server LocalDb by default. LocalDb is the new SQL Express, with some notable differences from SQL Express 2008. A LocalDb instance runs as a user process, requires a different connection string, and stores the system databases under your hidden AppData directory. Since LocalDb is a user process and not a service, you won't see it listed in the output of the Get-Service command above.
You can, however, run SqlLocalDb.exe from the Package Manager Console (or the command line) to see if LocalDb is installed.

PM> SqlLocalDb info
v11.0

If the executable isn't found, chances are you do not have LocalDb installed. In the above output, I can see I have LocalDb v11.0 installed. EF 5 will use LocalDb if it doesn't detect SQL Express running, so when you look at the Database.Connection.ConnectionString property in the constructor, like we did earlier, you might see the following instead:

Data Source=(localdb)\v11.0;Initial Catalog=SomeNamespace.DepartmentDb;Integrated Security=True;

(localdb)\v11.0 is the LocalDb connection string, and if you are not connecting to LocalDb then you might need to reinstall (here is a link that will eventually take you to the MSI file).

How does the framework know to use LocalDb instead of Express? It's done through configuration. If you open your application's config file, you should see the following inside:

<entityFramework>
  <defaultConnectionFactory type="System.Data.Entity.Infrastructure.LocalDbConnectionFactory, EntityFramework">
    <parameters>
      <parameter value="v11.0" />
    </parameters>
  </defaultConnectionFactory>
</entityFramework>

Using the LocalDbConnectionFactory means you'll no longer try to connect to SQL Express by default. There is also a connection factory for SQL Compact. You'll find this factory in your config file if you install the EF SQL Compact NuGet package.

Another common problem I've seen popping up when using Code First migrations is typically realized with one of the following exceptions:

System.Data.SqlClient.SqlException: Login failed for user '[Your account here]'.

... and ...

System.Data.SqlClient.SqlException (0x80131904): CREATE DATABASE permission denied in database 'master'.

These permission issues can be hard to fix. The first thing I'd do is add some debugging code to verify the connection string being used by the Entity Framework (the debugging code demonstrated earlier).
Once you know the server the Entity Framework is trying to reach, you should try to log in to the server with a management tool (like SQL Management Studio, which also works with LocalDb), if you can. The problem is you might not be able to log in with your account. Even if you are an administrator on your local machine, you might find yourself with limited privileges in your own local SQL Server.

One scenario where this can happen is when SQL Server or SQL Server Express is installed onto your machine by a different user, or you installed the software using a different account, or perhaps your machine was built from a cloned image file. As of SQL 2008, just being an administrator in Windows doesn't make you a sysadmin in SQL Server.

To fix the permission issues you can try to log in to the server using the sa account, but of course you must know the password and the sa account must be enabled. You can also try to log in to your machine using the Windows account used for the software installation. Once logged in with high privileges, you'll need to add your Windows login to the list of SQL Server logins and (ideally) put yourself in the sysadmin server role.

When all else fails, you can try to regain control of SQL Server by starting the server in single user mode (see Connect to SQL Server When System Administrators Are Locked Out). The SQL Express blog also published a script to automate this process (see How to take ownership of your local SQL Server), which should also work with SQL 2012. The nuclear option, if you don't care about any of the local databases, is to uninstall and reinstall SQL Server.

If you are using SQL LocalDb, don't ever delete the physical .mdf and .log files for your database without going through the SQL Server Object Explorer in Visual Studio or in SQL Server Management Studio.
If you delete the files only, you end up with an error like the following in a web application:

Cannot attach the file '…\App_Data\DepartmentDb.mdf' as database 'DepartmentDb'.

Or the following error in a desktop app:

SqlException: Cannot open database "DepartmentDb" requested by the login. The login failed.

In this case the database is still registered in LocalDb, so you'll need to go into the Object Explorer and also delete the database there before SQL Server will recreate the files.

Controlling Connections

Explicitly controlling the connection string for the Entity Framework is easy. You can pass a connection string to the DbContext constructor, or you can pass the name of a connection string that resides in the application's configuration file. For example, the following code will make sure the Entity Framework connects to the local SQL Server default instance and uses a database named departments.

public class DepartmentDb : DbContext
{
    public DepartmentDb()
        : base(@"data source=.; initial catalog=departments; integrated security=true")
    {
    }

    public DbSet<Person> People { get; set; }
}

While the following would tell the framework to look for a connection string named departments in the config file:

public class DepartmentDb : DbContext
{
    public DepartmentDb() : base("departments")
    {
    }

    public DbSet<Person> People { get; set; }
}

If the Entity Framework does not find a connection string named "departments" in your config file, it will assume you want a database named departments on the local SQL Express or LocalDb instances (depending on which connection factory is in place). Finally, if you do not pass any hints to the DbContext constructor, it will also look for a connection string with the same name as your class (DepartmentDb) in the config file.

As you can see, there are lots of options the Entity Framework tries when establishing a connection.
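For completeness, a named connection string like "departments" lives in the connectionStrings section of app.config or web.config. A sketch of such an entry (the server and database names here are placeholders):

```xml
<connectionStrings>
  <add name="departments"
       connectionString="data source=.; initial catalog=departments; integrated security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```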
The goal of all this work is to make it easy for you to use databases, but when software is misconfigured or not installed correctly, all these options can also make troubleshooting a bit difficult. When troubleshooting connection problems with the Entity Framework, the first step is always to figure out what server and database the framework is trying to use. If the problems are permission related, the next step is to find a way to make yourself a sysadmin on the server (or at least get yourself in the dbcreator role). If the problem is making a connection to a server instance that doesn't exist, you can always explicitly configure a connection string for an existing instance of SQL Server. If all else fails, use SQL Server Compact, as nearly anything you can do with Code First Entity Framework will work with the compact edition of SQL Server.

@Andrew - Ah, thanks. But - the one place I don't see this is with "enable-migrations -verbose", which is usually the first step. I see "Checking if the context targets an existing database..." followed by an exception if the server isn't available. Am I missing something (or could you add that to the verbose output for enable-migrations)?
https://odetocode.com/blogs/scott/archive/2012/08/15/a-troubleshooting-guide-for-entity-framework-connections-amp-migrations.aspx
What does static mean?

When you declare a variable or a method as static, it belongs to the class rather than to a specific instance. This means that only one instance of a static member exists, even if you create multiple objects of the class, or if you don't create any. It will be shared by all objects. The static keyword can be used with variables, methods, code blocks and nested classes.

Static Variables

Example:

public class Counter {
    public static int COUNT = 0;

    Counter() {
        COUNT++;
    }
}

The COUNT variable will be shared by all objects of that class. Now we create objects of our Counter class in main and access the static variable.

public class MyClass {
    public static void main(String[] args) {
        Counter c1 = new Counter();
        Counter c2 = new Counter();
        System.out.println(Counter.COUNT);
    }
}
// Outputs "2"

The output is 2, because the COUNT variable is static and gets incremented by one each time a new object of the Counter class is created. You can also access the static variable using any object of that class, such as c1.COUNT.

Static Methods

A static method belongs to the class rather than to instances. Thus, it can be called without creating an instance of the class. It is used for altering the static contents of the class. There are some restrictions on static methods:

- A static method can not use non-static members (variables or functions) of the class.
- A static method can not use the this or super keywords.

Example:

public class Counter {
    public static int COUNT = 0;

    Counter() {
        COUNT++;
    }

    public static void increment() {
        COUNT++;
    }
}

Static methods can also be called from an instance of the class.

public class MyClass {
    public static void main(String[] args) {
        Counter.increment();
        Counter.increment();
        System.out.println(Counter.COUNT);
    }
}
// Outputs "2"

The output is 2 because COUNT gets incremented twice by the static method increment(). Similar to static variables, static methods can also be accessed using instance variables.
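The first restriction can be seen concretely in a small sketch (my own example, not from the article): the commented-out line is exactly the access the compiler rejects, because a static method has no instance to read the field from.

```java
class Gadget {
    private int serial = 42;      // instance member: one per object
    private static int made = 0;  // static member: one per class

    Gadget() {
        made++;
    }

    static int produced() {
        // return serial;  // would NOT compile: no instance to read 'serial' from
        return made;       // fine: static members are visible to static methods
    }
}

public class Main {
    public static void main(String[] args) {
        new Gadget();
        new Gadget();
        System.out.println(Gadget.produced()); // Outputs "2"
    }
}
```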
Static Blocks

Static code blocks are used to initialise static variables. These blocks are executed immediately after the declaration of the static variables.

Example:

public class Saturn {
    public static final int MOON_COUNT;

    static {
        MOON_COUNT = 62;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(Saturn.MOON_COUNT);
    }
}
// Outputs "62"

The output is 62, because the variable MOON_COUNT is assigned that value in the static block.

Static Nested Classes

A class can have a static nested class which can be accessed by using the outer class name.

Example:

public class Outer {
    public Outer() {
    }

    public static class Inner {
        public Inner() {
        }
    }
}

In the above example, class Inner can be directly accessed as a static member of class Outer.

public class Main {
    public static void main(String[] args) {
        Outer.Inner inner = new Outer.Inner();
    }
}

One use case of static nested classes is the Builder pattern, popularly used in Java.
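As a taste of that, here is a minimal, hypothetical Builder built around a static nested class (the Pizza example is my own, not from the original article). Because Builder is static, callers can write Pizza.Builder without ever having a Pizza instance first:

```java
class Pizza {
    private final String size;
    private final boolean cheese;

    // Private constructor: instances are only created through the Builder.
    private Pizza(Builder b) {
        this.size = b.size;
        this.cheese = b.cheese;
    }

    @Override
    public String toString() {
        return size + (cheese ? " with cheese" : "");
    }

    // Static nested class: usable as Pizza.Builder without a Pizza instance.
    public static class Builder {
        private String size = "medium";
        private boolean cheese = false;

        public Builder size(String size) { this.size = size; return this; }
        public Builder cheese(boolean cheese) { this.cheese = cheese; return this; }

        public Pizza build() { return new Pizza(this); }
    }
}

public class Main {
    public static void main(String[] args) {
        Pizza p = new Pizza.Builder().size("large").cheese(true).build();
        System.out.println(p); // Outputs "large with cheese"
    }
}
```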
https://www.freecodecamp.org/news/java-static-keyword-explained/
Last edited: March 22nd 2018

This notebook is an introduction to a set of partial differential equations which are widely used to model aerodynamics, atmosphere and climate, explosive detonations and even astrophysics. It only gives a small taste of the world of hyperbolic PDEs, Riemann problems and computational fluid dynamics, and the interested reader is encouraged to investigate the field further [1].

The Euler equations govern adiabatic and inviscid flow of a fluid. In the Froude limit (no external body forces) in one dimension, with density $\rho$, velocity $u$, total energy $E$ and pressure $p$, they are given in dimensionless form as

\begin{align*} \frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} &= 0, \\ \frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^2 + p)}{\partial x} &= 0, \\ \frac{\partial (\rho E)}{\partial t} + \frac{\partial u(E + p)}{\partial x} &= 0. \end{align*}

These three equations describe conservation of mass, momentum and energy, respectively. In order to solve them numerically, we start by importing NumPy and setting up the plotting environment.

%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt

newparams = {'font.size': 14, 'figure.figsize': (14, 7),
             'mathtext.fontset': 'stix', 'font.family': 'STIXGeneral',
             'lines.linewidth': 2}
plt.rcParams.update(newparams)

For an ideal gas, the total energy is the sum of the kinetic and potential contributions, i.e.

\begin{equation*} E = \frac{1}{2} \rho u^2 + \frac{p}{\gamma - 1}, \end{equation*}

where $\gamma$ is the ratio of specific heats for the material in our system. We shall be considering air, which has $\gamma = 1.4$. Note that $e = \frac{p}{(\gamma - 1)\rho}$ is the specific internal energy for ideal gases.
Conversions between energy and pressure will be useful to us later, so we define appropriate functions:

def energy(rho, u, p):
    return 0.5 * rho * u ** 2 + p / 0.4

def pressure(rho, u, E):
    return 0.4 * (E - 0.5 * rho * u ** 2)

Conveniently, this first-order hyperbolic system of PDEs can be written as a set of conservation laws, i.e.

\begin{equation*} \partial_t {\bf Q} + \partial_x {\bf F(Q)} = {\bf 0}\,. \end{equation*}

Here, the vector of conserved quantities ${\bf Q}$ and their fluxes ${\bf F(Q)}$ are given by

\begin{equation*} {\bf Q} = \begin{bmatrix} \rho \\ \rho u \\ E \end{bmatrix} \,,\quad {\bf F} = \begin{bmatrix} \rho u \\ \rho u^2 + p \\ u(E+p) \end{bmatrix} \,. \end{equation*}

Given the state of the system (i.e. the vector of conserved quantities), we compute the flux as:

def flux(Q):
    rho, u, E = Q[0], Q[1] / Q[0], Q[2]
    p = pressure(rho, u, E)
    F = np.empty_like(Q)
    F[0] = rho * u
    F[1] = rho * u ** 2 + p
    F[2] = u * (E + p)
    return F

Consider a spatial domain $[x_L, x_R]$ and two points in time $t_2 > t_1$. By integrating the Euler equations in differential form in space and time, we acquire the integral form,

\begin{equation*} \int_{x_L}^{x_R} {\bf Q}(x, t_2) {\rm d} x = \int_{x_L}^{x_R} {\bf Q}(x, t_1) {\rm d} x + \int_{t_1}^{t_2} {\bf F}({\bf Q}(x_L, t)) {\rm d} t - \int_{t_1}^{t_2} {\bf F}({\bf Q}(x_R, t)) {\rm d} t . \end{equation*}

This relation is the basis for our spatial and temporal discretisations. For simplicity, we take our computational domain to be $[0, 1]$ and divide it into $N$ equal cells of width $\Delta x = 1/N$:

N = 100
dx = 1 / N
x = np.linspace(-0.5 * dx, 1 + 0.5 * dx, N + 2)

Note that we have added one extra cell to each side of the domain. These are so-called ghost cells, which allow us to apply appropriate boundary conditions (more on that shortly). It is also necessary to discretise the state vector $\bf Q$ and intercell fluxes $\bf F$.
We denote by ${\bf Q}_i^n$ the spatial average within the cell $[x_{i-1/2}, x_{i+1/2}]$ at time $t_n$, i.e.

\begin{equation*} {\bf Q}_i^n = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} {\bf Q}(x, t_n) {\rm d} x , \end{equation*}

and initialise a numpy array to store the values within each cell:

Q = np.empty((3, len(x)))

Similarly, the temporal average of the flux across the cell boundary at $x_{i+1/2}$ is denoted ${\bf F}_{i+1/2}^n$:

\begin{equation*} {\bf F}_{i+1/2}^n = \frac{1}{\Delta t^n} \int_{t_n}^{t_{n+1}} {\bf F}({\bf Q}(x_{i+1/2}, t)) {\rm d} t . \end{equation*}

Inserting these discretisations into the integral form of the Euler equations, we get a conservative update formula for each computational cell which is $exact$:

\begin{equation*} {\bf Q}_i^{n+1} = {\bf Q}_i^n + \frac{\Delta t^n}{\Delta x} \left( {\bf F}_{i-\frac{1}{2}}^n - {\bf F}_{i+\frac{1}{2}}^n \right) \end{equation*}

This formula is our method for advancing the system forwards in time, and the numerical approximations lie solely in the evaluations of the intercell fluxes ${\bf F}_{i \pm 1/2}^n$. We cannot, however, choose the time step $\Delta t^n$ as large as we want, due to restrictions on stability. By choosing a Courant-Friedrichs-Lewy (CFL) coefficient $c \leq 1$, the time step can safely be set to

\begin{equation*} \Delta t^n = \frac{c \Delta x}{S_{\rm max}^n}, \end{equation*}

where $S_{\rm max}^n$ is a measure of the maximum wave speed present in the system. We use a common approximation which finds the cell with the highest sum of material and sound speeds, i.e.

\begin{equation*} S_{\rm max}^n = \max_i ( |u_i^n| + a_i^n ) , \end{equation*}

where the speed of sound for ideal gases is given by

\begin{equation*} a = \sqrt{\frac{\gamma p}{\rho}} .
\end{equation*}

def timestep(Q, c, dx):
    rho, u, E = Q[0], Q[1] / Q[0], Q[2]
    a = np.sqrt(1.4 * pressure(rho, u, E) / rho)
    S_max = np.max(np.abs(u) + a)
    return c * dx / S_max

Many different procedures exist for approximating the intercell fluxes ${\bf F}_{i \pm 1/2}$. For simplicity, we implement the relatively straightforward FIrst-ORder CEntred (FORCE) scheme [2]. Given the state of two neighbouring cells, the FORCE flux at the interface is computed as

\begin{equation*} {\bf F}_{\rm FORCE} ({\bf Q}_L, {\bf Q}_R) = \frac{1}{2} \left( {\bf F}_0 + \frac{1}{2} ({\bf F}_L + {\bf F}_R) \right) + \frac{1}{4} \frac{\Delta x}{\Delta t^n} ({\bf Q}_L - {\bf Q}_R) . \end{equation*}

Here, ${\bf F}_K = {\bf F}({\bf Q}_K)$ and

\begin{equation*} {\bf Q}_0 = \frac{1}{2} ({\bf Q}_L + {\bf Q}_R) + \frac{1}{2} \frac{\Delta t^n}{\Delta x} ({\bf F}_L - {\bf F}_R) . \end{equation*}

These equations correspond to Eqns. (16) and (19) in [2].

At this point, the reason for our previously introduced ghost cells becomes apparent. Since each cell interface requires information from both sides, the first and $N$-th cells in our domain lack information from the left and right, respectively. By initialising a ghost cell on each side and transmitting information from within the domain outwards for each time step, these fictional cells provide the necessary information for performing our computations.

Implementing the FORCE scheme in Python, we have

def force(Q):
    Q_L = Q[:, :-1]
    Q_R = Q[:, 1:]
    F_L = flux(Q_L)
    F_R = flux(Q_R)
    Q_0 = 0.5 * (Q_L + Q_R) + 0.5 * dt / dx * (F_L - F_R)
    F_0 = flux(Q_0)
    return 0.5 * (F_0 + 0.5 * (F_L + F_R)) + 0.25 * dx / dt * (Q_L - Q_R)
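One useful property of the conservative update formula is worth demonstrating before assembling the full solver (this check is my own addition, not part of the original notebook): because each interior flux appears once with a plus sign and once with a minus sign, the total of Q over the domain changes only through the boundary fluxes, no matter what flux function is used.

```python
import numpy as np

# Demonstration: the conservative update Q_i += dt/dx * (F_{i-1/2} - F_{i+1/2})
# changes the sum of Q only through the boundary fluxes, because the interior
# flux terms cancel telescopically. The fluxes here are arbitrary random values.
rng = np.random.default_rng(0)
N_cells, dx, dt = 8, 0.1, 0.01

Q = rng.random(N_cells)
F = rng.random(N_cells + 1)  # one flux per cell interface

Q_new = Q + dt / dx * (F[:-1] - F[1:])

lhs = Q_new.sum()
rhs = Q.sum() + dt / dx * (F[0] - F[-1])
print(np.isclose(lhs, rhs))  # → True
```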
The test consists of a Riemann problem, which means that the PDE is coupled with a set of piecewise constant initial conditions separated by a single discontinuity. Intuitively, the test can be thought of as a tube with a membrane separating air of two different densities (and pressures). At $t=0$, the membrane is removed, which results in a rarefaction wave, a contact discontinuity and a shock wave. The initial conditions are given by
\begin{equation*} {\bf Q}(x, 0) = \begin{cases} {\bf Q}_L \quad {\rm if} \quad x \leq 0.5 \\ {\bf Q}_R \quad {\rm if} \quad x > 0.5 \end{cases} , \quad \begin{pmatrix} \rho \\ u \\ p \end{pmatrix}_L = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} , \quad \begin{pmatrix} \rho \\ u \\ p \end{pmatrix}_R = \begin{pmatrix} 0.125 \\ 0 \\ 0.1 \end{pmatrix} . \end{equation*}
We therefore initialise the vector of conserved variables according to

# Density:
Q[0, x <= 0.5] = 1.0
Q[0, x > 0.5] = 0.125
# Momentum:
Q[1] = 0.0
# Energy:
Q[2, x <= 0.5] = energy(1.0, 0.0, 1.0)
Q[2, x > 0.5] = energy(0.125, 0.0, 0.1)

c = 0.9
T = 0.25
t = 0
while t < T:
    # Compute time step size
    dt = timestep(Q, c, dx)
    if t + dt > T:
        # Make sure to end up at specified final time
        dt = T - t
    # Transmissive boundary conditions
    Q[:, 0] = Q[:, 1]      # Left boundary
    Q[:, N + 1] = Q[:, N]  # Right boundary
    # Flux computations using FORCE scheme
    F = force(Q)
    # Conservative update formula
    Q[:, 1:-1] += dt / dx * (F[:, :-1] - F[:, 1:])
    # Go to next time step
    t += dt

In order to compare our results, an exact reference solution is provided in the file "ref.txt". For the simple case of a single contact discontinuity, this Riemann problem can be solved up to arbitrary accuracy with an exact (iterative) solver. The implementation of the exact solver is outside the scope of this notebook, but the interested reader is referred to the extensive resource by Toro [1].
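The conservative character of the update formula can also be checked numerically: summing the update over all cells, the intercell fluxes cancel pairwise (a telescoping sum), so the total of each conserved quantity changes only through the two outermost interfaces. A small self-contained sketch with arbitrary, hypothetical states and fluxes:

```python
import numpy as np

# Telescoping-sum check of the conservative update (hypothetical data).
rng = np.random.default_rng(0)
N, dx, dt = 8, 0.1, 0.01
Q = rng.random((3, N)) + 1.0   # arbitrary positive cell averages
F = rng.random((3, N + 1))     # one flux per cell interface

# Conservative update applied to every cell.
Q_new = Q + dt / dx * (F[:, :-1] - F[:, 1:])

# Interior fluxes cancel: only the outer interface fluxes change the total.
lhs = Q_new.sum(axis=1)
rhs = Q.sum(axis=1) + dt / dx * (F[:, 0] - F[:, -1])
assert np.allclose(lhs, rhs)
```

This is why the first-order diffusion discussed below smears the solution but never creates or destroys mass, momentum or energy in the interior of the domain.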
# Load reference solution
ref_sol = np.loadtxt('ref.txt')
ref_sol = np.transpose(ref_sol)

# Numerical results for density, velocity, pressure and internal energy
num_sol = [Q[0], Q[1] / Q[0], pressure(Q[0], Q[1] / Q[0], Q[2]),
           pressure(Q[0], Q[1] / Q[0], Q[2]) / (0.4 * Q[0])]

With the reference solution in place, we can plot how density, velocity, pressure and internal energy are distributed throughout the domain at the final time $t = T$.

fig, axes = plt.subplots(2, 2, sharex='col', num=1)
axes = axes.flatten()
labels = [r'$\rho$', r'$u$', r'$p$', r'$e$']

# For each subplot, plot numerical and exact solutions
for ax, label, num, ref in zip(axes, labels, num_sol, ref_sol[1:]):
    ax.plot(x, num, 'or', fillstyle='none', label='Numerical')
    ax.plot(ref_sol[0], ref, 'b-', label='Exact')
    ax.set_xlim([0, 1])
    ylim_offset = 0.05 * (np.max(num) - np.min(num))
    ax.set_ylim([np.min(num) - ylim_offset, np.max(num) + ylim_offset])
    ax.set_xlabel(r'$x$')
    ax.set_ylabel(label, rotation=0)
    ax.legend(loc='best')
plt.show()

Going from left to right in the domain, it is clear that our scheme has resolved the rarefaction wave, contact discontinuity (evident in $\rho$ and $e$) and the shock wave. Compared to the exact solution, however, it is obvious that the numerical approximation is diffused and fails to capture sharp discontinuities accurately. This inaccuracy is as expected for a scheme which is only first-order accurate, since the second-order error term is diffusive by nature. Every time we advance the system in time, the numerical solution gets slightly smeared out compared to the exact one. After 60 time steps, the result is as shown. For any serious applications, high-resolution schemes with at least second-order accuracy should be considered. A few such schemes are given in [3].

[1] E. F. Toro: "Riemann Solvers and Numerical Methods for Fluid Dynamics - A Practical Introduction" (3rd ed., Springer, 2009)
[2] E. F. Toro, A. Hidalgo & M. Dumbser: "FORCE schemes on unstructured meshes I: Conservative hyperbolic systems" (Journal of Computational Physics, 2009)
[3] E. F. Toro & S. J. Billett: "Centred TVD Schemes for Hyperbolic Conservation Laws" (IMA Journal of Numerical Analysis, 2000)
Object Oriented Software - a SourceForge directory listing of object-oriented VB.NET projects:

- .Net 2.0 O/R Mapper: O/R mapper with class generator framework for RAD against relational databases. Supports multiple database platforms. Uses generics and partial classes. Includes a WinForms class generator for autogenerating O/R mapper classes based on customisable templates.
- AlarmClock: used to alert you when you need it.
- AssemblyDiff: a small .NET utility to show the difference between two assembly versions.
- Bell Log File Reader: a text log file reader.
- BindingListView: sort, filter and aggregate lists of business objects without all the boring code! This library provides a powerful "view" approach to data binding .NET objects to user interface controls.
- Blue Bird (Web Browser): a web browser useful enough for surfing the web.
- BugEye Unit Testing Framework: an XML-based unit test creation framework. Being XML-based, it can be easily translated to almost any language. The current translations are C#, Java, JavaScript, and Visual Basic. Future translations include C++, Python, Perl, and PHP.
- CLAM (Common Language Automation Machine): attempts to eliminate the rote mapping of data to objects that is prevalent in most modern development languages by breaking the bonds of relational data and providing a generic mechanism for retrieving data.
- CodeBooster: a set of data-tier generating tools written in C# that generates a stack of data access layers and objects from the existing schema in a database. Currently supported databases are MSSQL, Oracle, and MySQL.
- Command Script 4 (CSC4): an "application control" or application extension scripting language / virtual machine / compiler environment. CSC4 supports object member access, object creation and static calls to the underlying .NET Framework 2.0.
- D.NET (DDotNet): a dynamic implementation of an ORM (Object Relational Mapping) framework with business objects.
- Data Access Tier (DAT): provides access to a SQL Server data provider through an object-oriented data layer.
- Data Holder: a .NET Framework object/relational mapper. Provides typed data encapsulation and database persistence for .NET apps. It also contains a wizard for generating the data objects code and persistence code. Right now it has an implementation only for MSSQL 2000.
- Design Pattern Automation Toolkit: a toolkit to design applications using design patterns, with facilities to generate code and reverse engineer, drag-and-drop creation of UML class diagrams, and support for writing custom plug-ins for code generators and reverse engineering.
- Digester.NET: an up-to-date version of the Jakarta Digester that's popular in the Java community.
- Dynamic Method Dispatch for .NET: this library adds dynamic dispatch capability to statically typed languages like C# and VB.NET.
- Dzine: a CASE tool for OOAD. Draws use case, interaction, class, deployment and ER diagrams. Code generation in C++ and Java, and for SQL. Reverse-engineers designs from code: just put in your Java or C++ code and generate class diagrams.
- Easy C# Command Line Argument Parser: a very simple and easy to use command line argument parser library for .NET C# console applications. No need to read any documentation - just look at the provided example, which you can also use as a template. The library has only 4 methods.
- Frigg3D: a 3D game engine with DirectX 9.0 support, a BSP level system, radiosity lighting, T&L, pixel shaders, .NET support and much more.
- ILNumerics.Net: a math library for .NET. n-dimensional arrays, complex numbers, linear algebra, FFT, sorting, cell and logical arrays as well as 3D plotting classes help in developing algorithms on every platform supporting .NET. Sources available from SVN, binaries from the project site.
Get the table row index from table DOM2 - Discussion in 'Javascript' started by sudhaoncyberworld@gmail.com
I'm using Mockito 1.9.0. I want to mock the behaviour of a single method of a class in a JUnit test, so I have

final MyClass myClassSpy = Mockito.spy(myInstance);
Mockito.when(myClassSpy.method1()).thenReturn(myResults);

The problem is that in the second line, myClassSpy.method1() is actually getting called, resulting in an exception. The only reason I'm using mocks is so that later, whenever myClassSpy.method1() is called, the real method won't be called and the myResults object will be returned. MyClass is an interface and myInstance is an implementation of that, if that matters. What do I need to do to correct this spying behaviour?

Let me quote the official documentation:

Important gotcha on spying real objects! Sometimes it's impossible or impractical to use when(Object) for stubbing spies. Therefore when using spies please consider the doReturn|Answer|Throw() family of methods for stubbing.

In your case it goes something like:

doReturn(resultsIWant).when(myClassSpy).method1();

My case was different from the accepted answer. I was trying to mock a package-private method for an instance that did not live in that package:

package common;

public class Animal {
    void packageProtected() { }
}

package instances;

class Dog extends Animal { }

and the test classes

package common;

public abstract class AnimalTest<T extends Animal> {
    @Before
    public void setup() {
        doNothing().when(getInstance()).packageProtected();
    }

    abstract T getInstance();
}

package instances;

class DogTest extends AnimalTest<Dog> {
    Dog getInstance() {
        return spy(new Dog());
    }

    @Test
    public void myTest() { }
}

The compilation is correct, but when it tries to set up the test, it invokes the real method instead. Declaring the method protected or public fixes the issue, though it's not a clean solution.

The answer by Tomasz Nurkiewicz appears not to tell the whole story! NB Mockito version: 1.10.19. I am very much a Mockito newb, so can't explain the following behaviour: if there's an expert out there who can improve this answer, please feel free. The method in question here, getContentStringValue, is NOT final and NOT static.
This line does call the original method getContentStringValue:

doReturn( "dummy" ).when( im ).getContentStringValue( anyInt(), isA( ScoreDoc.class ));

This line does not call the original method getContentStringValue:

doReturn( "dummy" ).when( im ).getContentStringValue( anyInt(), any( ScoreDoc.class ));

For reasons which I can't answer, using isA() causes the intended (?) "do not call method" behaviour of doReturn to fail. Let's look at the method signatures involved here: they are both static methods of Matchers. Both are said by the Javadoc to return null, which is a little difficult to get your head around in itself. Presumably the Class object passed as the parameter is examined but the result is either never calculated or discarded. Given that null can stand for any class and that you are hoping for the mocked method not to be called, couldn't the signatures of isA( ... ) and any( ... ) just return null rather than a generic parameter* <T>? Anyway:

public static <T> T isA(java.lang.Class<T> clazz)
public static <T> T any(java.lang.Class<T> clazz)

The API documentation does not give any clue about this. It also seems to say the need for such "do not call method" behaviour is "very rare". Personally I use this technique all the time: typically I find that mocking involves a few lines which "set the scene", followed by calling a method which then "plays out" the scene in the mock context which you have staged... and while you are setting up the scenery and the props, the last thing you want is for the actors to enter stage left and start acting their hearts out... But this is way beyond my pay grade... I invite explanations from any passing Mockito high priests...

* is "generic parameter" the right term?

In my case, using Mockito 2.0, I had to change all the any() parameters to nullable() in order to stub the real call.
Both Mozilla browsers support address bar shortcuts. They currently only support one substitution, but it works pretty well for most applications. To create a shortcut, save a bookmark to the site you wish to have a shortcut to, then edit it, put a %s where the query would go, and put a name for the shortcut in the shortcut field of its properties. For example, a shortcut for the WlugWiki might look like and be invoked with the name wlug. Then you could enter "wlug MozillaNotes" in your address bar to get to this page.

Search plugins

<search
  name="WLUG Wiki"
  description="Wlug Wiki TitleSearch"
  method="GET"
  action=""
  queryEncoding="utf-8"
  queryCharset="utf-8"
>
  <input name="auto_redirect" value="1">
  <input name="s" user>
</search>

To use it, save it as wlug.src or something like that in Mozilla's searchplugins/ directory and restart the browser. Mycroft is a huge repository of search plugins for all manner of sites.

If you're sick of ads, add this to your userContent.css file mentioned above:

*"] { -moz-opacity: .2 !important; }

and restart Mozilla. You'll have to "Exit Mozilla" if you use Windows Quick Launch. Alternatively, change the line that says

-moz-opacity: .2 !important;

to

display: none !important;

to not even bother loading ads.

Hitting the Escape key will stop GIF animations in Firefox. To disable Flash animation, see the Flash and Java plugin click-to-load section below.

These settings are set in the prefs.js file. You can go to the URL about:config (can't make this an href in wiki!!) from within Mozilla to see a list of (all?) the preferences settings, and whether the default value is being used or not. The site explains in detail what some of the more useful settings do.
user_pref("font.minimum-size.x-western", 13);
pref("font.FreeType2.enable", true);
pref("font.freetype2.shared-library", "libfreetype.so.6");
pref("font.FreeType2.autohinted", true);
pref("font.FreeType2.unhinted", false);
pref("font.antialias.min", 16);
pref("font.directory.truetype.2", "/usr/share/fonts/truetype");
pref("font.scale.aa_bitmap.enable", true);
//pref("font.scale.aa_bitmap.always", true);
pref("font.scale.aa_bitmap.min", 16);

Note: fonts that Mozilla lists as having a lowercase initial letter are retrieved through X11 and don't support anti-aliasing. Debian testing and unstable have a package called mozilla-xft which contains the libraries needed to use TrueType fonts with anti-aliasing.

user_pref("ui.textSelectBackground", "#b4b2b4");
user_pref("ui.textSelectForeground", "#000000");

This gives me black text on a light grey background for text that is selected by the mouse or by searching.

To use your default MailClient instead of Mozilla Mail, use

user_pref("network.protocol-handler.external.mailto", true);

Now this doesn't seem to work in Linux, so you might want to go to the Mozex site and get Mozex, an extension that lets you use any program you like for external links, viewing source etc. (And feed it evolution mailto:%A?Subject=%S&Cc=%C&body=%B )

If you have problems with plugins, almost all of them can be solved by manually copying the right files from your installation of whatever-player into your plugin directory. Check out for which file does what. Generally, plugins need to be compiled with a similar version of g++ as Mozilla itself was compiled with (since g++ changed "name-mangling" schemes between version 2.9x and version 3.x). If you only install plugins provided by your distribution, then they have probably compiled everything with the same compiler version. This is the same with the EnigMail PGP plugin for MozillaMail - they provide two versions, one for each compiler version.
Macromedia Flash plugin: after struggling for months with the Flash plugin crashing, bringing Mozilla down with it, whenever a web page had Flash on it, I finally solved the problem. The problem was an issue between Flash 6.0 r79 and XFree86. My X config file (/etc/X11/XF86Config-4) had a colour depth that wasn't an even-byte-boundary depth. It was "DefaultColorDepth 15". When I changed the line to read "DefaultColorDepth 16", Flash started working - no more crashing. Hooray!

Mozilla-based browsers (including Galeon) don't seem to have any easy way of disabling plugins under Linux, and will only tell you how to disable them under Windows (the "plugin.scan" preference) or tell you to delete the plugin file (under Linux). But if you are using a machine that you do not administer, and it has plugins installed into system directories like /usr/lib/mozilla/plugins, then you are pretty much stuck with it. But there is a way to get your Mozilla to not use the annoying acroread PDF plugin (or any other annoying plugin): edit your $HOME/.mozilla/pluginreg.dat file. First, completely close down your browser. Open the file in an editor, and remove the lines related to the PDF (or whatever) plugin. For me, I removed the following:

/opt/Acrobat5/Browsers/intellinux/nppdf.so:$
:$
1092927951000:1:1:$
:$
nppdf.so:$
1
0:application/pdf:Portable Document Format:pdf:$

The format seems to consist of lines ending with ":$" - the filename, a blank line, a magic number of some sort, a description of the plugin, the name of the plugin, and then a number saying how many lines follow. Those following lines (indexed from 0) contain all the mime-types that the plugin handles. Now I can control how my PDF files open, instead of being forced into that horrible acroread browser plugin...

You can enable the Flash and Java plugins but not automatically load Flash animations or Java programs by using CSS and XUL. This can also be used for arbitrary types.
Copy the following into your $HOME/.mozilla/firefox/salt/chrome/userContent.css file:

/* Prevent flash animations from playing until you click on them. */
object[type="application/x-shockwave-flash"],
embed[type="application/x-shockwave-flash"], embed[src$=".swf"]
{ -moz-binding: url("resource:///res/clickToView.xml#flash"); }

/* Prevent java applets from playing until you click on them. */
object[codebase*="java"], object[type="application/java"],
embed[type="application/java"], applet[code$=".class"]
{ -moz-binding: url("resource:///res/clickToView.xml#java"); }

The file location of userContent.css will be different for other browsers: try $HOME/.galeon/mozilla/galeon/chrome or $HOME/.mozilla/default/salt/chrome/userContent.css. Next, copy the clickToView.xml file into the "res" subdirectory of the directory where Firefox is installed. The command "which firefox" can help determine the directory where Firefox is installed, and root may be needed to install the XML file. The XML file comments also give directions for installing clickToView, including hints for Windows OS users. Any Flash and Java content will be replaced by a button, and if you click the button it will start the animation. Sometimes Flash content covers other content and then shrinks to reveal the real page content. To help manage this, the button is translucent (until you put the cursor over the button) to show what is under it. There is also an X button inside that deletes the click-to-view button without playing (this capability was added on 9/21/06). clickToView.xml really should be in the chrome directory with userContent.css, but this is not yet supported. Eventually, "pro" may be available to replace "resource:///res". And "" will not work here. Keeping a copy of clickToView.xml in the chrome directory is a good idea anyway, and you may want to use a symlink to the xml file in chrome from the res directory. It is possible to load clickToView.xml from the web instead of installing a local copy, but this can be slow and the xml file may not always be available or may be modified without your knowing.
Use a local xml file if you can, but if necessary the file can be loaded from the web by replacing the moz-binding line with this:

{ -moz-binding: url(""); }

There is also a Firefox extension called FlashBlock that provides similar capabilities.

For people who want to know the keyboard shortcuts: as of version 1.4, this is in Mozilla's help: "Help" menu -> "Help Contents" -> "Mozilla Keyboard Shortcuts".

Rather than using the PRINTER or LP EnvironmentVariables to determine where to print, Mozilla inspects MOZ_PRINTER_NAME instead and falls back to a default of lp if it is unset.

If you find that Mozilla is a little too heavy for your system, try out Firefox. The current version (1.0) is very stable and much lighter on memory than Mozilla. It supports Mozilla's useful features, including tabbed browsing, popup suppression and the fact that it is, of course, based on the wonderful Gecko. It also adds a few features, including extension support (to make the default browser more "minimal") and fully customisable toolbars! Note that MozillaFirefox is a replacement for the Mozilla browser component only, and doesn't include Mail, Composer, or any of those things. However, a lightweight, XUL-based mail client called Thunderbird is a work in progress, but is still at a very early stage. MozillaFirefox will become the "official" Mozilla web browser (rather than the current, large integrated browser/mailer/chatzilla that is code-named SeaMonkey). The project's internal name was recently changed from Phoenix to Firebird to Firefox (it's explained on the MozillaFirefox page.) However, the first release is expected to be called simply "Mozilla Browser" (until a better name is found :) There is good support now for Mac OS X. You could also look at Chimera.

The IMAP spec allows IMAP servers to use whatever namespace they like to store the messages, and defines an extension (RFC 2342 - NAMESPACE extension to IMAP4) that describes how an IMAP client can discover what this is.
Now if you are like me, you like to have different folders from your Inbox that you keep mail in, and you prefer that these folders are on the same level as your Inbox, not subfolders of your Inbox. Unfortunately, due to some weird interactions between Courier-IMAP and Mozilla, this doesn't work by default. However, you can get it working with some magic in the Mozilla configuration. Click OK and exit out of the config. You must completely close Mozilla (Mail, Browser, everything) before these changes will take effect - this caught me out a few times. You can now create and use folders at the same level as the Inbox with Courier-IMAP.

--Actually, it only appears that you are doing this. Arguably the same point, but the way the data is stored on the server doesn't change. So if you fire up Mozilla on a different computer without having made the above changes, you will still get the old view of your directories.

By default Mozilla will only check the INBOX of an IMAP server for new messages; to change this behaviour:

user_pref("mail.check_all_imap_folders_for_new", true);

I had a problem where Mozilla (version 1.2.1) would hang when going to the "Helper Applications" preference. I found that if I moved the "XUL.mfasl" file out of the way (in my ~/.mozilla prefs directory) things worked again after the next Mozilla startup. Also, I had a similar problem where one day Mozilla Mail would work but any attempt to start a browser would result in it sitting in a loop allocating memory. Removing the large XUL.mfasl fixed this (Mozilla recreates this file the next time you start). This might also fix problems if Mozilla/Firefox crashes on startup after upgrading to a newer version.

Mozilla catches M-Left and M-Right, and also M-ScrollDown and M-ScrollUp, as page-forward/page-backward commands. In theory, you can bind any key to do this, via 'mozilla -remote xfeDoCommand(back)' or some similar magic, but the back and forward commands aren't actually implemented yet.
If you run sawfish, you can get Button 6 and Button 7 on your Intellimouse Explorer Pro Optical Whatever to do page forward/back by hacking at your ~/.sawfish/custom file. I added a dummy event for Button6-Click and Button7-Click (say, open xterm) via the control panel applet (sawfish-ui), and then opened my custom file and edited the event as shown below. Note that after this I wasn't really able to edit my custom bindings any more, as sawfish-ui didn't know how to deal with synthesize-event.

((synthesize-event "M-Left" (input-focus)) . "Button6-Click")
((synthesize-event "M-Right" (input-focus)) . "Button7-Click")

What this actually does is send an M-Left or M-Right on Button6/Button7 click. It works. There might be a nicer way of doing this in other window managers, or it might be possible to get Mozilla to grab button6/7 itself, I don't know. -- DanielLawson

For a more generic approach, install the IMWheel daemon and create a .imwheelrc in your user directory. To map the side buttons to alt-left and alt-right, put into it:

".*"
None, Up, Alt_L|Left
None, Down, Alt_L|Right

Finally, run 'imwheel -b 67' to map buttons 6 and 7 only. See Google or the IMWheel site for more examples. --Pradeep Sanders

Another possibility is to remap the mouse buttons using xmodmap. This was actually done under Solaris (Nevada build 30), but I see no reason why it shouldn't also work under Linux (I don't currently have a Linux machine to try it). I discovered through trial and error (and a little luck) that buttons 6 and 7 were indeed making Mozilla move back and forward through the browsing history (I didn't actually have to do any of the above steps).
Using xev to debug the mouse buttons (I have a Logitech MX700), I found the following:

Button  Description
1       Normal left button
2       Press the wheel
3       Normal right button
4       Wheel up
5       Wheel down
6       Wheel up fast
7       Wheel down fast
8       Back thumb button
9       Forward thumb button
10      The one on the top
11      Don't know

6 and 7 are generated if the mouse wheel is spun very quickly, and they make Mozilla move through the browse history. Also, for some reason, the USB driver seems to detect 11 buttons - maybe it's a bug in the driver (but it doesn't seem to do any harm). Obviously, I'd like buttons 8 and 9 to make Mozilla move through the browse history, so I swapped them with buttons 6 and 7 using xmodmap:

xmodmap -e "pointer = 1 2 3 4 5 8 9 6 7 10 11"

-- Allan Black

I was having problems where somehow typeahead find would become enabled whenever I tried to type into the address bar, and it was near on impossible to get out of the typeahead find mode. To disable it, set accessibility.typeaheadfind to false. This appears to be a bug in Mozilla (or X) - occasionally the focus gets stuck on the wrong object and the main page gets the focus, even if you click on the address bar or in a form field. If you are using tabs, changing to another tab and back clears the problem, but it is still an annoying bug. The point is I don't think it's typeaheadfind's fault... I haven't seen it in moz 1.4 or later, although I still see it in 1.2.1 -- JohnMcPherson

I had to set MAXPERIP=5 in /etc/courier/imapd (on a Debian 3.0 server). Otherwise Courier-IMAP defaults to allow only 4 connections per IMAP client and that causes Thunderbird (and Netscape Messenger, IIRC) to stop responding. -- FrEd

-- This is actually configured within the mail client (Thunderbird, Netscape Messenger, Mozilla Mail, etc). Go to the Account Settings, then Server Settings, press the 'Advanced' button, and there is a field for 'Maximum number of server connections to cache', which is set to 5 by default.
-- DanielLawson

Linux Firefox comes with a GTK2 file picker that is very slow. You may see a large delay when starting a download or using "Save Image As" or "Save Link As". The delay increases if there are many files in the target directory, and KDE users may see more delay than GNOME users. The GTK2 file picker also selects the target file name, so you cannot middle click to paste previously selected text into the name. You can recover the built-in XUL file picker: for Firefox 2 and above, type "about:config" into the URL bar and find the preference called "ui.allow_platform_file_picker". Double-click on the preference to change the value to false. For more details, see the MozillaZine KnowledgeBase. (For Firefox 1.5, you need to use different instructions.)