I want to make a "duel" between two "units". I wrote a class `duel` that is constructed from two `unit`s. But some kinds of `unit` are special (inherited from `unit`), like heroes, bosses, etc., and they want to use special strikes during battle. The `duel` class shouldn't need to know who is a hero and who is a plain unit. The code looks like this:

```cpp
#include <iostream>

class unit {
public:
    unit() {};
    virtual void make_hit() { std::cout << "pure hit\n"; }
};

class hero : public unit {
public:
    hero() : unit() {};
    void make_hit() { std::cout << "SUPER hit\n"; }
};

class duel {
    unit *a, *b;
public:
    duel(unit _a, unit _b) : a(&_a), b(&_b) {};
    void start() {
        a->make_hit();
        b->make_hit();
    }
};

int main() {
    duel(unit(), hero()).start();
    return 0;
}
```

I have two main problems. First, I store pointers to the constructor's by-value parameters; those objects are destroyed when `duel::duel()` finishes, so the pointers dangle. Second, my hero is sliced into a plain `unit` and never uses "SUPER hit". Is it possible to fix this in an elegant way (without changing the call in `main()`)?
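A possible fix, sketched below (my suggestion, not code from the question): have `duel` store const references instead of pointers to by-value copies. References don't slice, so virtual dispatch still reaches `hero::make_hit`, and the temporaries created in `main()` live until the end of the full expression, which includes the `.start()` call. One detail is my own change for illustration: `make_hit` returns the hit text instead of printing it, so the behavior is easy to observe.

```cpp
#include <iostream>
#include <string>

class unit {
public:
    virtual ~unit() = default;
    // Returning the text (instead of printing it) keeps the demo easy to check.
    virtual std::string make_hit() const { return "pure hit"; }
};

class hero : public unit {
public:
    std::string make_hit() const override { return "SUPER hit"; }
};

class duel {
    // Const references: no copies, so no slicing and no pointers
    // to destroyed parameter copies.
    const unit &a, &b;
public:
    duel(const unit& _a, const unit& _b) : a(_a), b(_b) {}
    std::string start() const {
        return a.make_hit() + "\n" + b.make_hit() + "\n";
    }
};
```

The call in `main()` stays as in the question: `std::cout << duel(unit(), hero()).start();`. One caveat: the references are only valid for the full expression in which the temporaries are created; storing a `duel` built from temporaries in a variable and calling `start()` later would dangle again.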
http://www.developersite.org/1002-137-c2b2b
Twig – the Most Popular Stand-Alone PHP Template Engine

By Claudio Ribeiro. This article was peer reviewed by Wern Ancheta. Thanks to all of SitePoint's peer reviewers for making SitePoint content the best it can be!

Opening index.php in the browser (by visiting localhost or homestead.app, depending on how you've set up your hosts and server) should produce the following screen now:

But let's go back and take a closer look at our template code. There are two types of delimiters: `{{ ... }}` is used to print the result of an expression or an operation, and `{% ... %}` is used to execute statements like conditionals and loops. These delimiters are Twig's main language constructs, and are what Twig uses to "inform" the template that it has to render a Twig element.

Layouts

In order to avoid the repetition of elements (like headers and footers) in our templates, Twig offers us the ability to nest templates inside of templates. These are called blocks. To exemplify this, let's separate actual content from the HTML definition in our example. Let's create a new HTML file and call it layout.html:

```html
<!DOCTYPE html>
<html lang="pt-BR">
<head>
    <meta charset="UTF-8">
    <title>Tutorial Example</title>
</head>
<body>
    {% block content %}
    {% endblock %}
</body>
</html>
```

We created a block called content. What we are saying is that every template that extends layout.html may implement a content block that will be displayed in that position. This way we can reuse the layout multiple times without having to rewrite it. In our case, the index.html file will now look like this:

```twig
{% extends "layout.html" %}

{% block content %}
    ...
{% endblock %}
```

Twig also allows us to render just a single block. To do that we need to first load a template and then render the block.
```php
$template = $twig->load('index.html');
echo $template->renderBlock('content', array('products' => $products));
```

At this point, we still have the same page, but we reduced its complexity by separating contextual blocks.

Cache

The Environment object can be used for more than just loading templates. If we pass the cache option with a directory associated, Twig will cache the compiled templates so it avoids the template parsing in subsequent requests. The compiled templates will be stored in the directory we provided. Be aware that this is a cache for compiled templates, not evaluated ones. What this means is that Twig will parse, compile and save the template file. All subsequent requests will still need to evaluate the template, but the first step is already done for you. Let's cache the templates in our example by editing our bootstrap.php file:

```php
$twig = new Twig_Environment($loader, ['cache' => '/templates/cache']);
```

Loops

In our example we've already seen how a loop is done using Twig. Basically, we use the for tag and assign an alias for each element in the specified array. In our case, we assigned the alias product for our products array. After that, we can access all the attributes of each array element by using the `.` operator. We use the endfor tag to indicate the end of our loop. We can also loop through numbers or letters using the `..` operator, just like the following:

```twig
{% for number in 0..100 %}
    {{ number }}
{% endfor %}
```

or, for letters:

```twig
{% for letter in 'a'..'z' %}
    {{ letter }}
{% endfor %}
```

This operator is just syntactic sugar for the range function, which works just like the native PHP range function. Also useful is the option to add a condition to a loop. With a condition, we're able to filter which elements we want to iterate through.
Imagine we want to iterate through all products whose value is less than 250:

```twig
<tbody>
    {% for product in products if product.value < 250 %}
    <tr>
        <td>{{ product.name }}</td>
        <td>{{ product.description }}</td>
        <td>{{ product.value }}</td>
        <td>{{ product.date_register|date("m/d/Y") }}</td>
    </tr>
    {% endfor %}
</tbody>
```

Conditionals

Twig also offers us conditionals in the form of the tags if, elseif, if not, and else. Just like in any programming language, we can use these tags to filter for conditions in our template. Imagine that, in our example, we only want to show products with a value above 500:

```twig
<tbody>
    {% for product in products %}
    {% if product.value > 500 %}
    <tr>
        <td>{{ product.name }}</td>
        <td>{{ product.description }}</td>
        <td>{{ product.value }}</td>
        <td>{{ product.date_register|date("m/d/Y") }}</td>
    </tr>
    {% endif %}
    {% endfor %}
</tbody>
```

Filters

Filters allow us to filter what information is passed to our template and in which format it is shown. Let's look at some of the most used and important ones. The full list of Twig filters can be found here.

Date and date_modify

The date filter formats a date to a given format, as we can see in our example:

```twig
<td>{{ product.date_register|date("m/d/Y") }}</td>
```

We are showing our date in a month/day/year format. On top of the date filter, we can change the date with a modifier string using the date_modify filter. For example, if we wanted to add a day to our date we could use the following:

```twig
<td>{{ product.date_register|date_modify("+1 day")|date("m/d/Y") }}</td>
```

Format

The format filter formats a given string by replacing all the placeholders. For example:

```twig
<td>{{ "This product description is: %s"|format(product.description) }}</td>
```

Striptags

The striptags filter strips SGML/XML tags and replaces adjacent white space with one space:

```twig
{{ "<p>Hello World</p>"|striptags }}
```

Escape

Escape is one of the most important filters. It escapes a string for safe insertion into the final output.
By default, it uses the HTML escaping strategy, so

```twig
{{ products.description|escape }}
```

is equivalent to

```twig
{{ products.description|escape('html') }}
```

The js, css, url and html_attr escaping strategies are also available. They escape the string for the JavaScript, CSS, URI and HTML attribute contexts respectively.

Debug

Lastly, let's take a look at debugging. Sometimes we need to access all the information on a template variable. For that effect, Twig has the dump() function. This function is not available by default. We must add the Twig_Extension_Debug extension when creating our Twig environment:

```php
$twig = new Twig_Environment($loader, array('debug' => true));
$twig->addExtension(new Twig_Extension_Debug());
```

This step is needed so we don't accidentally leak debug information on a production server. After configuring it, we can just use the dump() function to dump all the information about a template variable:

```twig
{{ dump(products) }}
```

Conclusion

Hopefully, this article has given you a solid base in the fundamentals of Twig, so you can get your projects started right away! If you want to go deeper into Twig, the official website offers very good documentation and a reference that you can consult. Do you use a template engine? What do you think of Twig? Would you compare it with popular alternatives like Blade, or even Smarty?
https://www.sitepoint.com/twig-popular-stand-alone-php-template-engine/
Question: And if so, under what circumstances? The Javadoc and the JPA spec say nothing.

Solution 1: You are right, the JPA specification says nothing about it. But the book Java Persistence with Hibernate, 2nd edition, says: "If the query result is empty, a null is returned". Hibernate's JPA implementation (Entity Manager) returns null when you call query.getResultList() with no result.

UPDATE: As pointed out by some users, it seems that a newer version of Hibernate returns an empty list instead.

Solution 2: If the specs said it couldn't happen, would you believe them? Given that your code could conceivably run against many different JPA implementations, would you trust every implementer to get it right? No matter what, I would code defensively and check for null. Now the big question: should we treat null and an empty List as synonymous? This is where the specs should help us, and don't. My guess is that a null return (if indeed it could happen) would be equivalent to "I didn't understand the query", and an empty list would be "yes, I understood the query, but there were no records". You perhaps have a code path (likely an exception) that deals with unparsable queries; I would tend to direct a null return down that path.

Solution 3: Contrary to Arthur's post, when I actually ran a query which no entities matched, I got an empty list, not null. This is using Hibernate, and is what I consider correct behaviour: an empty list is the correct answer when you ask for a collection of entities and there aren't any.

Solution 4: Of course, if you test the result set with Jakarta's CollectionUtils.isNotEmpty, you're covered either way.

Solution 5: If you take a close look at org.hibernate.loader.Loader (4.1), you will see that the list is always initialized inside the processResultSet() method (doc, source):

```java
protected List processResultSet(...) throws SQLException {
    final List results = new ArrayList();
    handleEmptyCollections( queryParameters.getCollectionKeys(), rs, session );
    ...
    return results;
}
```

So I don't think it will return null now.
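A defensive wrapper along the lines of Solution 2 might look like the sketch below. The class and method names are made up for illustration; the point is simply to normalize a possibly-null result list once, so the rest of the code never has to care which JPA implementation is underneath.

```java
import java.util.Collections;
import java.util.List;

public class QueryResults {
    // Normalize a possibly-null result list so callers can treat
    // "no results" uniformly regardless of the JPA implementation.
    public static <T> List<T> orEmpty(List<T> results) {
        return (results == null) ? Collections.<T>emptyList() : results;
    }
}
```

Typical usage would then be `List<Order> orders = QueryResults.orEmpty(query.getResultList());`, after which `orders.isEmpty()` is safe to call either way.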
http://www.toontricks.com/2018/02/tutorial-can-javaxpersistencequerygetre.html
Today at TechEd, I had the opportunity to talk with a number of developers and IT Pros about some of the new ways in which you can use Microsoft's cloud platform, Windows Azure, to build some very compelling solutions. I covered four areas in the talk. The following slide-deck provides an overview of these integrations and, more generally, of the talk. Following the slide-show, I've added some color commentary on each of these areas, which I'll expand on in future blog-posts.

TechEd Talk: Why SharePoint and Windows Azure are Just Plain Better Together (June 13, 2012)

Synchronizing Data from On-premises to the Cloud

This is an interesting area that lets us sync data we are storing in SQL Server on-premises (which I was using in an external list via BCS in SharePoint) to a SQL Database instance sitting in Windows Azure. It's interesting because one of the key concerns today is data protection and sovereignty. For example, the screen-shot below provides a snapshot of my Data Sync Group that takes data from my on-premises SQL Server and syncs it with SQL Database. You can see on the right the "Edit Dataset" button that enables you to edit the columns you want to expose to your SQL Database. Once you've got your data into your SQL Database, you can do any number of things, such as building a WCF wrapper, REST service, etc. to expose that data to any number of endpoints and clients. This is where the cross-device and SharePoint Online (SP-O) story comes into play: through a REST service, for example, you can expose the data from the SQL Database to a phone, tablet, Win 8, or SP-O application. To illustrate this, I created a simple REST service that is consumed by a Windows Phone 7.5 application. The code for this was straightforward and was created using an ASP.NET application with a WCF Data Service template, which I then deployed to Windows Azure.
```csharp
// ...
namespace RestfulPhoneApp
{
    public partial class MainPage : PhoneApplicationPage
    {
        private TechEdEntities context;
        private readonly Uri techedURI = new Uri("");
        private DataServiceCollection<Speaker> speakers;

        public MainPage()
        {
            InitializeComponent();
            context = new TechEdEntities(techedURI);
            speakers = new DataServiceCollection<Speaker>(context);
            var query = from spkrs in context.Speakers select spkrs;
            speakers.LoadCompleted += new EventHandler<LoadCompletedEventArgs>(speakers_LoadCompleted);
            speakers.LoadAsync(query);
        }

        void speakers_LoadCompleted(object sender, LoadCompletedEventArgs e)
        {
            if (e.Error == null)
            {
                if (speakers.Continuation != null)
                {
                    speakers.LoadNextPartialSetAsync();
                }
                else
                {
                    this.LayoutRoot.DataContext = speakers;
                }
            }
            else
            {
                MessageBox.Show(string.Format("An error has occurred: {0}", e.Error.Message));
            }
        }
    }
}
```

The net with this demo was the ability to first synchronize the data from on-premises (the same data I was using to populate and manage data within an external list within my on-premises instance of SharePoint) to SQL Database in Windows Azure, which can then be consumed in a variety of ways/clients, as shown through the WP7.5 app above. For more information on Data Sync, download the latest Windows Azure Training Kit and walk through the Data Sync module.

Leveraging Windows Azure Media Services

Azure Media Services is very cool; it provides you with a way to manage media workflow via a set of services sitting in Windows Azure. And while this workflow is tangential to SharePoint, the time you save and where the finished application ends up make media preparation and deployment for SharePoint very easy. The example I showed earlier today first leveraged the services in Windows Azure Media Services to encode a WMV file to MP4 (to illustrate one service to use), and then leveraged the resulting MP4 file that was stored in BLOB storage.
I then created an SMF player (which is a Silverlight application) with a link to my uploaded and encoded MP4 video, which was then playable in a Silverlight web part that I created in SP-O (this would also work the same in SharePoint on-premises). To do this, I created a new SMF player application, amended the video link manifest (the link to the video), built the XAP, and then, as mentioned above, leveraged the XAP within a Silverlight web part.

```xml
<UserControl x:...>
    <Grid x:...>
        <smf:SMFPlayer>
            <smf:SMFPlayer.Playlist>
                <media:PlaylistItem ... />
            </smf:SMFPlayer.Playlist>
        </smf:SMFPlayer>
    </Grid>
</UserControl>
```

The Windows Azure Media Services are a new set of services, and there's some great starter information for developers in the MSDN SDK. To download the SMF player, which has some video how-to tutorials and docs, visit Codeplex here. Overall, there's a ton of potential here, both in video production and in integrating the finished production video into SharePoint or SharePoint Online.

Leveraging the New Virtual Machine Role for SharePoint

This is one of the biggest announcements that's landed in the last week, and in the various sessions, keynote activities and other chalk-talks, SharePoint has had a mention as a workload that is supported. Point of fact is that SharePoint is one of the core workloads that is supported on Windows Azure VM (also referred to more generally as Infrastructure as a Service, IAAS). This is important because SharePoint has dependencies on Active Directory and SQL Server, which are also core workloads that are supported on the new Virtual Machine role. Using the new Virtual Machine (VM), you can do a couple of things: You can also build a standalone SharePoint server, which essentially is a self-contained server sitting in the cloud, or you can create a SharePoint farm; Paul Stubbs spoke on this at TechEd as well.
I used the standard Windows Server 2008 R2 SP1 image in the gallery to do this (see below), and then remoted into the image and added the appropriate software. In today's session, I walked through how to create a new Virtual Machine that was a standalone SharePoint server. I then built and customized a couple of web parts, deployed them to the SharePoint site collection, and then opened up the site for audience visibility using Anonymous Access to let the audience join in on the fun. I'll be blogging more on this topic, as there's a ton to write about here. In the meantime, to get started take a look at this article here, and then make sure you get yourself sorted out with an account by going here. Lots more to come here!!!

Office 365, SharePoint Online & Windows Azure

I'm not going to talk too much about this; I've spoken a lot on this topic, and you can find some info in my past blogs. The net net is I walked through how to use Windows Azure WCF services to connect a LOB app (using SQL Database) to BCS and SP-O. A good blog-post on this is here.

I am very excited about the possibilities around Windows Azure and SharePoint and will continue to post on these areas, focusing on the above and other areas I discover. Happy coding!

Steve
@redmondhockey
http://blogs.msdn.com/b/steve_fox/archive/2012/06/13/sharepoint-and-windows-azure-data-sync-media-services-amp-virtual-machines.aspx
Build Functions to Easily Perform Repeated Operations

Feb 18 • 7 min read

Key Terms: functions, lists, loops, math

Functions allow you to store reusable logic.

Function Terminology

```python
type(32)
```

```
int
```

The name of the function is the word that precedes the parentheses. The expression in parentheses is the argument of the function. Functions don't need arguments; functions can contain multiple arguments. The result of this function is outputted below the function call and is called the return value. Every function returns a value.

In the statement above, our function named type takes in an argument of 32 and returns int - an integer. type is a function built into the Python standard library, so the function has already been defined for us. In the statement above, we call the function type to perform the function's computation and return the result. Execution of the return statement inside a function exits the function.

Bike Trips Example

Below is a sample of data for bike trips as a list of lists. Each inner list holds the data for a trip, formatted as [duration in seconds, date].

```python
bike_trips = [[475, '2018-02-18'],
              [825, '2018-02-18'],
              [1034, '2018-02-18'],
              [980, '2018-02-18'],
              [1350, '2018-02-19'],
              [1880, '2018-02-19'],
              [1950, '2018-02-19'],
              [1530, '2018-02-19']]
```

We want to answer these questions:

- What is the average trip duration on any given day?
- How many trips were taken on any given day?

In terms of Python, there are 3 components we want to create:

- Sum of trip durations on a day
- Count of trips on a day
- Average trip duration on a day

We can create our own functions with logic for each component.

Sum of trip durations on a day

def is a Python keyword that indicates the start of a function definition. The name following def is the function name. Python recommends lowercase letters and underscores between words - similar to naming variables. An argument passed to sum_seconds_biked_day is assigned to a variable called a parameter.
We use this parameter in the body of our function. Inside our functions, you'll see text in triple quotes. These triple quotes are docstrings - plain text as comments to document the logic of our function.

```python
# global variables below can be used in any function
index_trip_duration_seconds = 0
index_trip_date = 1

def sum_seconds_biked_day(ride_date):
    """
    Find the sum of seconds biked on a given day

    :param ride_date: string in format year-month-day
    :returns sum_trips_seconds: sum of seconds biked on a single day
    """
    sum_trips_seconds = 0
    for trip in bike_trips:
        if trip[index_trip_date] == ride_date:
            sum_trips_seconds += trip[index_trip_duration_seconds]
    return sum_trips_seconds
```

Count of trips on a day

```python
def count_bike_trips_day(ride_date):
    """
    Count bike trips on a given day

    :param ride_date: string in format year-month-day
    :returns count: count of unique bike trips in a day
    """
    count = 0
    for trip in bike_trips:
        if trip[index_trip_date] == ride_date:
            count += 1
    return count
```

Average trip duration on a day

An average computation is the sum of events divided by the count of events. We can call our two previously created functions inside a new function.

```python
def average_trip_duration_seconds_day(ride_date):
    """
    Compute average trip duration in seconds

    :param ride_date: string in format year-month-day
    :returns average_trip_duration_in_seconds: average duration of trips in a day - units are seconds
    """
    average_trip_duration_in_seconds = sum_seconds_biked_day(ride_date) / count_bike_trips_day(ride_date)
    return average_trip_duration_in_seconds
```

In average_trip_duration_seconds_day, the flow of execution is to call the function sum_seconds_biked_day and divide the return value by the return value of count_bike_trips_day.
Answer questions for February 18th, 2018

```python
sum_seconds_biked_day('2018-02-18')
```

```
3314
```

```python
count_bike_trips_day('2018-02-18')
```

```
4
```

```python
average_trip_duration_seconds_day('2018-02-18')
```

```
828.5
```

Present data in a better readable format

It's weird to say the average trip duration for February 18th, 2018 is 828.5 seconds. You'd more naturally express it in minutes and seconds. Let's design a function to present the average trip durations in a more typical format such as 38 minutes and 40 seconds. I'll use Python operations for floor division and modulo.

```python
seconds_in_a_minute = 60

def convert_seconds_to_minutes_seconds_readable_format(total_seconds):
    """
    Convert a seconds value into a clean human readable format of X minutes and Y seconds

    :param total_seconds: seconds value
    :returns readable_statement: human readable string of minutes and seconds
    """
    minutes = total_seconds // seconds_in_a_minute
    seconds = total_seconds % seconds_in_a_minute
    readable_statement = str(minutes) + " minutes and " + str(seconds) + " seconds"
    return readable_statement
```

We can pass a function call as an argument to a function too.

```python
convert_seconds_to_minutes_seconds_readable_format(average_trip_duration_seconds_day('2018-02-18'))
```

```
'13.0 minutes and 48.5 seconds'
```

Why are Functions Important

Creating a new function gives you an opportunity to coordinate similar statements together, which makes your code easier to read and debug. Our seconds conversion definition utilizes similar concepts with seconds, minutes and math logic around time conversions.

Functions can make a program smaller by eliminating repetitive code. You'll often hear in programming the acronym DRY - don't repeat yourself. We wrapped count_bike_trips_day in a function and can utilize it to calculate just the count of trips in a day as well as the average trip duration of rides in a day. We reuse our counting logic. It is much more concise to call a function twice than to copy and paste the body!
Using multiple functions in a program allows you to easily write logic in components - step by step - just as we did with our sum, count and average functions. I find it's quicker to get to the final solution by writing out small functions and utilizing them together. Well-designed functions can be used in multiple programs. Our convert_seconds_to_minutes_seconds_readable_format can be used to analyze the time it takes to repair bikes, the amount of time people spend in a bike store, and more.
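As a side note (not part of the original tutorial), Python bundles the floor-division and modulo pair used above into a single built-in, divmod. A minimal sketch of the same conversion:

```python
def to_minutes_seconds(total_seconds):
    # divmod returns (quotient, remainder), i.e. (minutes, leftover seconds)
    minutes, seconds = divmod(total_seconds, 60)
    return str(minutes) + " minutes and " + str(seconds) + " seconds"

print(to_minutes_seconds(828.5))  # 13.0 minutes and 48.5 seconds
```

Same result as convert_seconds_to_minutes_seconds_readable_format, with the two arithmetic steps collapsed into one call.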
https://dfrieds.com/python/functions
EPIC5-2.0

EPIC5-1.8

*** News 01/30/2016 -- /WINDOW LOGFILE and /SET LOGFILE more like /LOG FILENAME

Historically, changing a logfile name (with /WINDOW LOGFILE and /SET LOGFILE) does not affect the log status. This leads to unexpected behavior if you do

	/WINDOW LOG ON LOGFILE foo.txt

because /WINDOW LOGFILE only changes the filename the *next* time you open the log, not affecting the currently open log.

The behavior of /LOG FILENAME is more in line with what people said they expected. If you change /LOG FILENAME while the log is ON, then it will 1) close the existing log, 2) change the filename, and 3) re-open the log under the new name.

The behavior of /WINDOW LOGFILE and /SET LOGFILE has been changed to match the behavior of /LOG FILENAME -- changing the logfile name while it is open will close the existing log and open a new one.

EPIC5-1.6

*** News 01/08/2016 -- Per-server vhosts now restrict protocol (ipv4/ipv6)

Historically, the client tries to connect to the server using the addresses as they are returned, in order. (This is a great thing for round-robin or geographic-aware dns resolvers.) However, if you have a per-server vhost, you probably intend that epic use that vhost to connect to the server. But what happens if your vhost is ipv4 only or ipv6 only, and the first address to the server is for the other protocol? Historically, epic will just go ahead and connect without your vhost. You've been able to correct this behavior by specifying the protocol family explicitly:

	/server irc.foo.com:proto=v6:vhost=irc.leet6.com

Some folks said it violated POLA, so here's a new rule: "If you set a per-server vhost, then that server can only be connected to if the vhost can be used. If this means that no addresses can be used, then you will not be able to connect to the server until you clear the vhost."
This rule does not apply to you if you're using /HOSTNAME, but only if you are doing something like

	/server irc.foo.com:vhost=irc.leet6.com

*** News 01/01/2016 -- Can now /query an exec process that doesn't exist

Previously, you were forbidden from setting up a /query to an /EXEC process that didn't exist. That set up a race condition between running an /EXEC process and being able to corral its output into a window via the query. So now you can /query an exec process before you start it. If you try to send a message to it before you fire it up, the user will see a diagnostic telling them that the message could not be sent to a non-existing exec process.

*** News 01/01/2016 -- New /window operation, /window log_mangle

This allows you to overrule /SET MANGLE_LOGFILES for logs that you create with /WINDOW LOG ON (only!) Example:

	window logfile "my.windowlog" mangle NORMALIZE,COLOR log on

*** News 01/01/2016 -- New /window operation, /window log_rewrite

This allows you to overrule /SET LOG_REWRITE for logs that you create with /WINDOW LOG ON (only!) Example:

	window logfile "my.windowlog" rewrite "HOOBOO $1-" log on

*** News 01/01/2016 -- Refinement to $pad(), $[len]var, and $leftpc()

These functions do not behave graciously since our conversion to UTF-8, since they count code points rather than columns. It just seems sensible to redefine the behavior of these functions based on columns, which is what everybody probably expects them to do.

Function: $pad(len char string)
Summary: Extend, but do not truncate, do not justify 'string'
Definition: Return 'string' so that it takes up at least 'len' columns. If it is too short, it will be padded with 'char's until it is 'len' columns wide. If it is too long, it is NOT truncated.

Function: $leftpc(len string)
Summary: Truncate, but do not extend, do not justify 'string'
Definition: Return the first 'len' columns of 'string'. If it is too short, it will NOT be padded. If it is too long, it will be truncated on the right end.
Function: $[len]var
Summary: Extend, truncate, and justify $var
Definition: Return $var so that it takes up EXACTLY 'len' columns. If it is too short, it will be padded with /set pad_char until it is 'len' columns wide. If it is too long, it is truncated. If 'len' is > 0, then the string is left justified, and padding (or truncation) happens on the right end. If 'len' is < 0, then the string is right justified, and padding (or truncation) happens on the left end.

Function: $fix_width(len justify pad string)
Summary: Extend, truncate, and justify 'string'
Definition: Return 'string' so it takes up EXACTLY 'len' columns. If it is too short, it will be padded with 'pad' until it is 'len' columns wide. If it is too long, it is truncated. If 'justify' is "l" then the string is left justified, and padding (or truncation) happens at the right. If 'justify' is "c" then the string is centered, and padding (or truncation) happens equally at both ends. If 'justify' is "r" then the string is right justified, and padding (or truncation) happens at the left.

*** News 09/15/2015 -- New /ON, /ON RAW_IRC_BYTES

This new /ON, /ON RAW_IRC_BYTES, is the same as /ON RAW_IRC, except $* is the _raw unmodified bytes_ received from IRC. Specifically, $* is not guaranteed to be a UTF-8 string, so functions that expect a UTF-8 string won't work. You should not try to /ECHO the $* from this /ON. Just like /on raw_irc, if you catch this hook, you will suppress normal handling of the event:

	/on ^raw_irc_bytes * {echo nothing further happens}

EPIC5-1.4

*** News 08/25/2015 -- Improved automargin support

You can now use automargins to get better wrapping of long urls.

1. Use a terminal emulator that supports automargins (they pretty much all do)
2. Set your TERM env variable to something that supports automargins

	export TERM=vt102am

   should do the job
3. Restart EPIC after you've updated the TERM. It's not enough to just change the TERM and re-attach screen.
   It's a good idea to check this when you're upgrading.
4. /SET -CONTINUED_LINE to get rid of the + thingee
5. /SET FIRST_LINE � or something if you don't use /set output_rewrite to prefix every line
6. /SET WORD_BREAK<space><space><enter> URLs contain commas and dots and semicolons, and you don't want epic to word break on anything other than a space.

That should do it! Your display will now use the final column on the display, and urls should be unmangled when you copy them.

If you "forget" to /SET -CONTINUED_LINE or /SET WORD_BREAK and want to rebreak your windows in order to take advantage of this after the fact, you can always just do /window rebreak_scrollback.

*** IMPORTANT! ***
Note that you _must_ be running with a TERM supporting automargins or this will not change things! You can check this by doing

	/eval echo $getcap(TERM enter_am_mode)

If it returns blank, then your TERM does not support automargins.
*** IMPORTANT! ***

*** News 08/25/2015 -- New /SET, /SET FIRST_LINE

For those of you who use /SET OUTPUT_REWRITE to prefix every line of output with something (like a timestamp), you can ignore this. If you /SET FIRST_LINE, the string will be prefixed before every logical line of output. This is great if you /SET -CONTINUED_LINE so you can continue to tell what lines are what.

EPIC5-1.2

EPIC5-1.1.11

*** News 07/20/2015 -- New operation: @serverctl(SET refnum UMODE ...)

You've never been able to SET your UMODE, because that never made much sense. But it was pointed out that the UMODE is used when you reconnect to establish your initial usermode, and they wanted to be able to control that. So now, when the server is disconnected, you can change its umode, which will be used when you next connect. Try something like this:

	on server_lost * {defer @serverctl(SET $0 UMODE i)}

*** News 07/09/2015 -- Don't do /redirect @W<refnum> <command>

You can /msg @W4 <stuff>, for example, to do an /xecho -w 4 <stuff>.
But you can't use this with /REDIRECT because the redirection itself causes another redirect, and it just gets stuck in an infinite loop. The client can't protect you from doing this without a rewrite, so, don't do this. ;-)

*** News 07/09/2015 -- New flag to /ENCRYPT: /ENCRYPT -REMOVE

It was confusing to remove /encrypt's before -- you had to specify all the arguments but not the password:

	/ENCRYPT hop -blowfish

And if you had a PROG crypto, you couldn't remove it at all!

	/ENCRYPT hop password program...

(How do you not specify the password in this case?) Anyways, there is now /ENCRYPT -REMOVE which lets you unambiguously remove an encrypt for a target:

	/ENCRYPT -REMOVE hop

*** News 07/09/2015 -- Only one /ENCRYPT per target now.

Traditionally, ircII has only had one cipher type for /ENCRYPT. EPIC added more cipher types along the way, and it was possible to set up multiple /ENCRYPT sessions for the same person. However, as I reflect upon this, this isn't a reasonable thing to do, because if you do

	/ENCRYPT hop -blowfish password1
	/ENCRYPT hop -aessha password2

and then you /msg hop, which one should it use? So I've changed it so when you change the encryption type (such as in the 2nd line above), it will REPLACE the first one -- you will only be able to have one cipher session per target. If this is a problem -- if I broke something for you, please let me know so we can address your needs.

*** News 04/14/2015 -- New function, $chankey(servref #channel)

The $chankey() function returns the key for a specified channel. I created this because $key() doesn't allow you to specify 'servref' but rather uses from_server, which means you have to wrap it in an /xeval -s to use a non-default server. ick.

Arguments:
	$0 - A server refnum
	$1 - A channel name

Return Value:
	empty string - Either 1.) 'servref' not provided, or 2.) '#channel' not provided, or 3.) You're not on '#channel' on servref, or 4.)
#channel doesn't have a key anything else - The mode +k key for #channel on servref. *** News 04/14/2015 -- SSL info available via $serverctl(GET refnum SSL*) You can now get information about live SSL connections: $serverctl(GET refnum SSL_CIPHER) The encryption being used, something like "DHE-RSA-AES256-SHA" $serverctl(GET refnum SSL_VERIFY_RESULT) 0 if the certificate was verified successfully. Any other value if it did not. These values match up to the verify(1) man page. Most notable, error code 20 means that OpenSSL could not find your local CA file (see /SET SSL_ROOT_CERTS_LOCATION below) $serverctl(GET refnum SSL_PEM) This is the PEM (base64) format of the server's certificate. You could save this to see if it's changed. ;-) $serverctl(GET refnum SSL_CERT_HASH) This is the SHA1 digest of the server's certificate. It is converted into a byte string like AB:CD:EF:01:02:... $serverctl(GET refnum SSL_PKEY_BITS) This is the number of bits that the server's certificate said that the server's public key uses. $serverctl(GET refnum SSL_SUBJECT) This is the hostname of the subject of the certificate. In theory, this is supposed to be the server's hostname. This could be a wildcard string. $serverctl(GET refnum SSL_SUBJECT_URL) This is the SSL_SUBJECT, but passed through URL encoding. It's useful because the SSL_SUBJECT will have spaces, and this results in just one word. $serverctl(GET refnum SSL_ISSUER) This is the Certificate Authority (CA) that issued the server's certificate. $serverctl(GET refnum SSL_ISSUER_URL) This is the SSL_ISSUER, but passed through URL encoding. It's useful because the SSL_ISSUER will have spaces, and this results in just one word. $serverctl(GET refnum SSL_VERSION) This is the version of SSL you're using. It should either be TLSv1 (good) or SSLv3 (bad). 
With all of the above information, I hope someone scripts a nice SSL
executive script that caches the certificate information, tells you
whether the connection should be trusted, decides whether the ssl
version is ok, the public key bits are ok, all that stuff.

*** News 04/10/2015 -- New /TIMER argument, /TIMER -SNAP

A "snappable" timer fires off at the "top of the interval".  That
means it runs every time ($time() % <interval> == 0)

If you snap to 60, it will run at the top of every minute, just like
mail checking, clock updating, etc.  If you snap to 3600, it will run
at the top of every hour.

Example:
        /timer -snap -repeat -1 -refnum hourly 3600 {
                echo I run at the top of every hour
        }

*** News 04/10/2015 -- New script, /LOAD find_ssl_root_certs

This script is loaded by /load global, and you should load it too, if
you're not using /load global; or you should implement a similar
functionality to get /SET SSL_ROOT_CERTS_LOCATION pointing to the
right place for you.

Otherwise, your SSL certificates won't authenticate, and $4 in
/on ssl_server_cert will always be 0, even if the cert is actually
legitimate.

Or maybe all of what I just said doesn't matter to you.

*** News 04/10/2015 -- New /SET, /SET SSL_ROOT_CERTS_LOCATION

In order for SSL to verify certificates, it needs to have a copy of
the root certificate authorities.  This is usually a file named
"ca.bundle" or "ca-root-nss.crt".

You need to /set this variable to wherever your openssl compatible
root ca authority certificates are.  It would help if I understood
what I'm talking about more.  Anyways, the script
/load find_ssl_root_certs tries to help you with this.

*** News 04/10/2015 -- Enhancements to /ON SSL_SERVER_CERT

Apparently I've never documented /on SSL_SERVER_CERT.

The /ON SSL_SERVER_CERT hook is thrown every time a successful
connection to an SSL server is made.  For now, the only SSL
connections are to IRC servers, but some day I hope to support DCC as
well.  For now, this refers only to server connections.
        $0 - File Descriptor (need a way to convert to server refnum)
        $1 - The "subject" of the certificate -- the server name
        $2 - The "issuer" of the certificate
        $3 - How many bits are used by the certificate's public key
        $4 - Did the certificate validate?  0 = pass, anything else = fail.
             (For now, this is the result of X509_get_verify_result())
             (This depends on /SET SSL_ROOT_CERTS_LOCATION (above))
        $5 - What SSL type are we using?  (TLSv1 or SSLv3)
        $6 - What is the digest of the SSL Certificate?

The idea is you could use $1 to cache the metadata about a server, and
use $2, $3, $4, $5, and $6 to see if anything changes from one
connection to the next.

Probably more additions will come later.  I'm especially interested in
passing in the complete plain text certificate +url'd up.

*** News 04/10/2015 -- More robust certificate verification for SSL

Based on a paper written by Roca He, who is doing a research project
on improper use of OpenSSL API by open source software, the OpenSSL
code in epic was reviewed and enhanced.

One point of interest was certificate verification -- epic wasn't
doing any of that.  But now it is.  This requires OpenSSL to know
where your root/trusted certificate authorities are, and there are
more notes above about how to handle that.  The results of this
verification are reflected in /on ssl_server_cert above.

*** News 04/10/2015 -- You can now /encode to /EXEC processes

The /EXEC system is now UTF-8 aware, and you can use the /encoding
command to recode between %procs targets now.  yay!

Example:
        /encoding %nonutf8prog iso-8859-15
        /exec -name nonutf8prog myprog
        <output from 'myprog' treated as iso-8859-15, converted to
         utf8 for epic's use>

*** News 04/10/2015 -- New scripts: /load sasl_auth, userlist, tmux_away

Zlonix wrote these scripts.  I need to write blurbs about each.  Until
that time, read the scripts, they're well documented!
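As a sketch of the /ON SSL_SERVER_CERT hook documented above, here is
one way you might cache each server's certificate digest ($6) and
complain when it changes.  This is only an untested illustration --
the "ssl.digest" variable structure and the serial number are made up
for the example, and you'd want to persist the digests to disk for
this to be useful across sessions:

        on #ssl_server_cert 100 * {
                ^local old $ssl.digest.$encode($1)
                if (old != [] && old != $6) {
                        xecho -b WARNING: Certificate for $1 has changed!
                }
                assign ssl.digest.$encode($1) $6
        }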
*** News 07/24/2014 -- New feature $windowctl(REFNUMS_ON_SCREEN <winref>)

The $windowctl(REFNUMS_ON_SCREEN <winref>) will return all of the
refnums on the screen that contains window <winref>.  They are
returned in SCREEN ORDER, ie, from top to bottom.  This was because
someone wanted only the bottom window to have a double status bar, so
he needed to know which one was on bottom and which ones weren't.

If <winref> is a hidden window, then it will return all of the
invisible windows, but in no guaranteed order.

If <winref> is 0, then it will return the screen for the current
window, of course.  You MUST specify a window refnum of some sort,
even if it's just 0.

*** News 04/19/2014 -- Support for tmux for /WINDOW CREATE

If you run EPIC under tmux, you can now use /window create and it will
create new screens running under other tmux screens, just like it does
for gnu screen.  There is also /set tmux_options, but I did not really
test that.

What EPIC does is run this command:
        tmux new-window "<wserv_path> <tmux_options> localhost [port]"

*** News 04/17/2014 -- /SET TRANSLATION now retired

The /SET TRANSLATION feature which has served us well for many years
is now superseded by /ENCODING, and has been retired.

EPIC5-1.1.10
EPIC5-1.1.9

*** News 04/16/2014 -- $fix_width() now fully works, and UTF-8 aware.

The $fix_width() function takes the following arguments:
        cols            $0      Number of display columns
        justify         $1      Justification ("l" left, "c" center,
                                "r" right)
        fillchar        $2      Fill character (can be utf8 cchar)
        text            $3-

This fully supports UTF-8, so the result is a string that will take up
"cols" columns, even if "fillchar" takes multiple columns.

*** News 04/16/2014 -- Many functions are now UTF8 aware

These functions are now fully UTF8 aware.  This means that their unit
of operation is a unicode character, and not a byte.
after before center chop chrq curpos fix_width index indextoword insert left maxlen mid msar pad pass rest reverse right rindex rsubstr sar split strip strlen substr toupper tolower tr wordtoindex These things are also UTF-8 aware: /XTYPE -L $[num]VAR (pad/truncate $var to num places) /FEC Additionally, case sensitivity is UTF8 aware, for all languages, not just English. (at least as far as I tested) *** News 04/11/2014 -- New function, $encodingctl() The $encodingctl() gives you a lower level interface to the encoding system. The behavior of $encodingctl() is ugly and I regret several of the decisions I've made already, but that's the way it goes... As with other $*ctl() functions, if there was not information in the argument list to decide what you wanted to do, it will return the empty string. - $encodingctl(REFNUMS) Return all recode rule refnums. - $encodingctl(MATCH servref sender receiver) Decide which recode rule would be used for a message sent by "sender" to 'receiver' over the server 'servref'. If you are the sender, use $servernick(). Return value: empty_string - an argument is missing, or servref is not an integer anything else - the rule that would be used. - $encodingctl(GET refnum <OPERATION>) Get an attribute of a recode rule: -> Returns empty string if <refnum> is not a valid recode rule -> Returns empty string if no <operation> specified. - $encodingctl(GET refnum TARGET) Return the complete "target" part of the rule; this is whatever you passed to the /encoding command. - $encodingctl(GET refnum ENCODING) Return the complete "encoding" part of the rule; this is whatever you passed to the /encoding command. - $encodingctl(GET refnum SERVER_PART) Return the 'server part' of the target. This is the part before the slash, or the empty string if there is no server part. - $encodingctl(GET refnum TARGET_PART) Return the 'target part' of the target. This is the part after the slash, or the empty string if there is no target part. 
- $encodingctl(GET refnum SERVER_PART_DESC) This returns nothing for now. Maybe someday it will return a server description (ie, host:port:...) - $encodingctl(GET refnum MAGIC) 1 if this rule is a "system rule" and cannot be deleted. 0 if this is a user rule, and can be deleted. - $encodingctl(GET refnum SOURCE) 1 - Set at boot-up by using your locale (CODESET) 2 - Set at boot-up from hardcoded defaults 3 - Set by the user - $encodingctl(SET refnum <OPERATION>) Change an attribute of a recode rule -> Returns empty string if <refnum> is not a valid recode rule. -> Returns empty string if no <operation> specified. - $encodingctl(SET refnum ENCODING new-encoding) Empty String - New-encoding was not specified 0 - Changed: The new encoding was set successfully -1 - Not changed: The new encoding does not exist on your system -2 - Not changed: The new encoding does not convert to UTF-8 -3 - Changed: The new encoding PARTIALLY converts to UTF-8 - $encodingctl(DELETE refnum) Delete a recoding rule: Empty String - <refnum> was not a valid recode rule. 0 - <refnum> is a magic rule and may not be deleted 1 - <refnum> was successfully deleted. - $encodingctl(CHECK encoding) Determine whether or not <encoding> could be used in recoding rules: Empty String - <encoding> was not specified. 0 - Acceptable: The encoding is acceptable for use -1 - Unacceptable: The encoding does not exist on your system -2 - Unacceptable: The encoding does not convert to UTF-8 -3 - Partially Acceptable: The encoding PARTIALLY converts to UTF-8 - $encodingctl(CREATE target encoding) Basically the same thing as /encoding target encoding Returns the refnum of the new rule. EPIC5-1.1.8 *** News 03/11/2014 -- New command, /ENCODING The /ENCODING command is not really new, but it finally is in its final form. With /ENCODING you can specify rules that tell epic what encoding you think other people are using. 
Whenever epic receives a non-utf8 message, it will evaluate the rules to decide what encoding it should treat the non-utf8 message. Whenever you send an outbound message to irc, epic will use the rules to decide if it should encode it in something other than utf-8. This means you can talk to non-utf8 users, and their messages can be made utf8 for you; and your (utf8) messages can be non-utf8 for them. This is full end-to-end recoding support. The rules look like this, and are evaluated with this priority: 6. /ENCODING server/nickname 5. /ENCODING nickname 4. /ENCODING server/channel 3. /ENCODING channel 2. /ENCODING server 1. /ENCODING irc (the "magic" rule) In this case, "server" is anything that can be recognized: * A server refnum * A server "ourname" * A server "itsname" * A server group * Any server altname The client will evaluate each rule, and the "best match" is the first rule that lands highest on that first list. Hopefully it should just be natural. An example: # All "efnet" servers use ISO-8859-1 (level 2) /encoding efnet/ ISO-8859-1 # Except #epic, which uses CP437 (level 3) /encoding #epic CP437 # Except zlonix, who uses KOI8-R (level 5) /encoding zlonix KOI8-R So if you get something non-utf8 message over an "efnet" server, it will be assumed to be ISO-8859-1. Unless that message was sent to #epic -- then it is assumed to be CP437. Unless that message was sent by zlonix, then it is KOI8-R. In this way, you can set defaults for channels and overrule it by individual person. SYNTAX: If you do /ENCODING <stuff> If <stuff> is a channel, then <stuff> is treated as a channel. If <stuff> is a number, then <stuff> is treated as a server refnum. If <stuff> contains a slash, then anything before the slash is the server part, and anything after the slash is the channel or nickname part. 
*You can use a trailing slash to make it unambiguous you mean a
server, or a leading slash to make it unambiguous you mean a
channel/nickname.*

If <stuff> contains a dot, then <stuff> is treated as a server.
Otherwise, anything else is treated as a nickname.

*** News 03/06/2014 -- $cparse(%X) turns into a ^X

You can use %X in $cparse() to inject a ^X which might make it easier
to handle 256 color support.  All the rules below still apply.

        /echo $cparse(%kone %rtwo %X80buckle %XFFmy %X32shoe)

*** News 03/05/2014 -- 256 color support (^X works like ^C)

The ^X attribute allows you to set 256 colors (if your emulator
supports that).  The ^X attribute takes two hex digits to indicate a
color between 00 and FF (0 to 255).

        ^X-1                    Turn off color
        ^X00                    Turn on fg color 0
        ....                    ....
        ^XFF                    Turn on fg color 255
        ^X<number>,<number>     Turn on <fg>,<bg> colors

The <fg> number can be omitted.

The ^X attribute ALWAYS takes stuff after it.  You cannot use a naked
^X or you risk forward incompatibility.  I *will* be adding more stuff
to ^X in the future so don't develop bad habits.

Caveat: 256 color support isn't "standard" ansi so the client sends
the hardcoded sequence ^[[38;5;<number>m to your terminal.  If your
terminal does not honor this way of doing 256 colors, then there's not
much you can do about it...

*** News 03/05/2014 -- Italics support

The highlight character ^P now toggles the "Italic" setting of your
terminal emulator.  Mine doesn't support this so I couldn't test it
very well.  Please report any bogons if you use it.

*** News 03/02/2014 -- Clarified behavior for /set lastlog 0

It was pointed out that /set lastlog 0 did not do a reasonable thing
with the unified scrollback buffer, so the behavior has been refined a
bit.  Here is how /set lastlog <X> now works:

1. For each window, set /window lastlog <X>.
2. For each window, rebuild each window's scrollback
   -- Which may throw away stuff!
   -- If you do /set lastlog 0, it throws away all scrollback and does
      an implicit /window clear!
3. For each window, if <X> is less than twice the window's size, /window lastlog <twice its size> (24 lines -> /window lastlog 48) 4. The final value of <X> will be twice the size of the biggest window. *** News 02/14/2014 -- New flags, /LASTLOG -THIS_SERVER and -GLOBAL The /LASTLOG -THIS_SERVER flag will show all lastlog entries from any window belonging to the server server as this window. IE, it is a /lastlog that catch all of this server. The /LASTLOG -GLOBAL flag will show all lastlog entries from any window whatsoever. *** News 02/11/2014 -- Auto-detect incorrect encodings lead to warnings If you have /ENCODING CONSOLE set to non-utf8, and then you type stuff that looks like UTF8, the client will tell you and suggest you switch. This will end the problem with utf8 users seeing multiple garbage characters on the input line. If you have /ENCODING CONSOLE set to utf8 and you type something that is not utf8, the client will tell you and suggest you switch to something else. Unfortunately it's not easy to know what you are using, so it suggests ISO-8859-1. If you are correctly setting LC_ALL (see below) then the above should never happen for you. *** News 02/11/2014 -- Now honoring LC_ALL (locale charset settings) If you set your character set via locale environment variables, EPIC will now use your locale as the default character set for /ENCODING console. If you do not set your locale variables, then epic will continue to default to ISO-8859-1. *** News 02/11/2014 -- New /ENCODING target "scripts" Whenever you /load a script, epic needs to convert it into utf8. The normal way a script can declare itself is via /load -encoding (See the note from 11/17/2012) if (word(2 $loadinfo()) != [pf]) { load -pf -encoding CP437 $word(1 $loadinfo()); return; }; If a script is not well-formed utf8 and it does not declare its own encoding, then it will be assumed to be whatever the value of /ENCODING scripts is. The hardcoded default is "CP437". 
Naturally, if you /load a script that is not utf8 and is not CP437, it
may not translate correctly.  But it seems most scripts use CP437, and
we'll get everybody to declare/utf8-ify their scripts.

*** News 02/10/2014 -- EPIC users now utf8 transparent to irc.

As of right now, whether you are using utf8 or not, anything you send
from epic will be sent to irc as UTF8.  Anything UTF8 that anybody
sends you will be displayed properly on your screen, even if you are
not using UTF8.

*** News 02/10/2014 -- New command /ENCODING -- declare target encodings

The new command /ENCODING is used to declare what string encoding a
target is using.  At some point this will spawn into an
all-encompassing feature, but for now, it's just used to declare the
encoding of your console.

        /ENCODING console ISO-8859-1
or
        /ENCODING console UTF-8         (default)

If EPIC detects that your input is illegal for the encoding you are
using, it will ask you to change it.  If the stuff you type is not
what you think you're typing, again, you might be using the wrong
encoding.

*** News 02/09/2014 -- Unicode support for $chr(), new func $unicode()

The $chr() function will now accept unicode descriptors
        $chr(U+0415 U+0420 U+20)

The $unicode() function converts text to unicode descriptors
        $unicode(У)     returns "U+0423"

*** News 02/08/2014 -- New server flag, "encoding" -- WITHDRAWN

Server descriptions now have an extra field "encoding" which is used
when you receive a non-utf8 string from the server.  When you receive
a non-utf8 string from the server, epic will assume it is in this
encoding and use iconv() to convert it to utf8.  The default is
"ISO-8859-1" for no particular reason.  This will be supplanted by a
/recode command in the future!

        *** This feature has been removed.  Do not use! ***

EPIC5-1.1.7

*** News 01/16/2014 -- New scripts (should document these!)
from Zlonix xmsglog Encrypted logfiles sasl_auth SASL support (for some networks) idlealert Monitor friends' idle times (with lots of WHOISs) *** News 01/02/2014 -- New status expando, %{4}S, full "itsname" Just to run this down, here are the %S expandos %S Altname 0 (default: Shortened "ourname") (Only shown when you're connected to multiple servers) %{1}S Altname 0 (default: Shortened "ourname") %{2}S Full "ourname" %{3}S Server Group %{4}S Full Server "itsname" *** News 01/02/2014 -- Add /input -- so you can stop arg processing If you wanted to be able to prompt starting with a hyphen, well, you couldn't do that before. But now -- is honored and everything else is taken as the input prompt. *** News 09/12/2013 -- New built in function $status_oneoff() This is a completely experimental function right now, which is helping me decouple the status bar generation from the windows. This function allows you to create your own status bar string, if you provide it a window and a status_format. $status_oneoff(winref ...status goes here...) As a simple example, $status_oneoff(1 %S) would return what %S would be on window 1. EPIC5-1.1.6 *** News 07/31/2013 -- New action $LOGCTL(LAST_CREATED) The $logctl(LAST_CREATED) returns the log refnum of the most recent log that was created with /LOG NEW or $logctl(NEW). *** News 07/31/2013 -- New action $LOGCTL(NEW) The $logctl(NEW) performs a /LOG NEW and returns the refnum of the newly created log. *** News 07/31/2013 -- New /lastlog flag. /lastlog -regignore [regular-expression] avoids printing lines that would otherwise be printed without the regex. Can be used in combination with -regex and -ignore and the other flags. *** News 07/28/2013 -- $windowctl(REFNUMS_BY_PRIORITY) returns by current-ness The $windowctl(REFNUMS_BY_PRIORITY) operation returns all windows in the descending order that they have been the "current window". 
This is based on your input screens -- if you have multiple windows connected to multiple servers, this list doesn't care about that. If you need that, iterate over the list and filter out the ones for your server: fe ($windowctl(REFNUMS_BY_PRIORITY)) x { if (windowctl(GET $x SERVER) == serverctl(FROM_SERVER)) { push results $x } }; xecho -b Windows for this server in order of current-ness: $results *** News 07/28/2013 -- FIXED-SKIPPED windows don't get channels on /window kill FIXED-SKIPPED windows (ie, /window fixed on skip on) are used to create status windows (see below). They will no longer be given channels from another window that is killed unless it is the last window connected to the server. *** News 01/09/2013 -- New /QUEUE flag, -RUNONE I can't believe I didn't think of this before! The /QUEUE -RUNONE flag will run the first command in a queue, leaving the rest of the queue alone. This will work great with timer to create a FIFO queue that you can stagger commands through. For example, a command that slows down output to the server /TIMER -REPEAT -1 2 {QUEUE -RUNONE serverqueue} fe (#one #two #three #four) x { QUEUE serverqueue {join $x} } EPIC5-1.1.5 *** News 11/28/2012 -- Lots of code quality improvements Ancient graciously set up epic to run under clang and its static analyzer, and we found lots of suggestions of things to fix. *** News 11/17/2012 -- New flag, /load -encoding You may now specify what a file's encoding is, and it will be converted automatically to utf8 (there will be a /set for this soon enough). You can use this in your magic bootstrap: if (word(2 $loadinfo()) != [pf]) { load -pf -encoding iso-8859-1 $word(1 $loadinfo()); return; }; This allows utf8 terminal users to load ascii art in 8859, and it will Just Work! 
EPIC5-1.1.4 *** News 8/5/2012 -- Anti-foot-shooting for $pad() and $repeat() Some had mentioned that you shouldn't be permitted to shoot yourself in the foot by asking for absurdly large output strings in $pad() and $repeat(). Normally I wouldn't agree to that, but I guess I'm getting soft in my old age.... *** News 8/5/2012 -- /xdebug no_color - turn off all color unconditionally The experimental feature /xdebug no_color turns off color support at the lowest level of the client. If you refresh your screen, any previously displayed color will be suppressed. Turning off this feature allows any color to be shown again. *** News 8/5/2012 -- /LASTLOG -CONTEXT actually works correctly The /LASTLOG -CONTEXT feature shows you some lines before and after any lastlog match. This has previously been broken and as of the time I write this, works properly, both normal and -REVERSE. *** News 8/5/2012 -- /ON SET only thrown once when you type the exact name Previously, if you typed /SET <X> where <X> is the exact name of any builtin SET, then /on set would be thrown twice. That was never the intention, and this has been "fixed". *** News 6/26/2012 -- Merge two windows together -- /WINDOW MERGE otherwin /WINDOW MERGE is like /WINDOW KILL except it moves everything from the current window to another window first. This allows you to "merge" two windows into one window. * Window output * Channels * Logfiles * Queries * Timers If the current window can't be KILLed then everything will be moved away anyways, but you'll get an error message telling you that it can't be killed. This isn't a bug. *** News 6/26/2012 -- Expiring output -- /XECHO -E The /XECHO -E flag lets you create "expiring output" which will disappear after however many number of seconds. /XECHO -E 10 This goes away in 10 seconds! This might be useful for status windows that you want to show new messages briefly. 
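For example, you might fold a temporary window back into your main
window with /WINDOW MERGE like this (the window numbers here are only
illustrative):

        /window refnum 3        <- go to the window you want to go away
        /window merge 1         <- move its channels, queries, logs,
                                   etc into window 1 and kill it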
*** News 6/24/2012 -- New $hookctl(CURRENT_IMPLIED_HOOK) When an implied /on hook is being expanded, $hookctl(CURRENT_IMPLIED_HOOK) is set to the name of that hook. I recommend wrapping this in an alias: alias ih {return $hookctl(CURRENT_IMPLIED_HOOK) Then you can use $ih() to get the hook. The use case is if you are using the same function for different implied hooks -- there was no way to tell the function which one it was working on. *** News 6/24/2012 -- New $dccctl(SET refnum FULL_LINE_BUFFER [0|1]) You can now set a dcc raw to "fully line buffered" mode. When this is turned on, /on dcc_raw will not trigger until a complete line is available. You can turn this off by setting it to 0. The corresponding $dccctl(GET refnum FULL_LINE_BUFFER) also works. *** News 6/24/2012 -- New $dccctl(SET refnum PACKET_SIZE <bytes>) You can now set a dcc raw to "fully packet buffered" mode. When this is turned on, /on dcc_raw will not trigger until <bytes> bytes are available. You can turn this off by setting it to 0. The corresponding $dccctl(GET refnum PACKET_SIZE) also works. *** News 06/24/2012 -- New option, /XEVAL -NOLOG The -NOLOG option to /XEVAL suppresses logging for the command. The person who requested wanted to do something like this: alias ll { xeval -nolog {lastlog $*} } to be able to avoid logging /lastlog output. *** News 06/24/2012 -- New option, /LASTLOG -IGNORE <pattern> The /LASTLOG -IGNORE <pattern> option allows you to display all of your lastlog EXCEPT whatever matches <pattern>. In this way, it acts as a reverse to the normal way. EPIC5-1.1.3 *** News 03/24/2012 -- New status bar expando, %G (Network) The %G status bar expando shows the 005 NETWORK value for your server *** News 06/09/2010 -- New semantics for /BIND TRANSPOSE_CHARACTERS The TRANSPOSE_CHARACTERS keybinding now has the following semantics: 1. When the cursor is on the first character, swap the first and second characters. 2. 
When the cursor is on a character (but not the first character), swap the character under the cursor with the character before the cursor. 3. When the cursor is at the end of the line (and not on a character), swap the last two characters on the input line. In all three cases, the cursor stays in whatever column it is in. *** News 06/05/2010 -- New script: rejoin Stores channel/key on disconnect/part/kick. I hope it's useful! Allows you to rejoin all channels lost in a disconnect, by doing: /rejoin -all OR /rejoin -server See script for details. EPIC5-1.1.2 *** News 04/15/2010 -- New flags to $sar(), $msar(), case sensitivity In EPIC4, $sar() and $msar() were case sensitive. You could turn this off by using the 'i' flag. In EPIC5, $sar() and $msar() are case *INSENSITIVE* There has been no way to turn this off! You can now turn this off with the 'c' flag. Example: $sar(g/One/Two/one One one One) -> "Two Two Two Two" $sar(cg/One/Two/one One one One/) -> "one Two one Two" *** News 04/15/2010 -- New /on, /on unknown_set As a favor to howl, I've added /on unknown_set, which will be hooked whenever /set is called on a set that doesn't exist. $0 - The set that doesn't exist $1- - The value the user wanted to set. If you catch this, then the /on set that triggers for "unknown-set" will not be thrown. If you don't know what I'm talking about, then you won't miss it. *** News 04/01/2010 -- Can now backslash colons in server passwords Previously it was impossible to include colons in server passwords because colons are delimiters in server descriptions. Now you can backslash the colon and it will do the right thing. Don't forget to backslash your backslashes! 
        Real password           What you should use:
        -------------------     ---------------------
        onetwothree             onetwothree
        one:twothree            one\:twothree
        one\two:three           one\\two\:three

*** News 03/25/2010 -- Can now modify servers by refnum (Fix to server descs)

The /server command was broken in the epic5-1.1.1 release, and got
some extra work for the next release.  As part of this work, you can
now change fields on a server refnum, like so:

        /server 1:type=irc-ssl

Previously, referring to a server refnum didn't support change fields.

EPIC5-1.1.1

*** News 3/19/2010 -- EPIC5-1.1.1 was released here

*** News 3/19/2010 -- The last value of /WINDOW SERVER is saved per window

The last argument passed to /WINDOW SERVER is saved on a per-window
basis, via $windowctl(GET x SERVER_STRING).  I added this because howl
asked for it, although I don't know what he intended it for.

*** News 3/19/2010 -- Modifying server descriptions on the fly

You may now modify server descriptions on the fly in the following
situations:

        /SERVER -ADD <desc>
        /SERVER -UPDATE <desc>
        /SERVER <desc>
        $serverctl(READ_FILE filename)
        $serverctl(UPDATE refnum stuff)
        /WINDOW SERVER <desc>

For example, let's say you created a server irc.foo.com, but you
forgot that it used SSL.  Before it was a pain to "fix" that, but now
you can fix it like this:

        /SERVER -ADD irc.foo.com:8855
                (oops, it uses ssl, i forgot!)
        /SERVER irc.foo.com:type=irc-ssl
                (aha! okie.  now it will connect using ssl)

*** News 3/19/2010 -- You can log everything with /LOG SERVER ALL

If you create a log like:
        /LOG NEW FILE myirc.log SERVER ALL ON
that will log everything.  Previously, you had to add each server
individually by refnum, but now you can just use the magic string
"ALL" to refer to all servers.

*** News 3/19/2010 -- Rewrite /log support

The /LOG command should work a lot better now.

*** News 3/19/2010 -- New target to msg a window, @E<winref>

You can /echo to a window by /msg'ing its winref, as in:
        /msg @E3 This message will display in window 3.
*** News 3/19/2010 -- "global" now loads ambig and newnick scripts *** News 3/19/2010 -- New /XEVAL -N flag, which reverses the ^ flag Normally if you run an alias or an on with ^, it will treat every command as though it were prefixed with ^. This suppresses the output of many commands, which you may not want to do. You can negate the effect of ^ with /XEVAL -N. For example: /on ^hook "chdir %" { xeval -n {cd $1-} } Normally the /CD command will output an error if it could not change the directory, but since ^ suppresses that error, in this example, you'd never know that the cd failed. So you can wrap commands whose error messages you want to see in /XEVAL -N. Don't forget! Always wrap your commands in {} to avoid unintended back doors. *** News 3/19/2010 -- Remember, $dccctl(GET refnum WRITABLE) If you combine $dccctl(FD_TO_REFNUM fd) with $dccctl(GET refnum WRITABLE) you can detect when a nonblocking connect has succeeded! *** News 3/19/2010 -- New $dccctl(FD_TO_REFNUM <fd>) The $connect() function returns a file descriptor, which you can pass to $dccctl(FD_TO_REFNUM fd) to get the $dccctl() refnum, which you can use to do other stuff. *** News 3/19/2010 -- New flag to /EXEC, -CLOSEOUT If you do /EXEC -CLOSEOUT it will close the stdin to the process, (ie, sends an EOF) which some processes need to decide that they're supposed to do something. *** News 3/19/2010 -- New scripts I should have documented These scripts have been added, but I never got around to documenting them. That's a bummer. help.irc history.rb locale tabkey.sjh logman cycle set_color ban speak.irc *** News 10/29/2009 -- Valgrind assistance If you want to run epic under Valgrind, you may want to pass the --with-valgrind flag to configure, which will compile in some additional assistance to help valgrind find memory leaks. This support was graciously provided by caf, as were patches for the bugs he found using valgrind. 
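The $connect() / $dccctl() combination described above might be used
something like this to watch for a nonblocking connect to finish.
This is an untested sketch -- the hostname, the timer refnum, and the
one second polling interval are all just for illustration:

        @ fd = connect(irc.example.com 6667)
        @ ref = dccctl(FD_TO_REFNUM $fd)
        timer -repeat -1 -refnum connwatch 1 {
                if (dccctl(GET $ref WRITABLE)) {
                        xecho -b fd $fd is connected and ready to use
                        timer -delete connwatch
                }
        }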
*** News 07/06/2009 -- The Fish64 xform actually works now (see below)

The first implementation of FISH64 was not actually, to be technical,
compatible with FiSH.  It is a strange thing and it took me a while to
come up with an implementation of it that doesn't depend on how bits are
stored in integers.  I have actually tested it against the real life
FiSH implementation and it's correct now.

*** News 06/17/2009 -- New $xform, "FISH64"

The FISH64 transform performs base64 encoding that is compatible with
FiSH.  Fish64 uses the same algorithm as base64, but it uses a different
character set.

*** News 06/08/2009 -- New $xform(iconv) functionality

You can now refer to a pre-defined iconv encoding setup, instead of
specifying encoding upon every use of $xform(iconv).  Whereas you in the
old days would do:

    echo $xform(iconv utf-8/ascii $stuff)

which would take a lot of cpu time, as the client would have to do a lot
of stuff to open, use, and then close up, the iconv stuff, you can now
do as follows:

    @ id = iconvctl(ADD utf-8/ascii);
    echo $xform(iconv +$id $stuff);

You can also do:

    echo $xform(iconv -$id $stuff)

to reverse.  Use /xdebug +unicode to debug iconv stuff!

*** News 06/08/2009 -- New control function: $ICONVCTL()

This function works as follows:

    @ id = iconvctl(ADD fromcode/tocode[//option])
        This sets $id to a permanent identifier for doing encoding from
        *fromcode* to *tocode*.  (This may speed up encoding a bit.)
        If the chosen encoding isn't accepted by iconv(), $iconvctl()
        returns empty.

    @ encoding = iconvctl(GET $id)
        This will return whatever you set the encoding $id to.

    @ iconvctl(REMOVE $id)
        This removes the $id from the table of encodings.

    @ iconvctl(LIST)
        This lists encodings.

    @ iconvctl(SIZE)
        And this returns the size of the iconv table.

Do notice that identifiers are re-used after removal.

*** News 06/08/2009 -- Add USERINFO to /on hooks

You can now add some information to "executing hooks".
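The identifier re-use noted above ("identifiers are re-used after
removal") can be modeled with a small slot table.  This is a toy sketch,
not epic's actual implementation:

```python
class EncodingTable:
    """Toy model of $iconvctl(): ADD returns a small integer id,
    REMOVE frees it, and freed ids are re-used by later ADDs."""
    def __init__(self):
        self.slots = []

    def add(self, desc):
        for i, slot in enumerate(self.slots):
            if slot is None:            # re-use a freed id
                self.slots[i] = desc
                return i
        self.slots.append(desc)
        return len(self.slots) - 1

    def get(self, ident):
        return self.slots[ident]

    def remove(self, ident):
        self.slots[ident] = None

    def size(self):
        return len(self.slots)

table = EncodingTable()
a = table.add("utf-8/ascii")
b = table.add("utf-8/iso-8859-1")
table.remove(a)
c = table.add("koi8-r/utf-8")   # re-uses the id freed by remove()
print(a, b, c)                  # 0 1 0
```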
    @ hookctl(USERINFO -1 stuff)

This will set the USERINFO of the current executing hook to "stuff".

To get the userinfo of the current executing hook:

    echo $hookctl(USERINFO -1)

This can be used in conjunction with changing the $* of a hook, to, for
instance, add encoding information to it.

*** News 06/06/2009 -- Can change $* in an /on hook now

You can change the value of $* in an /on hook that will affect /on's
with higher serial numbers.

    @hookctl(ARGS <level> <new value of $*>)

This is expected to be useful for things like iconv translation.

Please note carefully that the pattern matching of /on's against $* is
done *AFTER EACH ON IS RAN* so if you change $* you might affect which
higher serial numbered /ons will run!

Usually <level> is -1 and usually the new value of $* would be based on
the current value of $*.  The change to $* takes place immediately.

Example one:

    on #^hook -100 * {@hookctl(ARGS -1 >>>$0 $1<<< $2-)}
    on ^hook * {echo $*}
    hook This is a test

would output

    >>>This is<<< a test

because the /on hook with serial number -100 changed the old value of $*
"This is a test" to ">>>$0 $1<<< $2-" which after expansion is:
">>>This is<<< a test" which is the value of $* in the /on hook with
serial number 0.

Example two:

    on #^hook -100 * {@hookctl(ARGS -1 $reverse($*))}
    on ^hook "ape" {echo APE! APE!}
    hook epa

would output "APE! APE!" because the first hook changes $* from its
original value "epa" to $reverse(epa) or "ape" which matches the second
hook.

*** News 04/10/2009 -- /WINDOW CHANNEL now outputs all channels in the window

Previously, /window channel only output the window's current channel.
It still does that, but now it will also output the full and complete
channel list so you see the other channels in that window.
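The hook-chain behavior in the two examples above can be modeled like
so.  This is a toy sketch: real /on matching is more elaborate than
fnmatch, but the key point carries over -- each hook may rewrite the
argument list, and matching for later hooks happens after that rewrite:

```python
from fnmatch import fnmatch

def run_hooks(hooks, args):
    """Toy model of serial-numbered /on hooks: hooks run in serial
    order, each may rewrite the argument string, and pattern
    matching happens AFTER each earlier hook has run."""
    output = []
    for serial, pattern, action in sorted(hooks):
        if fnmatch(args, pattern):
            args = action(args, output)
    return output

# Example two from above: a serial -100 hook reverses $*,
# which then makes the serial 0 hook's "ape" pattern match.
hooks = [
    (-100, "*", lambda args, out: args[::-1]),
    (0, "ape", lambda args, out: (out.append("APE! APE!"), args)[1]),
]
print(run_hooks(hooks, "epa"))  # ['APE! APE!']
```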
*** News 04/10/2009 -- /IGNORE user@host.com now works again

Due to a really lame bug, /ignore user@host.com did not work properly
because the client thought it was a server name and did not fix it up to
*!user@host.com which prevented it from matching anything which
prevented it from working.  Sorry about that.

*** News 04/10/2009 -- Add permitted values for server desc "proto" field

Previously, despite all of the documentation to the contrary, the only
permitted values were "0", "4", and "6" for "either", "ipv4 only" and
"ipv6 only" respectively.  This has been increased so you can specify
any of these values:

    For "either ipv4 or ipv6, I don't care" (the default)
        0    any    ip    tcp
    For "ipv4 only, never use ipv6 for this server"
        4    tcp4   ipv4   v4    ip4
    For "ipv6 only, never use ipv4 for this server"
        6    tcp6   ipv6   v6    ip6

Example to connect to an ipv6 server:

    /server irc.ipv6.foo.com:6665:proto=tcp6

Example to connect to a server only using ipv4:

    /server irc.foo.com:proto=ipv4

EPIC5-1.0
*** News 12/25/2008 -- EPIC5-1.0 was released here.

EPIC5-0.9.1
*** News 12/12/2008 -- Configure will check for perl/ruby/tcl usability

Up until now, configure would include perl/ruby/tcl as long as it
existed and told us where its stuff was at.  That's bad if you don't
install the dev packages, because linking against the language library
won't work if it's not there.

Configure will now try a test-compile to use the language embedding to
see if it works and supports the API we expect.  Failures will cause
that language to be turned off.  Be sure to re-run configure!

*** News 12/10/2008 -- New function, $chanlimit(#chan #chan #chan)

The $chanlimit() command works just like $chanmode(), but it returns the
+l argument -- the channel membership limit.  This is by special request
of fusion.
*** News 12/10/2008 -- Minor change to /SET NEW_SERVER_LASTLOG_LEVEL Previously, each time you connected to a server (received a 001 reply) the client would unconditionally assign all of the levels in /set new_server_lastlog_level to the server's current window. This is rather annoying if you got disconnected from the server because the default value is ALL,-DCC and that would clobber all of your window levels. Having this brought to my attention, this has been changed to be more reasonable. These will now reclaim any unused levels, rather than unconditionally stealing them from other windows. Thus, what /set new_server_lastlog_level ALL,-DCC means is, "each time I connect to a server, please put any levels that aren't being used by any window connected to this server in the current window". I apologize for the previous behavior which was stupid and shouldn't have survived as long as it did. *** News 12/10/2008 -- Minor change to /SET OLD_SERVER_LASTLOG_LEVEL The same change applies when you /window server a window to a server that is already connected -- it has its window level changed to /set old_server_lastlog_level, but it will now NOT steal the level from any other window that already claims it. EPIC5-0.9.0 (EPIC5-0.3.10) *** News 11/24/2008 -- New /window operation, /WINDOW SCROLL_LINES The /WINDOW SCROLL_LINES operation overrules /SET SCROLL_LINES for one particular window. The value may be -1 (which is the default, and means use /SET SCROLL_LINES) or a positive number. *** News 11/01/2008 -- New /SET, /SET DCC_CONNECT_TIMEOUT This set will control how long a nonblocking connect for a /dcc get or /dcc chat can go before the client decides to abandon it. The value is in seconds, and 0 turns this off (connects will not time out) The default value is 30 seconds. This feature uses system timers, and you shouldn't change the value of this /set while a connect is pending or you'll confuse things and your connects probably won't time out properly. 
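The level-reclaiming rule for /set new_server_lastlog_level described
above is plain set arithmetic.  A sketch, under the assumption that
lastlog levels are represented as sets of level names:

```python
def reclaim_levels(wanted_levels, other_windows_levels):
    """Model of the new behavior: on connect, the current window
    only picks up those levels that no other window on this server
    already claims, instead of stealing them unconditionally."""
    claimed = set()
    for levels in other_windows_levels:
        claimed |= levels
    return set(wanted_levels) - claimed

wanted = {"MSGS", "NOTICES", "PUBLIC", "WALLOPS"}
others = [{"MSGS"}, {"PUBLIC", "NOTICES"}]
print(sorted(reclaim_levels(wanted, others)))  # ['WALLOPS']
```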
*** News 09/24/2008 -- New script, 'topicbar'

The idea with this script is to use topline 1 of any window with a
channel to display the topic of the given channel.

*** News 08/25/2008 -- /SET INPUT_INDICATOR_RIGHT now functional

It was documented below, but the code for it wasn't finished until
today.  So now it will start appearing on your input line.

*** News 07/01/2008 -- Add servers from file -- $serverctl(READ_FILE filename)

You may now insert servers into the server list from a file using
$serverctl(READ_FILE filename) where "filename" is the name of the
servers description file.  Note that the servers are appended to the end
of the servers list!  The filename must be in the same format as the
server description file that is loaded at startup.

EPIC5-0.3.9
*** News 06/25/2008 -- configure --without-wserv, job control

Configure now checks your system to see if it has posix job control
(which means you have setsid() and tcsetpgrp()) and if it does not, it
turns off job-control features:

    /BOTMODE
    /EXEC
    $killpid()
    $exec()
    $open() of compressed files
    The -b command line option
    External crypto program support
    Asynchronous (nonblocking) DNS lookups
    Wserv support

You can also use the --without-wserv flag to configure to turn off wserv
support for a system that otherwise supports job control.  There is
nothing gained by omitting wserv support, only things removed.  Normally
this flag wouldn't be added but I did it as a favor to someone.

*** News 06/25/2008 -- You can now /ignore a server

Due to some networks (undernet) having annoying servers that spam you
10-20 times a day with annoying messages you don't want to receive, it's
now possible to /ignore a server:

    /IGNORE irc.server.com ALL

*** News 05/09/2008 -- Hitting ^C twice interrupts infinite loop

Historically, if you hit ^C twice in a row and the client is stuck, it
will send itself a SIGALRM.
In the past, this was because the client used blocking connect()s and
stuff, and guarded them with alarm(3)s, so sending SIGALRM would cause
an early interruption to a blocking connect.  Anyways, since we don't
have any blocking stuff any more, this is no longer useful for its
intended purpose.

You've been able to send the client a SIGUSR2 to raise a "system
exception" which attempts to gracefully end an infinite loop in your
script.  Hitting ^C twice in a row on a stuck client will send a SIGUSR2
which will cause an infinite loop in your script to terminate.

*** News 05/09/2008 -- New /SETs: INPUT_INDICATOR_LEFT, INPUT_INDICATOR_RIGHT

This was written and contributed by fusion.  Thanks!

The input line has been changed so the input prompt is always visible.
When you reach the right or left side of the display, the input line
will still scroll side-to-side, but the input prompt will always be
visible, not just when you're at the start of the input line.

Because it would otherwise not be obvious whether you are at the
beginning of the input line or not, there have been two new /set's
added:

    /set input_indicator_left +
    /set input_indicator_right +

When there is more stuff on the input line than what is currently
visible, if the extra stuff is off to the left, the first /set is used
to tell you there is more in that direction.  If the extra stuff is off
to the right, the second set is used to tell you there is more in that
direction.

As of the time of this writing, the support for the second /set isn't
ready yet, so there is no visual clue if you are at the end of the input
line or not.  Keep watching for more info about this.

*** News 04/23/2008 -- Added new /on, /ON WINDOW_NOTIFIED

This hook is thrown when there's activity in a hidden window that is
notified.

    $0 - The window refnum
    $1 - The level of the activity.

Be careful with this hook, as output deferred from it may wreak havoc.
EPIC5-0.3.8
*** News 04/10/2008 -- Added new /on, /ON SIGNAL

You can hook signals with /ON SIGNAL

    $0 - The signal that was caught (a number)
    $1 - The number of times this signal has been caught since the
         last time /ON SIGNAL was thrown

Not every signal can be caught, and some signals are dangerous to catch.
For example, no matter what, you can't catch signals 9 (KILL), 13 (PIPE)
or 15 (TERM).  It's safe to catch 30 (USR1) and 31 (USR2), but
everything else is entirely at your own risk.  You should /defer
anything you do within an /on signal to be safe.

*** News 04/10/2008 -- /USERHOST -FLUSH

/USERHOST -FLUSH removes those userhosts which are "pending send" not
those which are "pending receive".

EPIC5-0.3.7
EPIC5-0.3.6
*** News 03/10/2008 -- /NOTIFY list now applicable to local server.

The notify list can now be updated on a per server basis.  This is done
by placing the ":" nick before the list of local changes in the /NOTIFY
command.  Everything on the /NOTIFY line UP TO the ":" is still
applicable to every server.

Examples:

    /NOTIFY : - [nicks]    # Clear local list and replace with [nicks].
    /NOTIFY - : [nicks]    # Clear all notify lists and add [nicks] locally.

*** News 01/28/2008 -- /ON WINDOW_COMMAND has command as $2 (kitambi)

The command being executed is $2 in /on window_command.  If you do evil
things with this, you may crash the client.  You Have Been Warned.

*** News 01/23/2008 -- New built in function $check_code(...)

--- Warning ---
This function is not really as useful as it looks because you would be
unable to submit an invalid block statement or expression to the
function without getting a warning from the syntax parser in the first
place.  I don't know how I will "fix" this, but maybe you might find the
function interesting for now.
The $check_code() function takes either a *block statement* (surrounded
by curly braces {}) or an *expression* (surrounded by parenthesis ())
and tells you whether the item is well-formed and does not have any
unmatched braces or parentheses.

It does *NOT* tell you if the code or expression inside the item is
valid or even makes sense, it only tells you if it contains code that
you could pass to /eval or to /@.

Return values:
     0 - The expression or block statement looks ok
    -1 - This is not an expression or block statement
    -2 - The expression/block statement is invalid, probably because
         there is an unmatched brace or parenthesis
    -3 - There is trailing garbage after the closing brace or
         parenthesis.

More return values will probably be added in the future as more errors
become detectable.

*** News 01/22/2008 -- /SERVER listing now shows your vhost

The listing of your servers from /SERVER now shows you the vhost that
you're using (if any).  I forget who asked for this.

*** News 01/22/2008 -- Oper passwords no longer revealed with ^L

Wjr pointed out that if you did /oper and typed a password that was
hidden and hit ^L it would reveal the password.  This has now been
fixed.

*** News 01/22/2008 -- $ignorectl(SUSPEND) and $ignorectl(UNSUSPEND)

Larne asked for a way to globally turn off /ignores for some period of
time.  So you can turn off all ignores globally with $ignorectl(SUSPEND)
and turn ignores back on again later with $ignorectl(UNSUSPEND).

A word of caution -- this is a counting queue, so each SUSPEND must be
matched with an UNSUSPEND.  If you do two SUSPENDs and one UNSUSPEND, it
will still be SUSPENDed.  Use $ignorectl(RESET_SUSPEND) if you get the
client totally confused.

*** News 01/22/2008 -- You can /load executable files, with caution

Crimedog said that all of his scripts on windows were executable (+x)
and epic wouldn't let him /load them, and so I've removed the
restriction that you can't /load executable files.
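Ignoring quoting, the $check_code() return values above can be modeled
like this.  A simplified sketch, not the client's actual parser (the
real function has to cope with quoting and nesting of both delimiter
kinds):

```python
def check_code(text):
    """Sketch of $check_code() per the return values above:
     0 ok, -1 not a block/expression, -2 unmatched delimiter,
    -3 trailing garbage after the closing delimiter."""
    text = text.strip()
    if not text or text[0] not in "{(":
        return -1
    open_ch = text[0]
    close_ch = "}" if open_ch == "{" else ")"
    depth = 0
    for i, ch in enumerate(text):
        if ch == open_ch:
            depth += 1
        elif ch == close_ch:
            depth -= 1
            if depth == 0:
                # matched the opener -- anything left over is garbage
                return 0 if not text[i + 1:].strip() else -3
    return -2          # ran out of text with the opener still open

print(check_code("{echo {hi}}"))   # 0
print(check_code("hello"))         # -1
print(check_code("{oops"))         # -2
print(check_code("(1 + 2) junk"))  # -3
```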
I've replaced it with a warning that the file is executable and that
/loading binary files yields undesirable results.

*** News 01/22/2008 -- /xecho -w -1 outputs to current window

As a special favor to BlackJack, /xecho -w -1 will output to the current
window, because this is what epic4 used to do, particularly when you did
/xecho -w $winchan(#foo) and #foo was not a channel that you were on (so
it returned -1) and it output to the current window.

In any other case but -1, /xecho -w to a window that does not exist will
drop the output.

*** News 01/22/2008 -- New built in function $strptime()

Now you know the $strftime() function converts a $time() value into a
string using a special format.  If you have the output of strftime and
you have the format it was created with, the $strptime() function will
return the original $time() value it was created with.

This is probably useful by people who are parsing logfiles and want to
get a $time() value so they can do time math and see how long ago
something occurred.

For the moment, this only works if you have strptime(3) on your system,
and not everybody does.  Very soon, a compat version of strptime() will
be shipped with epic to ensure minimum functionality.

*** News 01/05/2008 -- You can now use arglists with /input (fusion)

You can now use arglists with input, like so:

    input "Enter command and arguments: " (cmd, args) {
        xecho -b You entered [$cmd] and [$args]!
    }

*** News 01/03/2008 -- $info(o) values for libarchive, iconv support

If the binary supports libarchive, $info(o) will include 'r'.
If the binary supports iconv, $info(o) will include 'v'.

Libarchive support is required to /load from a .zip file
Iconv support is required to be able to do character set translation.
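The strftime/strptime round trip described above works the same way in
Python's time module, which can be handy for testing a log-parsing idea
outside the client (UTC is used here to keep the round trip free of
DST ambiguity):

```python
import calendar
import time

fmt = "%Y-%m-%d %H:%M:%S"
now = int(time.time())

# Like $strftime(): epoch time -> formatted string.
stamp = time.strftime(fmt, time.gmtime(now))

# Like $strptime(): formatted string + format -> original epoch time.
recovered = calendar.timegm(time.strptime(stamp, fmt))

print(recovered == now)  # True
```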
*** News 11/29/2007 -- New function $fix_width()

The $fix_width() function takes the following arguments:

    cols        $0   Number of columns
    justify     $1   Justification (must be "l" for left justify)
    fillchar    $2   Fill character (a dword, so use " " for space)
    text        $3-

This function returns <text> formatted so that it takes up exactly
<cols> number of columns on the display.  It does this by adding
<fillchar> to the string on either the left or the right or both.  If
<text> is already wider than <cols> then it is truncated to <cols>.

<Justify> must be either "l" for left justify, "c" for center, or "r"
for right justify.  Only left justify is supported.  The others are for
future expansion.

This function is intended for creating full width reverse toplines:

    @ :cols = word(0 $geom())
    @ :str = fix_width($cols l " " blah blah blah blah)
    window topline 1 "^V$str"

You will probably want to call $fix_width() in a separate statement from
the /window topline in order to avoid the syntactic confusion with
passing a double quoted word to /window and passing a double quoted word
to $fix_width() (the space).  Trust me.  Don't go there.

In the future, support will be added for right justify and centered.
Please keep watch out in this document for more info.

*** News 11/29/2007 -- Support for ZIP files from libarchive

Support for loading files from .zip files has been added.  This first
round of implementation just adds the raw ability, but it's not totally
ready to be used yet.  You're welcome to start playing with it and
reporting any problems you have.

You can $open() a file for reading or /load it from a zip file:

    /load foo.zip/file
and
    @fd = open(foo.zip/file R)

If you /load a zip file, it will load the file ".ircrc" in the top
level directory.  This might be enhanced or changed in the future:

    /load foo.zip    acts like    /load foo.zip/.ircrc

Some operations cannot be performed on zipped files, such as $fseek()
and $frewind() and so forth.  This might change in the future.
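The pad-or-truncate behavior of $fix_width() can be sketched as follows.
This toy version assumes one display column per character, and like the
real function today it only implements left justification:

```python
def fix_width(cols, justify, fillchar, text):
    """Sketch of $fix_width(): pad <text> with <fillchar> to exactly
    <cols> columns, truncating if it is already wider.  "c" and
    "r" are reserved for future expansion, as in the client."""
    if justify != "l":
        raise ValueError("only left justify is supported")
    if len(text) >= cols:
        return text[:cols]          # too wide: truncate
    return text + fillchar * (cols - len(text))  # pad on the right

print(repr(fix_width(10, "l", " ", "blah")))      # 'blah      '
print(repr(fix_width(6, "l", " ", "blah blah")))  # 'blah b'
```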
All of this is based on 'libarchive' being installed. You will need to re-run configure in order to pick up libarchive support after you do a cvs update. *** News 10/22/2007 -- New flag to /lastlog, /lastlog -window The /lastlog -window flag lets you grep the lastlog from a different window. The output will still go to the *current* window, however! *** News 09/19/2007 -- Some built in functions now 'builtin' aliases. Several functions that have been deprecated by $xform() have been demoted from built in functions to aliases in the 'builtins' script. encode decode b64encode b64decode urlencode urldecode sedcrypt sha256 EPIC5-0.3.5 *** News 09/14/2007 -- New built in function: $splitw(<delim> <string>) This function takes a <string> which has sections delimited by <delim>. The <delim> argument can only be one character. An obvious example of this is $PATH which is a <string> that uses the colon as <delim>. The <delim> is a dword, so you can use the space as a delimiter if you needed to. This function unconditionally converts <string> into a dword list. You will need to xdebug dword to iterate over the return value, or you can use $unsplit(" " $splitw(<delim> <string>)) to collapse it to a uword list (although this is probably pointless) Example: @ directories = splitw(: $PATH) might return /bin /sbin /usr/bin /usr/sbin /usr/local/bin Example: @ foo = splitw(: one:two:three a berry:four:five) returns one two "three a berry" four five *** News 09/13/2007 -- New /SET, /SET STATUS_HOLDMODE This is the value that %{1}H expands to. The default is " (Hold)". If you don't like the "(Hold)" in your status bar when your window is in hold mode but not holding anything, unset this variable entirely: /set -status_holdmode and it will disappear. Or remove %{1}H from your status format. *** News 09/13/2007 -- New status expando, %{1}H, hold mode indicator. The %H status expando expands when your window is in hold mode *and* there is something being held. 
But if your window is in hold mode, but nothing is held, you can't tell
just by looking.  So the %{1}H status expando will expand whenever the
window is in hold mode *except* when %H will expand.  This is so you can
put %{1}H%H in your status format and one or the other (but not both)
will expand at all times hold mode is on.

This expando has been added to the client's default status format.

The value of %{1}H is controlled by /set status_holdmode, and the
default value of that is " (Hold)"

*** News 09/13/2007 -- Remember, *0 is an rvalue, but *var is an lvalue

The deref operator ("*") converts a token into an rvalue and then uses
that as an lvalue.

Example:

    assign foo bar
    @ *foo = [testing]
    echo $bar

in the above (*foo) is the same as "bar".

But numbers are different.  Derefing a number yields an rvalue:

    alias oofda { @ foo = *0 }
    oofda one

To convert an argument into an lvalue, deref it a second time:

    alias booya { @ *(*0) = 'testing' }
    booya varname
    echo $varname

Does that make it clear?

*** News 09/02/2007 -- New function, $is8bit(string)

This function finds the first character of string that has the eighth
bit set.  Useful for discovering Unicode strings or other non-ASCII
characters.

*** News 08/22/2007 -- New xform, $xform(ICONV "from/to" text)

If your binary is built with iconv support (re-run configure before you
tell me it doesn't work!) then you will be able to use iconv() to
translate strings from one character encoding to another.  This might be
useful to experimentally convert to and from utf8 while you wait for the
unicode-enabled input line to be written.

XXX Help files updated to here XXX

*** News 08/22/2007 -- Checks for iconv in configure: --with-iconv

Configure will now check for libiconv support.  Normally it will look
for libiconv.a and iconv.h in the --prefix/lib and --prefix/include
directory, or in /usr/local or /opt or /usr/opt.
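What $is8bit() does can be sketched in a few lines.  The index return
convention here is an assumption for illustration; the point is simply
"find the first byte with the eighth bit set":

```python
def is8bit(string):
    """Sketch of $is8bit(): return the index of the first byte with
    the eighth bit set, or -1 if the string is pure 7-bit ASCII."""
    for i, byte in enumerate(string.encode("utf-8")):
        if byte > 0x7f:
            return i
    return -1

print(is8bit("plain ascii"))  # -1
print(is8bit("caf\u00e9"))    # 3 (the accented e encodes as 8-bit bytes)
```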
If your iconv support is not in any of these places, then you must
supply the --with-iconv=/path/to/dir where that directory contains
"include/iconv.h" and "lib/libiconv.a"

*** News 08/22/2007 -- Checks for alt place in configure: --with-localdir

A lot of times you'll have software installed but it won't be in the
places the compiler looks for it.  Usually this is /usr/local or /opt or
something.  You can use this flag to tell configure to look in this
directory in addition to the default directories.  This can aid in
ensuring optional dependencies don't get turned off because they're in a
directory the compiler doesn't look in.

*** News 08/13/2007 -- New /SETs, /SET DEFAULT_REALNAME and DEFAULT_USERNAME

Per a discussion on the list, it was decided to introduce two new sets,
/SET DEFAULT_REALNAME and /SET DEFAULT_USERNAME.  These control what
realname and username should be sent to the server each time you
connect.  Although I can't stop you, you shouldn't unset these variables
if you know what is good for you.

/SET DEFAULT_REALNAME replaces /SET REALNAME, which confused people who
thought that /setting it changed their realname immediately

/SET DEFAULT_USERNAME replaces /ircuser which confused people because
they thought it was going to be a /set.

*** News 07/20/2007 -- New /SET, /SET LASTLOG_REWRITE

This provides the default value to /lastlog -rewrite whenever you don't
use the /lastlog -rewrite flag.

If you always want the /lastlog command to timestamp each line, try
this:

    /set lastlog_rewrite $strftime($1 %T) $8-

(Remember, if you use this in a pf loaded script to double up $'s...
    /set lastlog_rewrite $$strftime($1 %T) $$8-
)

*** News 07/20/2007 -- New flag to /LASTLOG, /LASTLOG -REWRITE

The /LASTLOG -REWRITE flag rewrites each lastlog line with the following
values for $*

    $0  - The lastlog item's unique refnum
    $1  - Timestamp (suitable for use with $strftime())
    $2  - Window refnum
    $3  - Output level
    $4  - Reserved for future use
    $5  - Reserved for future use
    $6  - Reserved for future use
    $7  - Output target
    $8- - The logical line of output

Not all of these fields are intended to be useful -- but I don't know
what sort of imaginative things people might come up with.

Example: To put a timestamp before every line...

    /lastlog -rewrite "$strftime($1 %T) $8-"

*** News 07/04/2007 -- New $windowctl() option, $windowctl(GET refnum CHANNELS)

You can now fetch all of the channels in a window by using
$windowctl(GET <refnum> CHANNELS).  There is no defined order to the
channels returned.

*** News 07/02/2007 -- Clarification of single-indirection implied hooks

Earlier, I said...

    You may surround <string> with {}s if you wish, to avoid quoting
    hell.  Make sure to keep your {}s matched up if you do so.  See the
    "loadformats" info above for how to practically use this.

But due to a mistake, this never worked correctly.  This is now fixed.

If you compare the normal two-expansion version:

    @ var = [format_send_public]
    @ fmt = '<%W$N%n> $1-'
    addset $var int
    @ hookctl(set list send_public implied \\$var);
    set $var $fmt

which ties /on send_public dynamically to the value of /set
format_send_public.  If you wanted to do it directly, and not tie the
implied hook to a variable, you can surround the format in curly braces,
like so:

    @ hookctl(set list send_public implied {<%W$N%n> $1-})

Remember that curly braces protect the insides from $-expansion, so for
all purposes the inside of {}s is a literal string that is not expanded
except each time the /on is thrown.
*** News 07/02/2007 -- Clarification on using $'s in expression parser

The old expression parser used to allow you to use expandos as lvalues
in order to indirectly assign to variables, for example:

    alias inc { @ $0 += 1 }

But this is not supported in the new math parser.  Instead you have to
use the deref operator, like so:

    alias inc { @ *0 += 1 }

which does the same thing.

*** News 06/25/2007 -- New flag to /XECHO, /XECHO -AS

The /XECHO -AS flag will output a message to all windows on the current
server.  You can combine this with the -S flag if you want to output to
another server.  This might be good for blasting a message when you're
disconnected from the server.  For another example, see below.

*** News 06/25/2007 -- New window level, SYSERR

The SYSERR window level will now be used for all of those layered
"INFO --" system errors ("syserrs").  The client will make its best
effort to ensure that these messages go to the correct server's windows,
and that /on yell is hooked in the correct server context.

You can combine this with /xecho -as [see above]...

    on ^yell "* INFO -- *" {xecho -as $*}

EPIC5-0.3.4
*** News 06/02/2007 -- New transformer: $xform(ALL)

The $xform(ALL) transform ignores the text and returns a list of all
supported transformers.  If the user didn't compile SSL support, then
you won't be able to use the strong crypto transforms, so this is a
great way to check before trying to use crypto.

Since this is a regular old transform, you can further transform it any
way you want to (if you want to).

*** News 06/01/2007 -- More $xform()s added to the list below

Please re-read the list immediately below, it's been updated!

*** News 06/01/2007 -- Totally rewritten $xform(), now actually useful!

Here's the plan -- we're going to do this over again a second time.

    $xform("<transformations>" "meta" "meta" text)

Where the <transformation>s are supported by transform_string().
At the time I write this, they are:

    Reversible encodings that convert between binary and printable data
    and do not require a meta:
        URL     URL encoding
        ENC     Base16 encoding
        B64     Base64 encoding
        CTCP    CTCP encoding
        NONE    Straight copy -- data not changed

    Reversible encryptions that require a meta value (a password):
        SED     Simple Encrypt Data
        BF      Blowfish-CBC
        CAST    CAST5-CBC
        AES     AES256-CBC
        AESSHA  AES256-CBC with SHA256 digest of the meta
        DEF     Default encryption (NOT IMPLEMENTED YET!)

    Irreversible digest operations that do not require a meta:
        SHA256  SHA256 message digest.

The transformations argument is a dword (must be surrounded by double
quotes if it contains a space, which it will if you do multiple
transformations).  The meta values are dwords (must be surrounded by
double quotes if they contain a space).  These two things make this
function behave differently than functions normally do, so this is a
documented deviancy!

Examples:

    URL-encode a string      $xform(+URL this is a string)
    URL-decode a string      $xform(-URL this%20is%20a%20string)
    SED-cipher a string      $xform(+SED password this is a string)
    SED-decipher a string    $xform(-sed password <whatever>)

More practical examples:

1) Read binary data from a file, encrypt it, and url encode it again.

    @fd = open(file.txt R)
    @data = read($fd 1024)
    @cipher = xform("-CTCP +SED +URL" password $data)
    @close($fd)
    msg someone $cipher

   Why does this work?
    -- $read() returns ctcp-enquoted data, so -CTCP removes it
    -- Now we have binary data, so +SED will cipher it
    -- Now we have ciphertext, so +URL will url encode it.

   We can send this to someone else, and they can put it in $cipher...

    @newfd = open(newfile.txt W)
    @newdata = xform("-URL -SED +CTCP" password $cipher)
    @writeb($newfd $newdata)
    @close($newfd)

   We did the reverse of the above:
    -- We -URL to recover the binary data
    -- We -SED to decrypt it using the password
    -- We +CTCP to put it in a form we can use with $writeb().

Voilà!
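The chained-transform idea in the practical example above can be modeled
in Python with any pair of reversible encoders.  B64 and URL stand in
here; epic's SED and CTCP transforms are not reproduced -- the point is
only that applying the negated chain in reverse order recovers the
original data:

```python
import base64
import urllib.parse

# Each transform is an (encode, decode) pair over bytes;
# "+NAME" applies encode and "-NAME" applies decode.
TRANSFORMS = {
    "B64": (lambda b: base64.b64encode(b),
            lambda b: base64.b64decode(b)),
    "URL": (lambda b: urllib.parse.quote_from_bytes(b).encode(),
            lambda b: urllib.parse.unquote_to_bytes(b)),
}

def xform(chain, data):
    for step in chain.split():
        sign, name = step[0], step[1:]
        enc, dec = TRANSFORMS[name]
        data = enc(data) if sign == "+" else dec(data)
    return data

cipher = xform("+B64 +URL", b"this is a string")
plain = xform("-URL -B64", cipher)   # reversed, negated chain
print(plain)  # b'this is a string'
```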
*** News 05/31/2007 -- New math operator, Unary ** operator

The ** unary prefix operator treats its operand as an unexpanded string,
permitting it to be expanded again.  This should remove most any need to
use /eval.

Example:

    @ foo = 'this is a string with $var in it'
    @ var = 'one'
    @ val1 = **foo
    @ var = 'two'
    @ val2 = **foo

So $val1 is "this is a string with one in it" and $val2 is "this is a
string with two in it".

You can apply this operator to any rvalue (natch):

    @ foo = **[booya $foo booya!]
  becomes
    @ foo = **[booya this is a string with $var in it booya!]
  becomes
    @ foo = [booya this is a string with two in it booya!]

Yeah!

*** News 05/17/2007 -- New script, 'dcc_ports'

This script adds two new sets:

    /set dcc_port_min <number>
and
    /set dcc_port_max <number>

which create a port range which will be used by your dcc's.  Each number
in the range will be chosen sequentially (this will be enhanced further
in the future) and any time you change either of these values you reset
the sequence and it goes back to the minimum port again.  When the port
range is exhausted, it will cycle back again to the minimum port.

*** News 05/17/2007 -- Default ports for dcc, recovering from ports-in-use

There are two new features, that sort of complement each other, and sort
of overlap each other.  Whether to use one or the other depends on how
your script is set up and what your needs are.

When you use /dcc chat or /dcc send, but do not use the -p flag to
specify a port, you can create a callback to your script to create a
default port:

    @dccctl(DEFAULT_PORT <string>)

The <string> will be expanded with $* being the dcc refnum of the dcc
needing a port number.  The expanded value of <string> will be used to
set the WANT_PORT value (see below).

If a port cannot be used because it is already in use by someone else,
this value will be expanded repeatedly until it generates a port we can
use.
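Python's string.Template gives a close parallel to the ** operator's
"store a string unexpanded, expand it later" behavior (a parallel for
illustration only -- epic's expansion rules are its own):

```python
from string import Template

# The string is stored unexpanded, like 'this is a string with $var
# in it' above, and expanded against whatever the variable holds
# at the moment of each expansion.
fmt = Template("this is a string with $var in it")

val1 = fmt.substitute(var="one")
val2 = fmt.substitute(var="two")

print(val1)  # this is a string with one in it
print(val2)  # this is a string with two in it
```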
Whenever you use /dcc chat or /dcc send and use -p to bind a port, and that port is already in use, then /on dcc_lost will be thrown $0 The person you're sending it to $1 The dcc type (ie, send or chat) $2 The filename (url encoded) $3 The refnum of the dcc $4 The port that we tried to use that is in use. $5- The literal string "PORT IN USE" It is expected that you will catch this hook and then do @dccctl(SET $4 WANT_PORT <new port>) and then the client will try the new port again. If a port cannot be used because it is already in use by someone else, this hook will be thrown repeatedly until you set a port we can use, or you don't change the port In either case, if a port is in use, and neither the DEFAULT_PORT nor /ON DCC_LOST results in the WANT_PORT value being changed, then the dcc will be considered a failure, and it will be abandoned, just as it has always been until this change. *** News 05/15/2007 -- New dccctl() value, "WANT_PORT" You can now get and set the "WANT_PORT" value for a dcc, which is the value that the -p flag sets. You can therefore use this to change the -p flag for the user. Because this value is only used at the time the client tries to bind() an inbound address, changing this value after the dcc has already been offered will have no effect: it is too late at that point to change the port. *** News 05/15/2007 -- Normalization of the /SAY and /SEND commands The old behavior: - /SEND (the command used when you type text) sends a message to 1) Current Query or 2) Current Channel or 3) Nowhere - /SAY (the "empty" command to overrule a query) sends a message to 1) Current Channel or 2) Nowhere The new behavior: - /SEND sends a message to 1) Current Query or 2) Current Channel or 3) Nowhere - /SAY sends a message to 1) Current Channel or 2) Current Query <--- this is the change here or 3) Nowhere So /SAY will now send a message to the current query if you are not on a channel, rather than silently failing. 
Until this change, there was no way to do this at all.

*** News 05/13/2007 -- ***IMPORTANT*** Removal of /SET REVERSE_STATUS_LINE

*** IMPORTANT *** IMPORTANT *** IMPORTANT ***
The /set reverse_status_line variable has been removed.  This means that
your /set status_format* variables, and your /window status_format
variables, will not be auto-prepended with ^Vs like they have always
been since time immemorial.

You ***MUST*** prepend your status_formats with ^Vs (control-Vs, ie,
the reverse character) if you want them to appear in reverse.

EPIC5-0.3.3

*** News 04/11/2007 -- New argument to /xecho, -TARGET

Usually if you wanted to output to a channel's window, you would do
        /xecho -w $winchan(#channel) ....
but someone asked (nicely) for a -target flag to /xecho because they
thought that looked nicer.  So now you can do
        /xecho -t #channel ...

*** News 04/11/2007 -- New serverctl operation "ALLGROUPS"

The $serverctl(ALLGROUPS) operation will return a unique list of all of
the group names used in all your servers.  Do not pass any arguments to
this operation, since that's reserved for future expansion.

*** News 04/11/2007 -- New gettable serverctl value, "FULLDESC"

This value returns the fully qualified server description for a server,
suitable for writing into a servers file, or for passing to /server -add
or any old place.  You can only get this value; you cannot set it,
although that's planned for the future.

*** News 01/27/2007 -- New serverctl value, "AUTOCLOSE"

Normally a server is automatically closed when the last window to it is
disconnected (or killed).  Many people hate this behavior and would like
a server's connection to persist even if there are no windows connected
to it.  You can now control this behavior:
        @serverctl(SET refnum AUTOCLOSE 0)
The default is 1 (natch).  The behavior of turning this off is not well
understood.  Use with caution, and tell me of any troubles.
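For example, a sketch of keeping a connection alive with no windows on
it (the server refnum 0 here is hypothetical):
        @ serverctl(SET 0 AUTOCLOSE 0)
        window kill
        (the connection to server 0 stays up with no windows on it)
        window new server 0
        (re-attach a window to it later)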
*** News 12/09/2006 -- New configure flag, --with-threaded-stdout

EJB has graciously provided us with a patch to do threaded output.
This code will hopefully work around problems with gnu screen blocking,
causing you to ping out of your servers.  You can turn this code on by
using the '--with-threaded-stdout' flag to configure.  This code is sort
of experimental at this time, but it is an officially supported feature,
so please do report any problems you have with it.

If you turn this option on, then $info(o) will include 'o'.

*** News 11/17/2006 -- New flag to /userhost, /userhost -extra

The /USERHOST -EXTRA flag causes the present value of a variable to be
appended to the $* value of the results of a userhost query.

Remember that /on 303 and /userhost -cmd are hooked with:
        $0      Nickname
        $1      + if oper, - if not oper
        $2      + if away, - if not away
        $3      Username
        $4      Hostname
and now,
        $5-     Contents of the -extra variable

If you do not hook /on 303 or use /userhost -cmd, the contents of the
-extra variable are appended to the output.

Usage:
        @ foo = [extra stuff]
        USERHOST hop -extra foo -cmd {xecho -b $0!$3@$4 ($5-)}
might output
        *** hop!jnelson@epicsol.org (extra stuff)

Remember that the argument to the -extra flag is *an lvalue* (the name
of a variable) and is not literal text!  The lval is expanded (yielding
an rval) and *that* value is saved, and used later on.  You can freely
change the value of the variable and that will not affect anything.

Example:
        @ foo = [extra stuff]
        USERHOST hop -extra foo -cmd {xecho -b $0!$3@$4 ($5-)}
        @ foo = [blah blah]
still will output:
        *** hop!jnelson@epicsol.org (extra stuff)
because the value is saved at the time you do /userhost.
EPIC5-0.3.2

*** News 11/04/2006 -- $shift() and $unshift() changed like $push() and $pop()

In the same way that $push() now only takes two arguments, $unshift()
now takes only two arguments, an lval and an unquoted dword:
        @ foo = [one two]
        @ unshift(foo three four)
($foo is "three four" one two)

In the same way that $pop() now only takes one argument, $shift() now
only takes one argument, an lval:
        @ foo = [one two three]
        @ booya = shift(foo)
($booya is one and $foo is two three)

*** News 11/04/2006 -- The /xdebug command now takes a block argument

You can temporarily change the /xdebug value for a block of statements,
and have the original value restored, all in one operation:
        xdebug dword { @push(bar "one two") }
        @ foo1 = leftw(1 $bar)
($foo1 is "one)
        xdebug dword { @ foo2 = leftw(1 $bar) }
($foo2 is "one two")

*** News 11/04/2006 -- New built-in function, $curcmd() (nullie)

With this one you know what command is currently executing.  For
instance, you might want to not timestamp lines generated by /lastlog,
to avoid having doubled timestamps, or to prepend them with something
other than timestamps.

Example:
        /set output_rewrite ${ curcmd() == [lastlog] ? [Lastlog:] : Z} $1-

*** News 11/4/2006 -- Changes to how $push() works with dwords

Up until this time, the $push() function treated its arguments as a
list of words, rather than as a single word that needed to be added.
This caused confusion because people didn't have a way to add a word
atomically.  For example:
        xdebug dword
        assign foo one two
        @push(foo three four)
        echo $foo
returns
        one two "three four"

So from now on, $push() takes two arguments, an lvalue and a single
*unquoted* dword.
If the dword contains spaces, then it will be quoted before being added
to the variable (as above).

*** News 11/4/2006 -- Changes to how $pop() works (along with dwords)

Up until this time, the $pop() function has had two conflicting
behaviors that ambiguously overlapped:
        $pop(word word word word)
        $pop(lval)
But what happens if you have a word list with only one word?  For this
reason, $pop() wasn't really useful, and people avoided using it.

So I've removed the ambiguity by removing the first case, which isn't
used.  If you want to pop off a word list, use $rightw(1 <word list>),
which has the proper behavior.

Further, to differentiate $rightw() from $pop(), the $pop() function
returns a *dequoted string*, as in this example:
        assign foo one two "three four"
        xdebug dword
        echo $rightw(1 $foo)
returns
        "three four"
        echo $pop(foo)
returns
        three four

*** News 10/25/2006 -- New field for server descriptions, 'vhost'

You may now specify per-server virtual hostnames ("vhosts") by using
the "vhost" field in a server description.

Example:
        /server irc.foo.com:vhost=my.other.hostname.com

Obviously the value you set for vhost must be a value that you could
use for /hostname; that is, a hostname or p-addr that resolves to an
address your machine will let you bind().  Specifying a vhost for the
wrong protocol will malfunction.
        (For example:  /server irc.foo.com:pr=4:vhost=[ff::ff] )

*** News 09/24/2006 -- New /ON, "WINDOW_SERVER" (nullie)

This new hook will trigger every time a window's server is changed.
This might be useful for purging window-related data such as scripted
window-to-channel associations.

The params are:
        $0 - affected window's refnum
        $1 - old server
        $2 - new server

*** News 09/19/2006 -- New /ON, "UNKNOWN_COMMAND" (nullie)

This hook is thrown whenever an unknown command is entered on the
command line or specified in a script.  This way, you can complete a
partial command name with an ambiguous hook.
The params are:
        $0 - the unknown command
        $1 - parameters, if any

*** News 09/19/2006 -- New /ON, "CHANNEL_LOST" (nullie)

This hook is thrown whenever the client leaves a channel for any
reason.  It is meant as a generalized way of clearing channel-related
structures in scripts, instead of having to hook several other ons
(KICK, PART and SERVER_LOST).

The parameters are:
        $0 - server refnum where a channel is being destroyed
        $1 - channel name

Keep one thing in mind, though: this hook isn't necessarily thrown in
the current server's context.  Instead of $servernick(), you need to
use $servernick($0).

*** News 09/18/2006 -- New keybinding, "RESET_LINE"

The "RESET_LINE" keybinding is intended to be used by tabkey and
history recall scripts that need to be able to replace the contents of
the input line without affecting the cut buffer.

The RESET_LINE keybinding takes an argument (naturally) that is the new
value of the input line.  If you don't provide an argument then the
input line is simply cleared.

Example:
        /alias oofda {parsekey reset_line this is your new input line}
        /bind ^I parse_command {oofda}
        (type something and then hit tab)

*** News 09/18/2006 -- New field in server description, "proto"

A new (seventh) field has been added to server descriptions.  This
field is named "protocol" (see below) and can be abbreviated as "pr".
The field restricts what socket protocols you want epic to use for this
server.  Possible values are:
        tcp4 or 4       IPv4 only
        tcp6 or 6       IPv6 only
        tcp or any      IPv4 or IPv6, doesn't matter
Naturally, "any" is the default.

*** News 09/16/2006 -- Enhancements to server descriptions

You are permitted to skip fields in the server description.
Each field has a name that you refer to it by:
        host  port  pass  nick  group  type

You can change a field by "assignment", like so:
        /server irc.foo.com:group=efnet
or
        /server irc.foo.com:7005:type=irc-ssl

The server description actually skips to the named field, so you don't
have to specify the fields in order; you can skip around however it
suits you.

If you skip to a field and then don't specify the name of the following
field, whatever field would naturally follow is assumed:
        /server irc.foo.com:7005:group=efnet:irc-ssl
In this case, you skipped to the 'group' field, and the field that
follows that is 'type', so that is what 'irc-ssl' is assumed to be.

This can be used any place server descriptions are taken, from /server,
to /window server, to ircII.server, to the command line: everywhere.

*** News 09/14/2006 -- WINDOW command will fail if given an invalid window refnum

Consider the behavior of
        /WINDOW foo KILL
in the two cases where the window 'foo' does and does not exist.

Up until now, if 'foo' existed, it would be killed, but if 'foo' did
not exist, this was not an error, and the current window was killed.

Ignoring window refnums that don't exist was intended as a courtesy,
but it is actually a hassle, since most people assume that any further
commands will operate only on that window, or not at all.  So this
behavior has been changed.  In the above case, if 'foo' does not exist,
then the /window command simply fails at that spot and all further
actions in that statement are ignored.

*** News 09/14/2006 -- Some functions no longer support double quoted words

The work was done by nullie.
The following functions have previously supported double quoted words,
but due to popular sentiment, this support is being removed to make the
functions useful with untrusted data (ie, irc stuff):
        match rmatch userhost word remw insertw chngw pattern filter
        rpattern pop findw findws splitw diff sort numsort uniq remws
        getsets getcommands getfunctions prefix rfilter copattern
        corpattern cofilter corfilter

The following functions didn't support dwords unless you did /xdebug
extractw.  Nothing has changed for these functions:
        leftw rightw midw notw restw insertw chngw beforew tow afterw
        fromw splice

The /FE and /FOR I IN (list) commands previously supported double
quoted words, but now they will not.

You can revert to the epic4 behavior (for now) with /xdebug dword.

*** News 08/31/2006 -- New serverctl value, $serverctl(GET refnum ADDRSLEFT)

This returns the number of addresses left from the last dns lookup for
the server "refnum".  It's intended to be used like this:
        on server_status "% % CLOSED" {
                if (serverctl(GET $0 ADDRSLEFT)) {
                        # epic will be reconnecting...
                }
        }
This will help nullie write his reconnect script so it doesn't trap
until epic is finished with what it wants to do.

*** News 08/17/2006 -- Add support for OPERWALL, /ON OPERWALL, OPERWALL level

OPERWALL is an efnet server command which servers send amongst
themselves.  The local server very graciously sends it to you wrapped
in a WALLOPS (big ups to whoever was considerate enough to make this
decision; you made it much easier to support!), so we peer at WALLOPS
and route "OPERWALL - " messages to /ON OPERWALL and a new OPERWALL
window level.  Now black will stop asking me to add this.

This is all opt-in.  If you don't hook /on operwall, then they continue
going to /on wallops and WALLOPS as they have always done.

*** News 08/17/2006 -- Two new server states, CREATED and DELETED

When a server is added to the server list, its initial state is
"CREATED".  Once it is initialized, it is switched to the "RECONNECT"
state.
So you can trap new servers with:
        /on server_status "% CREATED %" {xecho -b New server is $0}

Servers that are deleted are switched to the DELETED state:
        /on server_status "% % DELETED" {xecho -b Server $0 going away}

No matter what you do, you cannot stop the deletion of a server.  It is
recommended you /defer any changes you make to the server list from
within an /on server_status that traps a server deletion.

EPIC5-0.3.1

*** News 07/01/2006 -- Revamped /encrypt command

The /encrypt command, which is a legacy ircII command, has been
revamped and a lot of functionality has been added (see below).

The new format looks like this:
   /encrypt                            See the cipher list

*** Ways to specify the other person (TARGET is a NICK or CHANNEL)
   /encrypt TARGET key                 Trade messages with 'nick' on all
                                       servers using 'key'.
   /encrypt 0/TARGET key               Trade messages with 'nick' only on
                                       server 0 using 'key'.
   /encrypt SERVERNAME/TARGET key      Trade messages with 'nick' on server
                                       'name' using 'key'.  'name' can be
                                       "ourname" or "itsname".
   /encrypt SERVERGROUP/TARGET key     Trade messages with 'nick' on group
                                       'group' using 'key'.
   /encrypt ALTNAME/TARGET key         Trade messages with 'nick' on servers
                                       with 'altname' using 'key'.

*** Ways to specify the cipher you want to use
   /encrypt nick key                   Trade SED messages (default)
   /encrypt nick key /path/to/program  Trade messages ciphered by an
                                       external crypto script
   /encrypt -SED nick key              Trade SED messages
   /encrypt -SEDSHA nick key           Trade SED messages, using the SHA256
                                       hash of 'key'.
   /encrypt -CAST nick key             Trade CAST5-CBC messages
   /encrypt -BLOWFISH nick key         Trade BLOWFISH-CBC messages
   /encrypt -AES nick key              Trade AES256-CBC messages
   /encrypt -AESSHA nick key           Trade AES256-CBC messages, using the
                                       SHA256 hash of 'key'.

Compatibility and availability of all these is discussed in the
following news bulletin.  AESSHA is the best cipher, but if it's not
available, then SEDSHA is better than SED, but SED is universally
compatible.
If you must use SED, use a very long key string (40 chars).

For backwards compatibility, messages ciphered by external crypto
scripts are always sent as SED messages, even though they are not,
technically, ciphered with SED.

*** News 06/26/2006 -- CAST5, Blowfish, AES, and AES-SHA encryption support

This only took me 8 years to catch up with ircII!  Yah!

All of this support is provided by the EVP api in OpenSSL.  If you
don't have OpenSSL, this won't be supported.

The /ENCRYPT command now takes an argument after the nickname, which
can be either -CAST, -BLOWFISH, -AES, or -AESSHA:
        /ENCRYPT nick -CAST key
        /ENCRYPT nick -BLOWFISH key
        /ENCRYPT nick -AES key
        /ENCRYPT nick -AESSHA key

The -CAST support is fully compatible with ircII (as of the last time I
tested it), and -BLOWFISH isn't compatible with anything but your
fellow bleeding edgers (since nobody else supports that yet).  Same
thing with -AES (only supported by EPIC for now).

The Blowfish support is *NOT* compatible with FiSH, because FiSH has a
couple of idiosyncrasies (non-standard Base64, support for keys longer
than what openssl supports).  It would probably be more likely that
FiSH needs bag-on-the-side support than to be mainlined into the crypto
support...

AES uses a fixed 256 bit key.  This is 32 characters.  If your key is
not 32 characters, it is padded out with nuls (ascii 0).  If your key
is more than 32 characters, it is truncated.

AESSHA runs your key through SHA2 to generate a 256 bit key for AES.
In every other way, it works the same as AES.

SEDSHA runs your key through SHA2 to generate a 256 bit key for SED.
In every other way, it works the same as SED.  Unlike the others, this
does not depend on openssl and is always available.

Please remember, when you /ENCRYPT a target, everything you send to
that target is encoded.  This includes /CTCP requests (/DCC offers).
Because ircII does not do CTCP-over-CTCP, you won't be able to send
/DCC SEND or /DCC CHAT offers to ircII users, but it will work with
EPIC users.
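A quick end-to-end sketch (the nickname 'hop' and the key here are
hypothetical, and the other person must set up the same cipher and key
on their end):
        /encrypt -AESSHA hop some long passphrase
        /msg hop this text is enciphered on the wire
Once the other side has done the matching /encrypt, messages between
the two of you are enciphered on the way out and deciphered on arrival.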
*** News 06/24/2006 -- New ON, /ON NUMERIC

The /ON NUMERIC hook is thrown for all numerics that you do not catch
with a more specific /ON <number>.  For example:
        /ON 318 * {echo Numeric 318 -- $*}
        /ON NUMERIC * {echo Not Numeric 318 -- $*}
This can be used to unilaterally replace the default format for
numerics that you don't otherwise catch, without having to actually
catch all of them!

The expandos are:
        $0      The numeric being thrown
        $1      The server who sent the message
        $2-     The message

*** News 06/24/2006 -- New script, 'reconnect' from nullie

This script causes the client to reconnect to disconnected servers
after a certain period (/SET reconnect_time, measured in minutes),
cycling between the servers in a given servergroup.  To register a
servergroup for its use, use "/NETWORK ADD group".  In conjunction with
the autojoin script, this allows the user to automatically rejoin the
channels he was on before a disconnect.

*** News 06/24/2006 -- New script, 'renumber' from nullie

This script adds an alias (/RENUMBER) that fills "gaps" in window
numbers.  If /SET renumber_auto is set (defaults to OFF), this will be
performed automatically after a window is closed.

*** News 06/24/2006 -- New script, 'floodprot' from nullie

This script buffers all outgoing data sent by the client before
transmitting it to the remote server, to avoid "Excess Flood"
disconnections, e.g. when pasting a lot of text or sending a lot of
/MODE commands.  /SET floodprot_burst decides how many lines can be
sent immediately after an idle period.

*** News 06/24/2006 -- New script, 'autoquery' from nullie

This script creates new windows for incoming private messages, one per
sender.  /SET autocreate_split_queries (defaults to OFF) decides
whether the windows will be split (ON) or hidden (OFF).

*** News 06/24/2006 -- New script, 'autojoin' from nullie

This script maintains a list of channels to join upon connecting to a
server.
For user-defined channels that are joined after every connect, create
an ~/.irc/channels file with one line for each channel to join, in the
specified format:
        <channel> <winnum> <servergroup> [key]
For instance:
        #epic 2 efnet
        #secretchan 3 somenet qwerty

If the reconnect script is loaded and the given server's group is added
via /network add, the script will automatically rejoin the channels the
user was on before a disconnect, remembering the keys.

*** News 06/24/2006 -- SEND_LINE does SCROLL_FORWARD in HOLD_MODE

For those who use HOLD_MODE: if you accidentally scroll back, you might
find yourself trapped in a place where hitting <enter> (the SEND_LINE
keybinding) does not get you back to current.  To avoid this
unfortunate situation, if you are in HOLD_MODE, then the SEND_LINE
keybinding (the <enter> key) will now always advance you forward a
page, even if you are in scrollback mode.

*** News 06/24/2006 -- Support for MAILDIR maildrops, /SET MAIL_TYPE

If you /SET MAIL_TYPE MAILDIR, epic will check your maildir maildrop
for email instead of your mbox maildrop.  You can /SET MAIL_TYPE MBOX
to switch back to mbox.

You must set the MAILDIR (or MAIL) environment variable to point to
your maildrop.  For MAILDIR, your maildrop is the directory that
contains the 'new' and 'cur' subdirectories.  EPIC will check your
'new' directory for new emails.

This support works the same way for maildir as it does for mbox:
        /on mail (maildir)              /on mail (mbox)
        $0 -- New emails in 'new'       $0 -- New emails in 'mbox'
        $1 -- Total emails in 'new'     $1 -- Total emails in 'mbox'

The maildir support does not look in 'cur' or anywhere else.

*** News 06/24/2006 -- Nonblocking SSL negotiation

Connections to ssl-enabled servers have an SSL protocol negotiation
stage between the nonblocking connect() and irc protocol negotiation.
Traditionally, this SSL negotiation has been blocking, but it can take
a long time, during which the client is locked up.  So now SSL
negotiations are nonblocking.  Yay.
Previously, the server state went from CONNECTING to REGISTERING.  But
if it has to take a detour for nonblocking ssl negotiation, it will go
from CONNECTING to SSL_CONNECTING and then to REGISTERING.

*** News 06/24/2006 -- New script, 'nickcomp' from Blackjac

*** News 06/06/2006 -- Much improved configure support for perl/tcl/ruby

*** VERY IMPORTANT FOR PACKAGE MAINTAINERS ***

The code in the 'configure' script that decides how to integrate perl,
tcl, and ruby has been rewritten.  For the most part, it should
automagically work.  Here are the details:

PERL:
        --with-perl
        --with-perl=yes
        --with-perl=/usr/local/bin/perl
        --with-perl=no
        --without-perl

By default, unless you specify --without-perl (or --with-perl=no),
configure will look for the 'perl' program in your path (or use the
perl you specify) and will interrogate the extmod package for details
on how to compile successfully against perl.  If the default 'perl'
binary is fine, you need do nothing at all.  Otherwise, all you need to
know is which 'perl' binary you want to use.

RUBY:
        --with-ruby
        --with-ruby=yes
        --with-ruby=/usr/local/bin/ruby
        --with-ruby=no
        --without-ruby

Same deal as with perl.  Configure will use the 'ruby' program (or the
one you specify) to get all the details necessary to get support for
ruby.  For most people, you need do nothing at all.

TCL:
        --with-tcl
        --with-tcl=yes
        --with-tcl=/usr/local/lib/tclConfig.sh
        --with-tcl=no
        --without-tcl

TCL is a little different.  It looks for a 'tclConfig.sh' file.  These
are tried by default (in this order):
        /usr/lib/tclConfig.sh
        /usr/local/lib/tclConfig.sh
        /usr/local/lib/tcl8.4/tclConfig.sh
If your tclConfig.sh is not in one of these places, or if you want to
use a specific tcl version, you must specify the path to the
tclConfig.sh file with --with-tcl.

Note to package maintainers:  Please throw away all of the hacks and
patches you have in place to support the old nasty way of setting these
flags.  I tried to make this as painless as possible.
If the user has ruby/perl installed, then ruby/perl support will be
included.  The hard part is tcl, since every system puts its
tclConfig.sh in an odd place.

*** News 06/06/2006 -- /SET INPUT_ALIASES and /SET CMDCHARS deprecated

Due to recent optimizations in the epic command parser, these two sets
no longer have any function (although they have not yet been removed).
If you desperately need the use of these /set's, please let me know.
Otherwise, someday they will vanish into the ether.

*** News 06/06/2006 -- Support for RUBY: $ruby() and /ruby

Support for ruby (1.6.4 and 1.8.4 are tested) has been added.  The
support is similar to perl and tcl.  You can call out to ruby using
either the $ruby() function or the /ruby command.

The following callbacks to epic are available from within ruby:
        EPIC.echo(stringval)    Same as /echo
        EPIC.say(stringval)     Same as /xecho -b
        EPIC.cmd(stringval)     Run "stringval" without $-expansion
        EPIC.eval(stringval)    Run "stringval" with $-expansion
        EPIC.expr(stringval)    Return value of epic expression
        EPIC.call(stringval)    Return value of epic function call
All of these functions take one argument (a string) and return one
value (also a string).

Some examples:

To export an epic variable to ruby (as a String object):
        @ ruby(rubyvar='$epicvar')
or
        ruby {rubyvar = EPIC.expr("epicvar")}

To export a ruby variable to epic:
        @ epicvar=ruby(rubyvar)
or
        ruby {EPIC.expr("epicvar=[#{rubyvar}]")}

To iterate over each item in a ruby collection, passing each value as
the argument to an epic command:
        ruby { something.each {|x| EPIC.cmd("epiccmd #{x}") } }
(Think of using this to do database access.)

The EPIC.cmd and EPIC.eval commands do not start off a new atomic scope
within epic.
Consequently, you can read (and write) local variables that are
otherwise in scope:
        alias booya {
                @ :epiclocal = 5
                ruby {
                        rubyvar = EPIC.expr("epiclocal").to_i
                        rubyvar = rubyvar + 1
                        EPIC.expr("epiclocal = '#{rubyvar}'")
                }
                echo $epiclocal
        }

It is *IMPORTANT TO REMEMBER* that because EPIC is untyped, everything
returned to ruby is a String object.  If you have a number in EPIC, it
will be a String in ruby, and you need to explicitly convert it to an
integer (as in the above example).

It is *ALSO VERY IMPORTANT TO REMEMBER* that if you use the /ruby
command in a script that is loaded with the standard (legacy) loader,
the loader will insert semicolons into your ruby code in the places
where they would belong if it were ircII code.  Needless to say, THIS
MAY OR MAY NOT ALWAYS BE CORRECT.  The best solution is to get with the
program and switch over to the pf loader, where this is entirely under
your control.

*** News 06/06/2006 -- New commands: /PERL and /TCL

Support for perl and tcl has been via the $perl() and $tcl() functions.
Regrettably, calling a function subjects the argument list to quoting
hell, which can be particularly painful for perl.  To remedy this
somewhat, you can now call a block of perl or tcl code with these new
commands.  The block of code must be surrounded by curly braces, which
protect the inside from quoting hell.

        Usage: /perl { <perl code goes here> };
        Usage: /tcl { <tcl code goes here> };

See the above warnings (for ruby) about using these commands in a
script that is loaded by the standard loader.  Semicolons may be a
problem.  Use the PF loader instead.

*** News 06/06/2006 -- Removal of /set highlight_char

The long-threatened removal of /set highlight_char has occurred.
Please use the 'highlight' script as its replacement.

*** News 06/06/2006 -- New script, "chanmonitor"

*** News 06/06/2006 -- Extra support for 64 bits, if you have it.

Most built-in functions that take an integer argument will now support
64 bit integer values.
Further, the following functions can output 64 bits, whereas they never
could before:
        fseek numsort strtol tobase stat

EPIC5-0.2.0

EPIC5-0.2.0 was released somewhere in here

EPIC5-0.0.8

*** News 12/09/2005 -- New window verb, /WINDOW INDENT (ON|OFF)

The /window indent value permits you to tweak the /set indent value on
a per-window basis.  However, if you do /set indent, it will overwrite
all of your per-window indent values (I'm

*** News 12/09/2005 -- New built in function, $xform()

*** OBSOLETE *** THIS INFORMATION IS OLD *** OBSOLETE ***
This information is no longer useful.  Do not rely on it.
*** OBSOLETE *** THIS INFORMATION IS OLD *** OBSOLETE ***

The $xform() function does (symmetrical) string transformations.

What is a symmetrical transformation?  It is one in which all of the
bits in the original string are present in the result string, and the
original string can be recreated from the result.  This is distinct
from the idea of hashing, which is closer to being a lossy compression
algorithm.

The general format of the $xform() function is:
        $xform(TYPE DIRECTION KEY text)
whereby TYPE is one of the following:
        URL     Convert non-printable chars to %HH equivalent
                Equivalent to $urlencode() and $urldecode()
        ENC     Base16 encoding, for creating variable names
                Equivalent to $encode() and $decode()
        B64     Base64 encoding, for sending over email/http
                Equivalent to $b64encode() and $b64decode()
        NONE    No changes (just copies the string)
        CTCP    Mangle things the way CTCP ought to be
                No equivalent
and DIRECTION is either "E" (for encoding) or "D" (for decoding), and
KEY is ignored except for SED, where it is the cypher key used in the
SED bit-twiddling, and DEF, where it is the nickname of the person it
should be transformed for.

At the time I write this, SED, CTCP, and DEF are not implemented yet,
but they will be soon.  Watch for more info.

More transformations are expected to be supported in the future
(including real encryption routines).
It will probably be possible to add your own at some point.

*** News 11/28/2005 -- New /ON, /ON KEYBINDING

This on goes off whenever a keybinding is activated:
        $0 - The keybinding that is activated
        $1 - Length of the sequence (future expansion -- always 0 for now)
        $2 - The ascii number of the key that activated it.

Note that $2 only contains the last character in the bound sequence.
In the future, $2- will change to be a word list containing all of the
characters in the sequence.  When this change is made, $1 will be
changed to a positive value.  But for now, $1 is always 0, and $2 is
always just the last character.

If you hook this silently, you will suppress the keybinding!  If you
only want to spy on a keybinding, hook it quietly.

The purpose for which this was requested was to be able to trap
everything bound to SELF_INSERT without having to rebind them.

Examples:

To do something every time capital-A is pressed:
        on -keybinding "SELF_INSERT % 65" {echo You pressed A}

To keep the user from using the SWITCH_CHANNELS keybinding:
        on ^keybinding "SWITCH_CHANNELS % *" #

*** News 10/30/2005 -- New function, $dbmctl() [hash table support]

*** Notice *** This function uses a custom implementation of SDBM.
The file format it generates should be compatible with $perl() but is
not compatible with ndbm or gdbm.

The $dbmctl() function is an interface to the unix DBM API:
        $dbmctl(OPEN type filename)
                Open a DBM file for read and write access.
        $dbmctl(OPEN_READ type filename)
                Open a DBM file for read-only access.
        $dbmctl(CLOSE refnum)
                Close a previously opened DBM file.
        $dbmctl(ADD refnum "key" data)
                Insert a new key/data pair.  Fail if key already exists.
        $dbmctl(CHANGE refnum "key" data)
                If key already exists, change its data.
                If it doesn't exist, add it.
        $dbmctl(DELETE refnum "key")
                Remove a key/data pair.
        $dbmctl(READ refnum "key")
                Return the data for a key.
        $dbmctl(NEXT_KEY refnum start-over)
                Return the next key in the database.
        $dbmctl(ALL_KEYS refnum)
                Return all keys -- could be huge!  could take a long time!
        $dbmctl(ERROR refnum)
                Return the errno for the last error.

"Type" must always be "STD" for now.  Reserved for future expansion.
"Filename" must be a filename that doesn't include the ".db" extension!
        This is a requirement of the DBM api, and not an epic thing.
"Refnum" is the integer value returned by OPEN or OPEN_READ.
"Key" is a hash table key value.
"Data" is a hash table data value.
"Start-over" is 1 if you want to fetch the first key in the table, and
        is 0 if you want to fetch the next key.  You must call this
        with 1 before you call it with 0, according to the API.
        ALL_KEYS does a "start-over", and you need to do another
        "start-over" after using it.

*** News 10/29/2005 -- New built in function, $levelctl()

You may now add new window/lastlog/ignore/flood levels at runtime!
Once you add a level, you can use it in your windows, and you can use
it in /xecho -l.  Since ignore and flood control are hardcoded
features, an added level will show up in their types, but will never
be used there.

The $levelctl() function permits you to add and query levels:
        $levelctl(LEVELS)
                Return a space-separated list of all canonical level
                names.  This does not return any alias names (see
                below).
        $levelctl(ADD new-name)
                Add "new-name" as a new canonical level name.  This
                new name is automagically considered part of ALL.
                There is no way to remove levels (yet).  This returns
                the new level's refnum, which is a small integer value.
                If you try to add a level that already exists, it is
                not added again, but its existing refnum is returned.
        $levelctl(ALIAS old-name new-name)
                Add "new-name" as an alias for "old-name".  An alias
                name is permitted anywhere that level names are
                accepted, but an alias name is never output by the
                client -- it is silently converted into the canonical
                name.  For example, "OTHER" is the canonical level
                name, and "CRAP" is its alias.
                If you do
                        /WINDOW LEVEL CRAP
                it will tell you
                        *** Window level is OTHER
        $levelctl(LOOKUP level-name)
                Returns the refnum for a level-name.  The level-name
                can be either a canonical level name or an alias level
                name.
        $levelctl(LOOKUP refnum)
                Returns the canonical level name for a refnum.
        $levelctl(NORMALIZE level-name(s))
                Given a comma-and-space separated list of level names
                (like what you specify to /window level), this returns
                a canonical form which is suitable for /save'ing or
                displaying to the user.  For example, ALL,-CRAP would
                return PUBLIC MSG NOTICE WALL WALLOP OPNOTE SNOTE
                ACTION DCC CTCP INVITE JOIN NICK TOPIC PART QUIT KICK
                MODE USER1 USER2 USER3 USER4 USER5 USER6 USER7 USER8
                USER9 USER10

EPIC5-0.0.7

*** News 10/12/2005 -- The %D status bar expando (dcc activity) improved

Previously, the %D status bar expando showed you "packets" of activity.
A packet was 2k of data, so it looked like this:
        ( hop: 99 of 2350 (4%))
Now this is all done in reasonable units, like so:
        ( hop: 198Kb of 4.7Mb (4.2%))
Perhaps in the future I might remove the "b" after "Kb" and "Mb".

*** News 10/12/2005 -- New script 'highlight', removal of highlight ignores

This is kind of experimental for now, but there is a new script
'highlight' which you can /load, which implements the features that
were previously available via "highlight ignores".  I haven't yet moved
/set highlight_char over to the script, but I will do that for
EPIC5-0.0.8.  If you want to help me make this script more robust, drop
me a line or just /msg me.

If you have no idea what I'm talking about, don't worry, you're not
missing anything.

*** News 10/12/2005 -- New built in functions $b64encode() and $b64decode()

The $b64encode() function takes an arbitrary string and returns an
expanded string using the Base64 encoding, which uses all of the
characters A-Z, a-z, 0-9, and + and /.  If you count them up, this is
64 distinct characters (hence the name base64), each of which
represents six bits of information.
Thus, 3 bytes of data (24 bits) can be transformed into 4 characters which are safe to send through data channels that can only handle text (such as the irc server, or DCC CHAT, email, or a web server) *** News 10/12/2005 -- Can now set both level and target with with /XECHO Maybe it would help if I just re-documented /XECHO's options: XECHO options can always be abbreviated as long as they're not ambiguous. For example, /XECHO -CURRENT can be used as /XECHO -C, and /XECHO -WINDOW can be used as /XECHO -W. -CURRENT The same as -W 0 (output to current global window) -LINE <number> Must be used in conjunction with -WINDOW. This allows you to replace the <number>th line on a specified window. This is the feature of the old "scratch windows". All windows behave as scratch windows these days. -LEVEL <window-level> The window levels are of course NONE, CRAP, PUBLICS, MSGS, NOTICES, WALLS, WALLOPS, OPNOTES, SNOTES, ACTIONS, DCCS, CTCPS, INVITES, JOINS, NICKS, TOPICS, PARTS, QUITS, KICKS, MODES, and USER1 through USER10. If you use this option without using -WINDOW, then the output will be sent to the window that claims the corresponding level for the current server. If you use this option together with -WINDOW, then it will be sent to that window anyways. This is new, because previously you could not tag output to a window with a level that didn't belong to that window. -VISUAL Output to a visible (non-hidden) window. If window 0 (the global current window) is visible, it is used. If window 0 is hidden, then another visible window is chosen. -WINDOW <refnum> Output to a particular window, instead of the normal one. Remember that window refnum 0 is special, and represents the global current window which **may or may not** be connected to the current server. If you need the current server's current window, use $serverwin(). -ALL Output to each and every window. -BANNER Prefix the output with the /SET BANNER value. 
-RAW Send the output directly to the terminal and do not attempt to pre-process it. This is used to send special control sequences to your terminal that EPIC does not support (such as sequences to change your character set) -NOLOG Do not allow the line of output to be written to the global logfile, a window logfile, or to a /LOG logfile. -SAY Use say() to output the line, which makes it subject to output suppression rules. An example will help: alias loud { xecho -b This is always displayed } alias noloud { xecho -b -s This is not always } In the first case, you could do loud or ^loud and you would still see the output either way. There is no way to "suppress" the output. In the second case noloud shows the output but ^noloud does not. -X ("Extended" output?) Ignore /SET DISPLAY_MANGLE for just this one line, and pretend it was /SET to NORMALIZE instead. The most obvious use for this would be to output something in color even when the user has used /set display_mangle to strip color. -F Do not allow the line of output to trigger hidden-window- notification. Normally output to a hidden window will cause that window's refnum to show up in the %F status bar expando. Using this keeps that from happening. -- End of argument processing: there are no more arguments after this and everything else should just be output directly xecho -- -f <- the -f here is output! *** News 10/05/2005 -- Automatic Scrollback rebuilding, new /WINDOW operation Whenever the width of a window changes (either because you're swapping it in, and the screen is a different size than when you swapped the window out, or because it's visible on a screen that is changing size), the window's scrollback will be rebuilt. This is accomplished by clearing the scrollback and then rebreaking the lastlog. If you have a large scrollback and a small lastlog, this may result in some data lossage. 
Although the rebuilding process attempts to keep the top of the window
the same before and after the rebuild, it's possible for what you were
seeing on your window to be gone after the rebuild.  If this occurs, the
window will be set to the very top of the scrollback (ie, as far back as
it can go).

Generally, it is advisable from now on to have a lastlog that is at
least as large as your scrollback, if you want to avoid any chance of
problems when your windows resize.

*** News 10/05/2005 -- DNS Helper (hostname lookups) now nonblocking by default

EPIC5-0.0.6 shipped with a dns helper, but it was synchronous (it did
not fork off a subprocess).  It has been debugged and now it is turned
on, and it is asynchronous.  This DNS helper is only used for server
connections.  This is the final stage in making connections to servers
fully nonblocking.  yay!

However, forking off a child process to do the nonblocking dns lookup
can cause issues with resource limitations.  (Typically you are only
permitted to run a certain number of processes at a time, and this
would count against that limit.)

*** News 10/05/2005 -- New /SET, /SET MANGLE_DISPLAY, many sets removed

There is a new /SET, /SET MANGLE_DISPLAY, which mangles all output being
sent to your display, naturally.  It works in the same way that
/set mangle_inbound, mangle_outbound, and mangle_logfiles work, and its
default value is "NORMALIZE".  You should always specify either
"NORMALIZE" or "MANGLE_ESCAPE" -- if you turn off both, then it will be
possible for remote people on irc to send raw escape sequences to your
display, and that is bad!

The following /SETs have been superseded by recent changes, and have
been removed:
	alt_charset	blink_video	bold_video
	color		display_ansi	display_pc_characters
	inverse_video	underline_video

*** News 10/05/2005 -- Functions that changed because of unified mangler

$leftpc(<COUNT> <TEXT>)
	This function performs a $stripcrap(NORMALIZE <text>) now.
$numlines() This function performs a $stripcrap(NORMALIZE <text>) now. $printlen(<TEXT>) This function performs a $stripcrap(NORMALIZE <text>) now. $stripansicodes(<TEXT>) This function is the same as $stripcrap(NORMALIZE <text>) $stripc() This function is the same as $stripcrap(COLOR <text>) $stripcrap() See below for important information. *** News 10/05/2005 -- Unified string mangler/normalizer Forget everything you thought you knew about the old mangler and normalizer (sorry!). This is (believe it or not) much less complicated than before, and certainly more well documented than before! EPIC5 now includes a unified mangler/normalizer that is used by the following features: (The above functions) The input prompt /lastlog -mangle /log mangle /set mangle_inbound /set mangle_outbound The status bar All characters are grouped into one of 9 "types": 0 Normal chars 32-127, 160-255 1 High bit control chars 128-159 2 Escape char ^[ 3 Color char ^C 4 Highlight toggle ^B ^E ^F ^O ^V ^_ 5 Unsafe char ^M (\r) 6 Control char ^@ ^A ^D ^H ^K ^L ^N ^P ^Q ^R ^T ^U ^W ^X ^Y ^Z ^\ ^] ^^ 7 Beep ^G 8 Tab ^I 9 Non-destructive Space ^S There are the 12 following mangle types: MANGLE_ESCAPES NORMALIZE STRIP_COLOR STRIP_REVERSE STRIP_UNDERLINE STRIP_BOLD STRIP_BLINK STRIP_ND_SPACE STRIP_ALT_CHAR STRIP_ALL_OFF STRIP_UNPRINTABLE STRIP_OTHER The mangle types transform the characters, according to this table: ----------------------------------------------------------------------- A = Character or sequence converted into an attribute M = Character mangled (ie, ^A into ^VA^V) S = Character stripped, sequence (if any) NOT stripped X = Character stripped, sequence (if any) also stripped T = Transformed into other (safe) chars - = No transformation Type 0 1 2 3 4 5 6 7 8 9 (default) - - - - A - - T T T NORMALIZE - - A A - X M - - - MANGLE_ESCAPES - - S - - - - - - - STRIP_COLOR - - - X - - - - - - STRIP_* - - - - X - - - - - STRIP_UNPRINTABLE - X S S X X X X - - STRIP_OTHER X - - - - - - - X X 
(/SET ALLOW_C1) - X - - - - - - - - ----------------------------------------------------------------------- There are only *three* ambiguous cases: * Type 2: MANGLE_ESCAPES has first priority, then NORMALIZE, and finally STRIP_UNPRINTABLE * Type 3: STRIP_UNPRINTABLE has first priority, then NORMALIZE, and STRIP_COLOR. You need to use both NORMALIZE and STRIP_COLOR to remove color changes in escape sequences * Type 6: STRIP_UNPRINTABLE has first priority over NORMALIZE. *** News 10/05/2005 -- "ANSI" mangle type linked to "NORMALIZE" The mangler type "ANSI" has been renamed to "NORMALIZE". You can continue to use "ANSI", but it will be silently changed to "NORMALIZE" internally. *** News 10/05/2005 -- New behavior for $mask() [jm] Jm needs to write me some documentation for this... *** News 08/30/2005 -- De-support of 7-bit-only terminal/emulators Once upon a time, long long ago, on a planet far away, there used to be terminals and terminal emulators that did not know how to handle characters with the 8th bit set. Thankfully, nobody uses these any more. There has been rudimentary support in ircII clients to support this old hardware, but it is now unnecessary and only causes to torment people who can't figure out how to input their 8 bit characters! /SET EIGHT_BIT_CHARACTERS has been removed, and its behavior is now hardcoded to the previous "ON" value. If you really need 7-bit support, epic4 will always have it... *** News 08/30/2005 -- Simplification of display mangling, part one There are 15 /set's that control how output to the display is prepared ("mangled"). That's about 14 too many. The following /set's are at best historical curiosities and do not serve any modern purpose. Their function has been eliminated. BEEP_MAX TAB TAB_MAX ND_SPACE_MAX *** News 08/23/2005 -- New /window verb, /WINDOW FLUSH_SCROLLBACK The /WINDOW FLUSH_SCROLLBACK option, which I already regret the naming of, deletes all of the items in your scrollback buffer. 
Because the scrollback buffer is used to paint your window, your window
will be cleared when you do this.

*** News 08/10/2005 -- Mangle level "ALL" does not include "UNPRINTABLE"

This was a mistake, and "ALL" should never have included the UNPRINTABLE
mangle level.  So if you want to do them both, then just do
ALL,UNPRINTABLE.  Although you would never want to do that, since
UNPRINTABLE is more encompassing than ALL.

EPIC5-0.0.6

*** News 08/08/2005 -- Updated behavior for /DCC GET

Based on a request from larne, the following syntaxes are now supported
for /DCC GET:

 * /DCC GET nick file1 file2 ... fileN /directory
	This will download one or more files to /directory.

 * /DCC GET nick offered-file not-offered-file
	This will do a /DCC RENAME get nick offered-file not-offered-file
	and then a /DCC GET not-offered-file.  You cannot combine this
	with the previous syntax.

 * /DCC GET nick
	Download ALL files that "nick" has offered you.  You can combine
	this with the first one (/DCC GET nick /directory) to download
	all files offered to a single directory.

*** News 08/06/2005 -- Generalized status bar activity handling

There are three new windowctls, and /window commands to follow later on.
They allow you to control a new status bar expando %E.  This will take a
bit of explanation.

Each window has 11 "activity levels".  An "activity level" is a format
and a $* value.  The activity levels are numbered 0 to 10, and 0 is
special.  Each window has a 'current activity level' which you can set
with:

	$windowctl(SET <winref> CURRENT_ACTIVITY <number>)

When you set the current activity level to a number 1 to 10, then the %E
status bar expando will be replaced with that activity level's format
expanded against that activity level's data.  The activity level 0 is
not usable, but instead stores default values that are used if you don't
define something else for the activity level.
To set a window's activity level format: $windowctl(SET <winref> ACTIVITY_FORMAT <number> <string>) Remember that $'s in <string> need to be doubled, in order to protect them from being expanded at $windowctl() time. You do not need to (and you should not) put quotation marks around <string>. If you do not set an activity format for a given level, then the activity format you set for level 0 will be used as a default. To set a window's activity level data: $windowctl(SET <winref> ACTIVITY_DATA <number> <string>) This is the value of $* that will be used to expand the activity format each time epic redraws the status bar. If you do not set an activity data for a given level, the activity data you set for level 0 will be used as a default. So to put this all together, if you set the current_activity value to 0, then the %E expando will expand to nothing. If you set the current_activity level to any value 1 to 10, then it will expand to the "activity_format" value, which is expanded on the fly using the "activity_data" value as $*. If you use a current_activity level that is missing an activity_format or activity_data value, the values you set for level 0 are used as defaults. Example: $windowctl(SET 1 ACTIVITY_FORMAT 1 ^C3$$*) $windowctl(SET 1 ACTIVITY_FORMAT 2 ^C4$$*) $windowctl(SET 1 ACTIVITY_FORMAT 3 ^C7$$notifywindows()) $windowctl(SET 1 ACTIVITY_FORMAT 4 ^C12$$*) $windowctl(SET 1 ACTIVITY_DATA 0 booya) $windowctl(SET 1 ACTIVITY_DATA 2 hazmat) Now if I $windowctl(SET 1 CURRENT_ACTIVITY 1) then %E will expand to a green "booya" because the format is ^C$$* and the value of $* is "booya" (from activity_data 0) If I then $windowctl(SET 1 CURRENT_ACTIVITY 2) then %E will expand to a bold red "hazmat" because the format is ^C4$$* and the value of $* is "hazmat" (cause I set activity_data 2) If I then $windowctl(SET 1 CURRENT_ACTIVITY 3) then %E will expand to a yellow $notifywindows() that will dynamically update, because there are two $'s in front of it. 
It will ignore the "booya" because it does not reference $*. This feature is intended to be used to create a %F workalike that gives you full and utter control over all aspects of how the expando will show up, since you get 10 levels, plus a default level, and you get to control which ones use the default and which ones get their own custom $* and you control the formats -- you control it all. Have fun! *** News 08/05/2005 -- New flag to /LASTLOG, /LASTLOG -MANGLE The /LASTLOG -MANGLE flag takes a mangle description (the thing that $stripcrap() takes) and mangles each line in the lastlog before matching against your pattern. THIS IS VERY EXPENSIVE so you should not do this unless you really want to. *** News 08/05/2005 -- *** IMPORTANT *** /EXEC -OUT changed /EXEC -OUT will now output to the window's current target, instead of to the window's current channel. The current target is $T, and is the window's current query (if there is one) or the window's current channel (if there isn't a query). *** News 08/05/2005 -- Text following unmatched braces/parens/brackets If an unmatched brace/bracket/paren is found, about 20 chars after the character will be output in the error message, to help you find it. Unfortunately I can't give you line numbers yet. *** News 08/05/2005 -- New server status state, "ERROR" The "ERROR" status state is used in the following cases: * DNS lookup failed (DNS -> ERROR -> CLOSED) * Connect failed (CONNECTING -> ERROR -> CLOSING -> CLOSED) * Socket write failed (<ANY> -> ERROR -> CLOSING -> CLOSED) *** News 08/05/2005 -- The script "altchan.bj" is renamed to "altchan" ... ... Because we deleted the old (lame) altchan script. *** News 08/05/2005 -- New /set, /SET OLD_MATH_PARSER To use the old math parser, you need to /SET OLD_MATH_PARSER ON. This is a promoted version of /xdebug old_math, which you should no longer use, it will no longer work. Please make plans to migrate away from the old math parser in epic5 scripts. 
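Since the old math parser is on its way out, a quick sanity check like the following can help verify that a script behaves the same under the default (new) parser.  This is only an illustrative sketch, not part of the release notes:

```
# Under the default (new) math parser these evaluate as ordinary
# C-like expressions.
@ x = (2 + 3) * 4
if (x == 20) { echo new math parser behaves as expected }
# If a script only works after /SET OLD_MATH_PARSER ON, it depends on
# old-parser behavior and needs migrating.
```

The point is to exercise the expressions your script depends on before the old parser disappears for good.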
*** News 08/05/2005 -- New mangle type, "UNPRINTABLE" You can now mangle "UNPRINTABLE", which closely models the /SET NO_CONTROL_LOG variable. When you mangle UNPRINTABLE, all characters that are not printable (isgraph() || isspace()) will be removed, which includes all of the highlight characters. *** News 08/05/2005 -- New script 'ison' from jm Jm has written a new 'ison' script which implements an ison queue, and acts as the backend for the 'notify' script which he has also revamped for this release. This notify script will soon take over the notify duties for epic, so it'd be a good thing to test it before the builtin notify goes away... *** News 08/05/2005 -- New operators, === and !== The new math parser now supports two new operators, which do case sensitive string comparisons. === returns true if the two strings are the same, and !== returns true if the two strings are not the same. *** News 08/05/2005 -- *** IMPORTANT *** Case insensitive changes All case insensitive string comparisons should now be handled in the CASEMAPPING=ascii sense. This means '{' and '[' are not equal, and | and ~ and ] and } are not equal. *** News 08/05/2005 -- 005 CASEMAPPING value now handled (sort of) Some networks use CASEMAPPING=ascii, which means that the characters {|} are not the same as [~] as they are when CASEMAPPING=rfc1459. So in order to keep track of nicks correctly on the former servers, we now track CASEMAPPING and use it as best we can. *** News 08/05/2005 -- Flexible on patterns can match against themselves When you do /ON TYPE '<pattern>', the value of $* in the pattern is the $* of the ON, so you can essentially do self-references. For example, this now works... /on general_privmsg '% $0 *' { ... you sent a message to yourself ... } *** News 08/05/2005 -- Can set NOTIFY_NAME via $windowctl() You can $windowctl(SET <refnum> NOTIFY_NAME <stuff>) now, instead of having to battle with quoting hell with /window notify_name "..." 
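For example, a notify name containing spaces can now be set directly, with no quoting gymnastics.  (Window refnum 1 here is just an assumed example.)

```
# Everything after NOTIFY_NAME is taken literally as the value,
# spaces and all.
@ windowctl(SET 1 NOTIFY_NAME important stuff here)
```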
*** News 07/27/2005 -- Extended $userhost() behaviour.

Given a channel name in place of a nick, $userhost() will return the
unknown userhost as it did previously, but as a side effect, it will
attempt to resolve the remaining list of nicks in the given channel
only.  This is a somewhat esoteric feature which should only be useful
for people who use the $serverctl() maxcache feature.

*** News 07/27/2005 -- New function $shiftbrace().

Given a variable name, $shiftbrace() will remove the braced expression
at the beginning of the variable's value and return it.  This is a
somewhat experimental scripting feature.

	/assign qwer {asdf zxcv} {zxcv asdf qwer}
	/eval echo $shiftbrace(qwer)		=> asdf zxcv
	/eval echo $shiftbrace(qwer)		=> zxcv asdf qwer

*** News 06/21/2005 -- Many scripts desupported

Many scripts that shipped with epic4 are now unsupported in epic5.
They will always be available at but they won't be supported for use
with epic5, unless someone makes a motion to re-support them.  Here are
the scripts:

	alias altchan autokick autoop away basical columns dcc_spacefix
	dcc_timeout deban dig dns edit efnext environment events
	events.hop fake-dcc fe fe.pf genalias hybrid6 imap ip-to-int
	ircprimer keybinds killpath kpstat langtrans list ls meta mkpdir
	more mudirc netsplit.env newformat nicks old-dcc prefix recursion
	repeat scandir sdiff silent sound starutils stat status_lag
	tabkey.th tabkey tc time tls vi-binds webster window

*** News 06/03/2005 -- New script, "newnick" (blackjac)

The "newnick" script, which is loaded automatically by the 'global'
script [and which you need to load from your startup script if you like
this functionality] implements the functionality of the old
/SET AUTO_NEW_NICK feature, which has recently been removed.  It exposes
four /SETs (default values):

/SET AUTO_NEW_NICK ON
	When ON, automatically mangle the nickname to ensure that you
	can connect to the server without delay.
/SET AUTO_NEW_NICK_CHAR _
	The character to append to your nickname in the mangling
	process.

/SET AUTO_NEW_NICK_LENGTH 9
	The maximum nickname length that can be mangled.

/SET -AUTO_NEW_NICK_LIST
	A list of nicknames to try to use before mangling begins.

*** News 06/03/2005 -- New /TIMER flags, and execution contexts.

[ About timer domains ]
Each timer now belongs to one of three domains:
	1) Server timers
	2) Window timers
	3) General timers

A "server timer" is created when you run /TIMER down-wind from an /ON
caused by a server event.  A "window timer" is created when you run
/TIMER at any other time.

[ Forcing a particular type of timer ]
You can manually force a timer to be in a particular domain with a flag:
	/TIMER -SERVER <refnum>		Create a server timer
	/TIMER -WINDOW <refnum>		Create a window timer
	/TIMER -GENERAL			Create a general timer.

[ When timers go off, they change the current window and server ]
When a timer goes off, if it is a server timer, then it will switch the
default server to its server, and switch the current window to that
server's current window.  If the timer is a window timer, it switches
the current window to its window, and switches the current server to
that window's server.  If the timer is a general timer, it leaves the
current window alone, and switches the server to the current window's
server.

[ When timers go off and their window or server is gone ]
So each server and window timer is "attached" to something, either a
server refnum or a window refnum.  If that thing it is attached to no
longer exists when the timer goes off, then the timer has to either turn
into a general timer (and bind to the current server) or simply not
execute at all.  If you use the -CANCELABLE flag, the timer is
cancelable and will not execute in this case.  The default is to be
non-cancelable and to be treated as a general timer if the window or
server disappears.
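The flags above can be sketched like this (the echo bodies are placeholders, and the refnum arguments to -SERVER/-WINDOW are omitted here for simplicity):

```
# A general timer: fires in 60 seconds and leaves the current
# window alone when it goes off.
timer -general 60 echo one minute elapsed

# A cancelable timer: silently dropped if the window or server it
# is attached to is gone by the time it would fire.
timer -cancelable 30 echo still attached after 30 seconds
```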
*** News 06/03/2005 -- The SIGUSR2 signal now supported If you send epic the SIGUSR2 signal (kill -USR2 pid) from an outside shell, EPIC will throw a "system exception" which makes epic give up on whatever it is doing and fully unrecurse. This can be used to break out of an infinite loop. *** News 06/03/2005 -- Several builtin /set's removed, now scripted The following builtin sets which no longer are used in the base executable have been formally removed. Many of them have been implemented by the 'builtins' script. AUTO_NEW_NICK AUTO_RECONNECT AUTO_RECONNECT_DELAY AUTO_REJOIN_CONNECT CONNECT_TIMEOUT HELP_PAGER HELP_PATH HELP_PROMPT HELP_WINDOW MAX_RECONNECTS SWITCH_CHANNEL_ON_PART *** News 06/02/2005 -- Your startup script is loaded before connecting Historically ircII has always loaded your startup script (~/.epicrc or ~/.ircrc) after you successfully completed your first connection to a server. This was to allow you to do irc commands in your startup script, such as /JOIN and /NICK and /UMODE, etc. This has now been changed so your startup script is always loaded at startup before epic has attempted to connect to a server. If you need or prefer the prior semantics, please wrap your startup script in a wrapper like so: on #^connect -1 * { ... your old ircrc goes here ... @hookctl(remove -1) } What this does is tell epic that you want the body of your script (the "... your old ircrc goes here ..." part) to be loaded after you connect to a server, and the @hookctl(remove -1) part makes sure that the /on deletes itself after it goes off. This exactly effects the prior behavior. The -B command line option continues to be supported, but is ignored since its behavior is now the default. *** News 06/02/2005 -- Changes to how server descriptions w/o ports get handled As you may or may not know, epic handles all server descriptions in a unified way. 
Server descriptions are the things that look like:

	<hostname>:<port>:<password>:<nick>:<group>:<protocol>

The places you can use a server description ("hostname") are:

	/SERVER -ADD <hostname>
	/SERVER -DELETE <hostname>
	/SERVER +<hostname>
	/SERVER -<hostname>
	/SERVER <hostname>
	/DISCONNECT <hostname>
	$serverctl(REFNUM <hostname>)
	/XEVAL -S <hostname>
	/XQUOTE -S <hostname>
	/MSG -<hostname>/target
	$winchan(#chan <hostname>)
	/LOG SERVER <hostname>
	/WINDOW SERVER <hostname>
	(Every other place in epic requires a server refnum)

This change makes the following backwards-incompatible change:  When you
add the first server for a given <hostname>, the port you use on that
server becomes the default port for that <hostname> in all other places
in epic.  This is true even if you later add other servers for
<hostname> using other ports!

Example 1:  /server -add irc.host.com:6666 adds a new server, we'll call
server refnum 0.  So now any place you use "irc.host.com" (see above)
will default to port 6666, and thus "irc.host.com" will refer to server
refnum 0, rather than being an error.  Furthermore, let's say you do:
/server -add irc.host.com:6667 and this becomes server refnum 1.  The
"irc.host.com" continues to refer to server 0, even though this server
uses port 6667.

Example 2:  /server -add irc.host.com:6666 adds a new server, server
refnum 0.  Then later on you do /server irc.host.com  This will connect
to server refnum 0, on port 6666, and WILL NOT create a new server on
port 6667 and connect on port 6667!

*** News 06/02/2005 -- /QUIT with no arguments uses per-server quit messages

If you do /QUIT with no arguments, then any per-server quit message that
you have previously set with $serverctl() will be used, and will not be
clobbered by /SET QUIT_MESSAGE.
Thus, /SET QUIT_MESSAGE is only used as a fallback for servers that do
not have their own quit message.

*** News 06/02/2005 -- Preliminary support for nonblocking dns for server connections

DNS lookups for server connections are now done in a nonblocking way.
This code has not been tested very much so it may be unstable.  If you
try to break it, you probably will.  Let me know how you did it.

Nonblocking DNS lookups require the use of a "dns helper" process, which
is fork()ed for every lookup.  Since you only have a limited number of
processes available to you, this means you shouldn't try to connect to
an unreasonable number of servers concurrently.  If you do manage to do
that, you may receive errors.

It should be safe to use /server to cancel a very long nonblocking dns
request.

*** News 06/02/2005 -- /WINDOW SERVER shows server list instead of error

Previously /window server without an argument displayed an error which
isn't terribly friendly.  So now it will display the server list, as if
you had done /server without any arguments.

*** News 05/10/2005 -- Nickname rejections handled differently now

When the server rejects your NICK change request after you have already
connected and registered with the server, no further action will take
place because there's nothing that can be done to stop a nick collision,
and nothing that needs to be done if your new nick is taken.

If the server rejects your NICK during the registration process, epic
will no longer "fudge" your nickname by adding _'s and/or rotating your
nickname, and will no longer "reset" your nickname by prompting you for
a new one and stopping the world until you give it a new one.
Rather, EPIC will throw /on new_nickname and if you don't use it, will
warn you that you need to use /NICK to choose a new nickname (and
/server +<ref> to reconnect if you ping out)

This also means that /SET AUTO_NEW_NICK is gone (soon to be replaced by
a script, which will implement the old hardcoded behavior), and handling
nickname change failures is fully under control of script!

*** News 05/10/2005 -- New window verb, /WINDOW KILLABLE (default on)

When you /WINDOW KILLABLE OFF, the following changes occur:
	* /WINDOW KILL will fail[1] with an error message.
	* /WINDOW KILL_ALL_HIDDEN will not kill the window.
	* /WINDOW KILL_OTHERS will not kill the window.
	* /WINDOW KILLSWAP will fail[1] with an error message.

[1] A failure of *any* /window operation causes any subsequent
operations in the same statement to be discarded, to avoid accidentally
operating on the wrong window.  This behavior is not a special case.

*** News 05/07/2005 -- New $windowctl(SET <ref> TOPLINE <num> <stuff>)

You can now set a topline literally (without wrangling with the /window
command) this way.  The <stuff> is not subject to any sort of
double-quoting shenanigans.  BTW, don't get too excited, I haven't
implemented any other $windowctl(SET *) operations, but this one needed
to be done right away.

*** News 05/02/2005 -- New $serverctl(), $serverctl(GET <ref> LOCALPORT)

This returns the port used by the local side of a connection to the
server.  If the server is not connected, it returns 0.

*** News 04/28/2005 -- *** IMPORTANT *** /ON LEAVE changed to /ON PART

This is a hard cut-over breaking of backwards compatibility.  There is
no way to "go back", but there are means you can take to compensate:

	@type = (info(i) < 1224) ? [LEAVE] : [PART]
	on $type * { ..... }

This is the last vestige of the senseless substitution of the non-irc
word "LEAVE" for the irc word "PART".  Good riddance to it and may it
never return again.
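Putting that compensation snippet to work, a version-portable part handler might look like this.  (Treating $0 as the parting nick is an assumption for illustration; check the client's /ON documentation for the actual argument layout.)

```
# Pick the hook name that matches the client version, then install
# a single handler that works on both epic4 and epic5.
@type = (info(i) < 1224) ? [LEAVE] : [PART]
on $type * {
	# $0 is assumed here to be the nick that parted
	xecho -b $0 has parted
}
```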
*** News 04/25/2005 -- /SET -CREATE removed (use /ADDSET) The /set -create command which was first marked for deletion 6 months ago has now been removed. You can use the /addset alias provided by the "builtins" script to get the same functionality: /ADDSET <name> <type> { ... callback ... } Don't forget to /SET <name> <value> after you do an /addset to give it an initial value! *** News 04/25/2005 -- *** IMPORTANT *** NEW MATH PARSER NOW DEFAULT The new math parser, which first appeared in 1998 in EPIC4pre2.001, and has been stable since early 1999, has now finally been made the default math parser. IT IS POSSIBLE THAT VERY BADLY WRITTEN SCRIPTS OR SCRIPTS THAT DEPEND ON BUGS IN THE OLD MATH PARSER MAY BREAK AS A RESULT OF THIS CHANGE! If you need to continue using the old math parser, you can turn it back on with: /xdebug old_math The old math parser won't be going away any time soon, but it will never again be the default math parser (barring rioting crowds with effigies and pitchforks at my door...) The operation /xdebug new_math is now a no-op, and doing it will have no effect. YOU CANNOT TURN OFF NEW MATH BY DOING /XDEBUG -NEW_MATH ANY MORE. YOU MUST DO THE ABOVE /xdebug old_math. EPIC5-0.0.5 *** News 04/18/2005 -- New $windowctl() stuff $windowctl(GET <win> DISPLAY_SIZE) to replace $winsize() $windowctl(GET <win> SCREEN) to replace $winscreen() $windowctl(GET <win> LINE <num>) to replace $winline() *** News 04/18/2005 -- /SET -OLD_SERVER_LASTLOG_LEVEL turns that feature off. If you /set -old_server_lastlog_level, then the window's level is not changed when it is connected to an existing server. This is probably a bad idea as it can lead to level duplication, the effects of which are poorly-defined *** News 04/18/2005 -- /SET -NEW_SERVER_LASTLOG_LEVEL turns that feature off. If you /set -new_server_lastlog_level, then the window's level is not changed when it is connected to a new server. This allows you to completely opt-out of this feature. 
*** News 04/18/2005 -- New $serverctl() stuff

So $serverctl(LAST_SERVER) and $serverctl(FROM_SERVER) have been added
and are used by the builtins script.

*** News 04/18/2005 -- Translations now fixed

I apologize to you RUSSIAN_WIN translation users for it being broken.
This release fixes it up, and the fix was back-ported to EPIC4.

*** News 04/18/2005 -- /SET STATUS_* subformats no longer size limited

Previously, the effective size that /SET STATUS_* values could expand to
(especially if you have /SET STATUS_DOES_EXPANDOS ON) was limited on a
per-expando basis.  This was inconveniently small for some people who
were using lots of characters doing color markup, etc.  So all of those
limits have been removed.

*** News 04/18/2005 -- "ERROR --" changed to "INFO --", not used for exec, dcc

The "ERROR --" io messages were alarming people so they were changed to
"INFO --" in this release.  Further, EXEC and DCC connections do not
output these diagnostic messages.

*** News 04/18/2005 -- /XECHO -v outputs to current window if it's visible

/XECHO -v has traditionally output to the first visible window (the
first window on the first screen), but it was agreed that it is more
reasonable to output to the current window if that is visible.

*** News 04/18/2005 -- Ways to keep dcc xfers from swamping your cpu...

There are three new ways to keep high speed dcc xfers from swamping your
cpu with unnecessary status bar redraws:

$dccctl(UPDATES_STATUS [0|1])
	Turns off (on) whether or not the %D status bar value should be
	updated whenever any activity occurs.  If you turn this off,
	then %D is frozen until you turn it back on.  The argument is
	optional, and if you don't supply it, it returns the current
	value.  If you do supply it, the old value is returned.

$dccctl([SET|GET] <refnum> UPDATES_STATUS [0|1])
	Turns off (on) whether or not a particular dcc should update the
	%D status bar expando when it has activity.
    /ON ^DCC_ACTIVITY *
        Hooking (and suppressing) /ON DCC_ACTIVITY will prevent the
        status bar from being redrawn as a result of this activity.
        This won't stop %D from being updated, but it does stop the
        status bar from thrashing uncontrollably.

*** News 04/18/2005 -- The "builtins" script (Blackjac)

*** IMPORTANT *** IMPORTANT *** IMPORTANT ***

As noted below in "notes for forward compatibility", there is a new
script called "builtins" which is loaded by "global", which is not
loaded by default if you have a startup script.  This means it is really
important that you add either /LOAD global or /LOAD builtins to your
~/.ircrc (or ~/.epicrc) file, or you will notice that the following
features have "disappeared":

*** IMPORTANT *** IMPORTANT *** IMPORTANT ***

Commands:
    BYE, DATE, EXIT, HOST, IRCHOST, IRCNAME, LEAVE, REALNAME, SAVE,
    SIGNOFF, WHOWAS

Functions:
    LASTSERVER, SERVERGROUP, SERVERNAME, SERVERNICK, SERVERNUM,
    SERVEROURNAME, SERVERTYPE, WINBOUND, WINCURSORLINE, WINLEVEL,
    WINLINE, WINNAM, WINNICKLIST, WINNUM, WINQUERY, WINREFS, WINSCREEN,
    WINSCROLLBACKSIZE, WINSERV, WINSIZE, WINSTATSIZE, WINVISIBLE

Sets:
    AUTO_REJOIN, AUTO_REJOIN_DELAY, AUTO_UNMARK_AWAY, AUTO_WHOWAS,
    BEEP_ON_MSG, COMMAND_MODE, DCC_TIMEOUT, FULL_STATUS_LINE,
    NUM_OF_WHOWAS, REVERSE_STATUS_LINE, SHOW_END_OF_MSGS,
    SHOW_WHO_HOPCOUNT, VERBOSE_CTCP

Furthermore, the following new commands have been added: ADDSET and
DELSET, which allow you to add a scripted built in /set variable (which
is used extensively by this script).

*** IMPORTANT *** IMPORTANT *** IMPORTANT ***
THESE FEATURES NO LONGER EXIST AS HARDCODED EPIC5 FEATURES AND ARE
SOLELY IMPLEMENTED AS SCRIPT FEATURES VIA "builtins"!  IT IS NOT A BUG
THAT THEY ARE "MISSING".  The way to fix this is to /load global or
/load builtins in your startup script.
*** IMPORTANT *** IMPORTANT *** IMPORTANT ***

*** News 04/18/2005 -- New script, "loadformats" (fudd)

This script presents the usable interface to implied on hooks.
See the explanation for implied on hooks below.

    /ADDFORMAT <type> [value]
        This binds the implied on hook for /ON <type> to a new builtin
        set that is called /SET FORMAT_<type>.  Remember that implied on
        hooks are only used if you don't hook serial number zero!

    /DELFORMAT <type>
        This removes the implied on hook for /ON <type> and removes the
        builtin set /SET FORMAT_<type>.

    /DUMPFORMATS
        This does a /DELFORMAT for all possible values of <type>.  You
        will not have any implied on hooks after this is done.

    /LOADFORMAT <filename>
        Given a file that contains lines that look like this:
            <TYPE> <VALUE>
        Run /ADDFORMAT <TYPE> <VALUE> for each line.

    /SAVEFORMATS <filename>
        Save all implied hooks to a file so it can be loaded later with
        /LOADFORMAT.

*** News 03/29/2005 -- Just a note about mangling of /on patterns...

In EPIC5, your /ON patterns are no longer "filled out" to the expected
number of words in $*.  An example:

    EPIC4:  /ON PUBLIC "hop"   becomes   /ON PUBLIC "hop % *"
    EPIC5:  /ON PUBLIC "hop"   stays as it is.

Note that the EPIC5 version *WILL* *NOT* *EVER* *MATCH* any real /on
public's because it doesn't match any valid $* values.

Why was this change made?  Because /on is a general purpose function,
there were problems where some /on's sometimes did not always have the
minimum number of words in $* that they were supposed to, and it would
lead to spaces in odd places, preventing matches.  It was decided that
we would leave "filling it out" as a task that the scripter would need
to be responsible for.  In the above case, you can simply solve the
problem with:

    EPIC5:  /ON PUBLIC "hop *"

*** News 03/20/2005 -- History recall moved into a script (Blackjac)

*** IMPORTANT *** IMPORTANT *** IMPORTANT ***
History recall (BACKWARD_HISTORY and FORWARD_HISTORY) are now scripted
features, implemented by /load history instead of hardcoded into the
client.  If you want to be able to use cursor up and down to recall
input history, you need to add /load history to your ~/.epicrc or
~/.ircrc file.
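For example, a startup file that keeps both the builtin features and
history recall working might look like this (a sketch only; adjust the
load lines to taste):

    # ~/.ircrc (or ~/.epicrc)
    load global       # also pulls in the "builtins" script
    load history      # keeps cursor up/down history recall working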
Remember, if you do not put /load history in your startup file, then you
will find that input history recall will no longer be available.

So just put /load history in your startup file, and everything will be
cool!

EPIC5-0.0.4

*** News 03/14/2005 -- New option to $line(), $line(<num> -TIME)

You can now provide the "-TIME" option to the $line() function, which
returns the timestamp when that line was added to the lastlog.  You can
use -TIME together with -LEVEL, and if you do, the level is always the
second-to-last word, and the timestamp is the last word.  You can pass
the timestamp to $strftime() for conversion.

Example:
    $line(<num> -LEVEL -TIME)
might return
    "Some line goes here CRAP 1110000000"

*** News 03/11/2005 -- Implied ON hooks ($hookctl(SET LIST <type> IMPLIED <str>))

See the "loadformats" info above for how to practically use this.  You
can compile out this feature if it offends you by #undef'ing
IMPLIED_ON_HOOKS in config.h

*** News 03/03/2005 -- New status bar expandos, %{2}+ and %{3}+

These two status expandos act like %+ and %{1}+ respectively, except
that %{2}+ and %{3}+ will contain only the mode string, and not any
subsequent arguments.  Specifically, %{2}+ and %{3}+ don't include the
channel's key or user limit (if any).

*** News 03/03/2005 -- New configure options, --with-multiplex

You can choose which multiplex function you want epic to use with a new
configure option, "--with-multiplex".  These are the values:

    --with-multiplex=select
    --with-multiplex=poll
    --with-multiplex=freebsd-kqueue
    --with-multiplex=pthread

If you don't choose a multiplexer, or you choose one that isn't
supported by your system, it will use select instead.  Note that the
kqueue() support is only tested on freebsd-4 and freebsd-5.

If you use --with-multiplex=pthread and --with-ssl together, and your
openssl version was not compiled to be thread-safe, then you won't be
able to use ssl with pthreads (it would crash if you tried).
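As a sketch of the configure options above, a build that selects the
poll multiplexer together with ssl might look like this (the prefix path
is assumed for illustration):

    ./configure --prefix=/usr/local --with-multiplex=poll --with-ssl
    make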
*** News 03/03/2005 -- New $hookctl() operation, "GET HOOK <ref> STRING"

The $hookctl(GET HOOK <ref> STRING) operation returns a string that is
suitable for passing to /eval {....} to recreate the hook.  This will be
used by scripts that want to "/save" an /on.  If you write these values
to a file, you will be able to /load it later.

*** News 03/03/2005 -- Changes to how error messages are displayed

Traditionally, ircII clients have tried to emit one error message for
each error event, putting all of the information into one line.  This
has its advantages and disadvantages.  The main disadvantage is that if
information is difficult to pass around, it is usually discarded, and
the user may be given a vague or useless error message.

So now error reporting has been revamped in epic.  Whenever an error is
actually generated (or first noticed), the information that is available
will be displayed to the screen.  Then a failure condition will be sent
up to the higher levels of epic, and each higher level will give you
whatever information it has available.  Thus, while an error message may
take up 3 or 4 lines of display, each line has a little piece of
information the others don't, and taken together, you get the full
picture of what happened.

Each error line looks like this:

    *** ERROR -- <message>

where "***" is the value of /SET BANNER.  These errors are yells, so if
you want to hide them, use /ON YELL.

*** News 02/28/2005 -- !!! IMPORTANT !!! "global" SCRIPT NO LONGER AUTO-LOADED

This is a backwards incompatible change!

A lot of people who run script packs find that the "global" script,
which is automatically and unconditionally loaded by epic at startup, is
a nuisance which must be eliminated.
After much deliberation, it seems agreeable to most parties that *by
default*, the "global" script will only be loaded for those people who
do not have their own startup script (~/.ircrc or ~/.epicrc).

If you *have* your own startup script, and you *like* having the global
script loaded at startup, then you need to make a one-time change to the
top of your startup script:

    load global

which will give you the old backwards-compatible behavior.  If you do
not make this change, then some /on's, /alias's, and other things that
you may have had access to will disappear until you make this change.

For those of you who do not have your own startup script, you will not
notice any changes.  For those of you who have wished the 'global'
script to die a horrible fiery death, you have got your wish.

*** News 02/28/2005 -- New built in command, /SUBPACKAGE

The /SUBPACKAGE command is probably temporary, so don't get too attached
to it yet.  This command should be used in scripts that are loaded from
other scripts, who have used the /PACKAGE command, to set up a "nested"
package name.  For example:

    File "one":
        /PACKAGE one
        /LOAD two
        /ALIAS frobnitz {echo hello!}

    File "two":
        /SUBPACKAGE two
        /ALIAS oofda {echo hi!}

In this example, "frobnitz" will be in the package "one" and "oofda"
will be in the package "one::two".

*** News 02/28/2005 -- The invite command changed here

This was never documented for some reason until June 2007.  You can now
invite people more flexibly:

    INVITE #chan nick1 nick2 nick3
or
    INVITE nick #chan1 #chan2 #chan3
or
    INVITE nick

and it will just figure out the right thing to do.

*** News 02/04/2005 -- Clarification on /EXEC process output.

Until this time, partial lines of output from /EXEC processes were
silently ignored, could not be redirected, and callbacks and /on's were
not executed in the correct window.
Now the partial-line and full-line handler for /exec processes has been
unified, and they both work like this:

    1) Set up the correct server and window
        Note: Yes, I am aware this may need further refactoring.
    2) Increment the line output count
        Note: Partial lines now count as a line of output for the
        -LIMIT counter.
    3) If you used -MSG or -NOTICE or -OUT, send the text to the target.
        Note: This usually hooks an /ON SEND_* hook (see below)
    4) If you used -LINE or -LINEPART or -ERROR or -ERRORPART, call the
       appropriate callback
        -or-
    4a) If you didn't use them, hook the appropriate /ON.
    4b) If you didn't hook the /ON, and you aren't redirecting the
        output to another target, output it to the screen.

Note that (4b) and (3) work in complement to each other.  If you do
/EXEC -MSG nick w, then the output will be handled through /ON SEND_MSG
(either from your own on, or from the default output).  If you do not,
then the output will be handled by /ON EXEC.

*** News 02/02/2005 -- New $serverctl() attribute, "PROTOCOL"

$serverctl(GET <refnum> PROTOCOL) now either returns "IRC" or "IRC-SSL"
depending on whether you're doing a regular irc, or an ssl-enabled irc
connection, eh!

*** News 02/02/2005 -- Automatic creation of ALTNAME for servers, %S changes.

When you create a new server (with /server or /server -add or /window
server), the server will automatically be given its first "altname" (see
"Alternate server names" below).  Each server's first altname is
constructed as:

    * If the server name is an IP address, the whole name,
    * Otherwise, the first segment that does not start with "irc".

This is the value you have historically seen at %S on your status bar.
This value is now automatically created as your server's first altname,
and you can change it!

    @serverctl(SET <refnum> ALTNAMES <whatever>)

%S, and %{1}S have been changed to use the "first altname" for the
server, whatever you choose that to be.
Please note: If you clear your server's altname list, then %S and %{1}S
will have no choice but to show the server's full name.  This is
intentional and not a bug.  %{2}S will continue to show your server's
full name all the time.

*** News 01/27/2005 -- New status expando, %{3}S

%{3}S is just like %{2}S, but it shows the server's group instead of the
server's name.

*** News 01/26/2005 -- Notes for forward compatibility

*** /SET -CREATE DEPRECATED ***
EPIC5-0.0.3 includes /SET -CREATE, but this is deprecated, and will be
removed by EPIC5-0.0.5.  It is suggested that you start using
$symbolctl() [see below] in scripts that support EPIC5-0.0.3 and up.

*** THINGS ARE GOING TO BE CHANGING ***
Starting before EPIC5-0.0.5, there will be a significant shift to try to
script as many /set's and builtin commands as possible.  This stuff is
all intended to be put into the 'builtins' script which is loaded from
'global' (ie, it's automatically loaded).  Some of you do not load
'global' and so by EPIC5-0.0.5, if you do not load builtins, you will
find a lot of features gone.  I won't consider this a "bug", but the
natural progression of EPIC5's development...

Builtin commands that will be moved to 'builtins':
    /BYE /DATE /EXIT /HOST /IRCHOST /IRCNAME /LEAVE /REALNAME /SIGNOFF

Builtin functions that will be moved to 'builtins':
    $serverhost()

Builtin SETs that will be moved to 'builtins':
    AUTO_REJOIN AUTO_REJOIN_DELAY AUTO_UNMARK_AWAY AUTO_WHOWAS
    NUM_OF_WHOWAS BEEP_ON_MSG COMMAND_MODE FULL_STATUS_LINE
    REVERSE_STATUS_LINE SHOW_CHANNEL_NAMES SHOW_END_OF_MSGS
    SHOW_WHO_HOPCOUNT VERBOSE_CTCP

EPIC5-0.0.3

*** News 01/25/2005 -- New built in function, $symbolctl()

Here's the plan.  An all-encompassing low-level symbol manipulation
thingee.  This interface is not intended to replace $aliasctl(), but
rather to supplement it.  No, $aliasctl() will never be removed.  Yes,
much of its functionality (but not all) is duplicated here.
$symbolctl(TYPES)
    Return all of the types supported in this version of EPIC:
        ALIAS ASSIGN BUILTIN_COMMAND BUILTIN_FUNCTION BUILTIN_EXPANDO
        BUILTIN_VARIABLE

$symbolctl(PMATCH <type> <pattern>)
    Return all symbols of type <type> that match <pattern>.  You can use
    the special value "*" for <type> to get symbols of all types.

$symbolctl(CREATE <symbol>)
    Ensure that <symbol> exists in the global symbol table.  When
    symbols are first created, they do not contain any actual values,
    but rather act as a placeholder in case you want to set any.  You
    must ensure that a symbol exists before you try to change its
    values.  CREATEing a symbol that already exists is harmless; feel
    free to do it.

$symbolctl(DELETE <symbol>)
$symbolctl(DELETE <symbol> <type>)
    Delete all of the values of a particular symbol, or just one type.
    Example: $symbolctl(DELETE booya ALIAS) is the same as /alias -booya
    Warning: You can delete built in variables/functions/etc with this!
    There's no way to restore them back if you do!  Caution!

$symbolctl(CHECK <symbol>)
    Inspects <symbol> to see if it has any values left.  If there are no
    values left for <symbol>, it is removed from the global symbol
    table.  You must then CREATE it again if you want to use it later.

*** IMPORTANT NOTE ABOUT "LEVELS" ***
In order to "get" or "set" a symbol's values, the symbol needs to exist.
If you try to "get" or "set" a symbol that doesn't exist, $symbolctl()
will return the empty string to tell you that it failed.  You need to
use the CREATE operation above to bootstrap a new symbol before using
it.

Now, /STACK PUSH and /STACK POP work by manipulating "levels" in the
symbol table.  By rule, <level> == 1 always refers to the "current"
value of a symbol.  If you do /STACK PUSH, then the value you pushed
will be copied to <level> == 2.  If you /STACK PUSH something else, that
value moves to <level> == 3.
So what you can do is use "GET x LEVELS" to find out how many levels a
symbol has, and then use "GET x <num>" to find out if there is a symbol
type that interests you at that level.  IN THIS WAY you can directly
manipulate the /stack push values without having to actually use the
/stack command.

In general, <level> is always 1 for everything you want to do, unless
you are intentionally monkeying around with your /stack values.

*** NOW BACK TO YOUR REGULARLY SCHEDULED HELP ***

$symbolctl(GET <symbol> LEVELS)
    Return the number of levels of <symbol> that are /STACKed.  This
    value is always 1 unless you have /STACK PUSHed something.

$symbolctl(GET <symbol> <level>)
    Return all of the <type>s that are defined for <symbol> at <level>.
    If <level> is 1, it gets the current value(s).  If <level> is > 1,
    it starts looking at the /STACK PUSHed values.

$symbolctl(GET <symbol> <level> ALIAS VALUE)
$symbolctl(GET <symbol> <level> ALIAS STUB)
$symbolctl(GET <symbol> <level> ALIAS PACKAGE)
$symbolctl(GET <symbol> <level> ALIAS ARGLIST)
    Retrieve one of the values for one of your aliases.

$symbolctl(GET <symbol> <level> ASSIGN VALUE)
$symbolctl(GET <symbol> <level> ASSIGN STUB)
$symbolctl(GET <symbol> <level> ASSIGN PACKAGE)
    Retrieve one of the values for one of your assigns.

$symbolctl(GET <symbol> <level> BUILTIN_VARIABLE TYPE)
$symbolctl(GET <symbol> <level> BUILTIN_VARIABLE DATA)
$symbolctl(GET <symbol> <level> BUILTIN_VARIABLE BUILTIN)
$symbolctl(GET <symbol> <level> BUILTIN_VARIABLE SCRIPT)
$symbolctl(GET <symbol> <level> BUILTIN_VARIABLE FLAGS)
    Retrieve information about a /SET.  The "TYPE" is one of "STR",
    "INT", "BOOL", or "CHAR".  Generally, either "BUILTIN" or "SCRIPT"
    is set, but not both.

$symbolctl(SET <symbol> <level> ALIAS VALUE <string>)
$symbolctl(SET <symbol> <level> ALIAS STUB <string>)
$symbolctl(SET <symbol> <level> ALIAS PACKAGE <string>)
$symbolctl(SET <symbol> <level> ALIAS ARGLIST <string>)
    Change one of the values for one of your aliases.
    If you omit the <string>, it will clear the value.

$symbolctl(SET <symbol> <level> ASSIGN VALUE <string>)
$symbolctl(SET <symbol> <level> ASSIGN STUB <string>)
$symbolctl(SET <symbol> <level> ASSIGN PACKAGE <string>)
    Change one of the values for one of your assigns.  If you omit the
    <string>, it will clear the value.

$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE)
    Create a new user-created /SET with default values (type == BOOL,
    data == OFF, script is <empty>).

$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE TYPE <set-type>)
$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE DATA <string>)
$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE BUILTIN)
$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE SCRIPT <code>)
$symbolctl(SET <symbol> <level> BUILTIN_VARIABLE FLAGS)
    Change one of the values for one of your /set's.  You cannot change
    values for system /set's, sorry.  Setting the TYPE value changes the
    DATA value to a default (<empty> for strings, 0 for everything else)
    so always set DATA after setting TYPE.  Yes, you can change the TYPE
    of a /set after you create it!  It's probably a bad idea to set
    FLAGS for the present.

*** News 01/13/2005 -- New $logctl() feature, $logctl(CURRENT)

$logctl(CURRENT) can return one of these values:

    -1  Nothing is being logged right now
     0  Something is being logged to the global log, or to a window log
    >0  Something is being logged to the given log refnum.  You can use
        this log refnum with $logctl().

*** News 01/11/2005 -- New /ON, /ON NEW_NICKNAME

This allows you to create your own nickname mangler whenever EPIC is
resetting your nickname to register to a server.  Remember that your
nickname is (usually) not reset if /SET AUTO_NEW_NICK is ON, so you need
to /SET that to OFF before this can be used.

Each time that your nickname is "reset" (see next entry), the /ON
NEW_NICKNAME hook will be thrown.
    $0 - Server whose nickname is being reset
    $1 - Your current nickname ("*" if you're unregistered)
    $2 - The nickname you tried to change to ("*" if none)

You should somehow use this information to generate a new nickname and
then do a /NICK <newnick> operation.  You should avoid trying to do
anything else, particularly asynchronous things like /wait, because the
status of the connection is unknown when this /ON is thrown (ie, you may
not be connected).  Just stick to /NICK.

If you do not hook the /ON, or you do not do a /NICK within the /ON,
then epic will prompt you for a new nickname in the way it has always
done (so this change is opt-in and backwards compatible).

*** News 01/11/2005 -- Changes to how nick change errors are handled

Until this point, EPIC sometimes (rather aggressively) forced the user
to provide a new nickname, when it wasn't necessary.  This was called
"resetting" the nickname.  The variable /SET AUTO_NEW_NICK existed to
try to avoid needlessly annoying the user with requests for nicknames
when a new one could be whipped up.

Well, anyways, this has all been refactored.  If you are already
connected to the server, epic will no longer "reset" your nickname when
it receives one of the many numerics that indicate that your nickname is
not acceptable.  "Resets" will only occur when you are unregistered
(when you do not have a nickname yet).

As with before, if you have /SET AUTO_NEW_NICK ON, then your nickname
will never be "reset" unless the nickname mangler is unable to come up
with an alternate nickname for you.

*** News 01/11/2005 -- Change to how /set new_server_lastlog_level works

In EPIC4, when you do /window server, the window that is changing server
has its window level changed to /SET NEW_SERVER_LASTLOG_LEVEL.  This can
result in two windows with level ALL if the new server can't be
connected to.

Starting with now, this change will not occur until after we have
registered with the server (when we get the 001 numeric).
This means if you do /window new server foo.com, the new window will be
level NONE until the connection to "foo.com" is successful.

Furthermore, only the "current window" for "foo.com" will be changed.
If you move multiple windows to a new server at the same time, only one
of them is the "current window" and only that window ever has its window
levels changed.  The others stay at "NONE".

*** News 01/06/2005 -- Can now set ipv4/ipv6 vhosts separately

The /hostname (/irchost) and -H command line option may now take two
hostnames, separated by a slash.  Example:

    /hostname foo.bar.com
        Use "foo.bar.com" as the vhost for both ipv4 and ipv6
    /hostname foo.bar.com/faz.bar.com
        Use "foo.bar.com" as the vhost for ipv4, and "faz.bar.com" as
        the vhost for ipv6.
    /hostname foo.bar.com/
        Use "foo.bar.com" as the vhost for ipv4 only -- don't change the
        ipv6 vhost
    /hostname /foo.bar.com
        Use "foo.bar.com" as the vhost for ipv6 only -- don't change the
        ipv4 vhost

*** News 01/06/2005 -- New serverctl attr: $serverctl(GET <num> ADDRFAMILY)

The $serverctl(GET <num> ADDRFAMILY) value returns either

    ipv4    if the server connection is ipv4
    ipv6    if the server connection is ipv6
    unix    if the server connection is a filename

*** News 01/06/2005 -- Old (undocumented) function: $servports()

There has been a function $servports() around for a very long time, but
it's never been documented.  It returns two values: the first one is the
remote port (what you connected to the server with), and the second one
is the local port (which is not always useful).

*** News 01/01/2005 -- Argument lists for hooks, and $fix_arglist() (howl)

$fix_arglist(arglist) returns how argument list arglist will be parsed
by epic.

It is now possible to supply argument lists to hooks, just like one
would do for aliases:

    /on hook "*" (a,b,...) {echo $a $b $*;};

They are optional, of course.
/ON lists the argument list as part of its output, and does also include
the hook's userial (unique serial).

*** News 01/01/2005 -- New function: $hookctl() (howl)

This new function, $hookctl(), presents a low-level interface to the /on
system, and lets the users do pretty much whatever they want with it.

$hookctl() arguments:

    ADD <#!'[NOISETYPE]><list> [[#]<serial>] <nick> [(<argument list>)] <stuff>
        Argument list not yet implemented for $hookctl()
    ADD <#!'[NOISETYPE]><list> [[#]<serial>] <nick> <stuff>
        Creates a new hook.  Returns hook id.
    COUNT
        See COUNT/LIST
    HALTCHAIN <recursive number>
        Will set the haltflag for the event chain.  May halt the current
        chain, or any chain currently being executed.  Returns 1 on
        success, 0 otherwise.
    DEFAULT_NOISE_LEVEL
        Returns the 'default noise level'.  It is not currently possible
        to change the current noise level, and probably never will be.
    DENY_ALL_HOOKS <arguments>
        This sets the deny_all_hooks flag, or gets its value.  If set to
        anything non negative, all hooks will be "ignored", and the
        default action of any event will be taken.  Similar to a /DUMP
        ON, but doesn't actually remove any hooks.
    EMPTY_SLOTS
        Will return a list of empty slots in the hook-list.
    EXECUTING_HOOKS
        Will return a list of the currently executing hooks.  This is a
        'recursive' list, listing the current hook first.
    FIRST_NAMED_HOOK
        Returns FIRST_NAMED_HOOK
    HOOKLIST_SIZE
        Returns HOOKLIST_SIZE
    LAST_CREATED_HOOK
        Returns the value of LAST_CREATED_HOOK
    LIST
        See COUNT/LIST
    NOISE_LEVELS <pattern>
        Returns a list of 'noise-types'.  If <pattern> is specified,
        only noise levels matching the pattern will be returned.
    NOISE_LEVEL_NUM
        Returns NOISE_LEVEL_NUM
    NUMBER_OF_LISTS
        Returns NUMBER_OF_LISTS
    PACKAGE <package> [<list>]
        Returns a list of hooks of the given package.  If <list> is
        specified, it will return only hooks in list <list>
    RETVAL <recursive number> [<new value>]
        If the recursive number isn't specified, 0 (the current) is
        assumed.  Will either return the value of retval for the given
        hook, or set it.
    SERIAL <serial> [<list>]
        Works exactly like PACKAGE.
    GET <type> <arg>
        See GET/SET
    LOOKUP <list> <nick> [<serial>]
        Returns the hook matching the given parameters.
    MATCH <list> <pattern>
        Returns a list of matching hooks.
    REMOVE <hook id>
        Removes the hook with the given hook ID.  Returns 1 on success,
        0 otherwise.
    SET <type> <arg>
        See GET/SET

* GET/SET usage
    GET gettype <arguments> - will return 'gettype'
    SET gettype <arguments> - will set 'gettype' or similar, and return
        1 on success.  Not all 'gettypes' may be set, and not all
        gettypes will silently ignore being set.  It is very important
        to remember that GET won't ever SET anything(!!!)  Won't, and
        shouldn't.

* GET/SET types:
    HOOK <argument>    - More info on this under GET/SET HOOK
    LIST <arguments>   - More info on this under GET/SET LIST
    NOISE <argument>
    NOISY <argument>   - More info on this under GET/SET NOISE/NOISY
    MATCHES <argument> - More info on this under GET/SET MATCHES

* GET/SET HOOK usage:
    GET HOOK <hook id> <prop> <arg>
    SET HOOK <hook id> <prop> <arg>

    <prop> may be one of the following:
        ARGUMENT_LIST - Returns or sets the argument list; if SET and
            <arg> is empty, it will be set to NULL, and therefore not
            used.
        FLEXIBLE - Returns or sets the value of flexible
        NICK - Sets or gets the hook's nick.  The position of the hook
            will be changed if needed, and it is not possible to change
            this to a "crashing nick"
        NOT - Sets or gets the value of NOT.
        NOISE
        NOISY - Sets or returns the value of noisy.
        PACKAGE - Returns or sets the hook's package name
        SERIAL - Returns or sets the serial for the hook.
            The hook's position in the list will be changed if
            necessary, and it is not possible to set the serial to a
            crashing serial.
        SKIP - Returns or sets the value of skip.
        STUFF - Returns or sets the value of stuff.
        TYPE - Returns or sets the type.

* GET/SET LIST usage:
    GET LIST <listname> <prop>
    SET LIST <listname> <prop> - not functional

    <prop> may be one of the following:
        COUNT - Returns count of hooks
        FLAGS - Returns flags
        MARK - Returns mark
        NAME - Returns name
        PARAMETERS
        PARAMS - Returns value of params

* GET/SET NOISE/NOISY usage:
    GET NOISE <noisename> <prop>
    SET NOISE <noisename> <prop> - not functional

    <prop> may be one of the following:
        ALERT - returns value of alert.
        CUSTOM - returns value of custom.
        DISPLAY - returns value of display.
        IDENTIFIER - returns value of identifier.
        NAME - returns name.
        SUPPRESS - returns value of suppress.
        VALUE - returns value of value.  d'oh!

* GET/SET MATCHES:
    This function is not ready yet, and will currently RETURN_NULL.

* COUNT/LIST usage:
    COUNT and LIST work the same way; the only difference is that COUNT
    will return the count of lists/hooks, while LIST will return a list.

    The following options are permitted:
        LISTS <pattern> - Will either return all lists available, or
            only the matching ones.
        POPULATED_LISTS <pattern> - Works _just_ like LISTS, but will
            only return "populated" lists
        HOOKS <pattern> - Will either return all the hooks on the
            system, or all the hooks in the matching lists

*** News 10/29/2004 -- New /ison features.

To go with the -d and -f switches, the following switches have been
added to exploit the new queueing mechanism:

    -n                  # Prioritise this request.
    -s                  # Send the next ison request now.
    -len number         # Change the number of nicks per request.
    -oncmd {commands}   # Run these commands for online users.
    -offcmd {commands}  # Run these commands for offline users.

The descriptions of these switches are simplistic and a little
inaccurate.  Some clarifications follow.
-len will change the maximum length of an ISON request from 500 to the
given number.  This number will be used for _all_ requests from then on,
including those from the notify system.  In practice it may be necessary
in some cases to tune this value downwards to avoid the server dropping
some names off the list when they are all online.

-oncmd and -offcmd will run the given lines of code with $* set to all
the users that a reply indicates are online or offline, respectively.
Note that a single /ison request can generate multiple replies.  Also
note that there is no guarantee that the code will or will not run if $*
is empty.

-n will place the current requests at the head of the queue.  This is
useful when many requests are waiting to be sent and it is necessary to
have this one replied to quickly.  Note that if the request isn't
actually sent by the time the next -n is used, the newer request will
always get the higher priority.

-s will "kick" the ison system back into action in cases where something
has gone wrong and it has become necessary to use the -f flag, for
example.  This is more of a debugging tool.  It will not actually cause
more than $serverctl(get $servernum() maxison) requests to be sent.

*** News 10/01/2004 -- New status format, %{3}W

This is a compromise between %W and %{2}W.  %W only shows in the input
window when there are split windows, and %{2}W shows in all split
windows.  So %{3}W shows in the input window, even if it is the only
visible window (ie, there are no split windows).

*** News 10/01/2004 -- New window option, "toplines"

You may now reserve 0 to 9 lines at the top of every window to be
removed from the scrollable portion of the window, creating a place for
you to put things like a channel topic, or channel users, or whatever.

    /WINDOW TOPLINES <N>
        Reserves and displays <N> lines at the top of the window, which
        will not be part of the window's scrollable display.  By
        default, toplines are blank until you set them with...
    /WINDOW TOPLINE <N> "<string>"
        Sets the window's <N>th topline to <string>.  <N> must be 1 to
        9.  You should put <string> in double quotes.  You can change
        toplines even if they aren't visible.

    $windowctl(GET <refnum> TOPLINES)
        Returns the number of toplines reserved at the top of the
        window.

    $windowctl(GET <refnum> TOPLINE <n>)
        Returns the <n>th topline for the window.

*** News 09/14/2004 -- Added Howls shebang script support.

It is now possible to write epic scripts that run from the shell command
line.  Yaaay..!  The nature of the epic binary itself made this a little
difficult at the interface level, so a little hackery was required.

The form of the shebang line is this:

    #!/path/to/epic -S [command line options] -l

Note that the -l switch must be the last on the line, and -S must be the
first.  Also note that at this point in time, -S will only work as the
first part of the first argument.  The results of the use of this switch
anywhere else is currently undefined.

*** News 09/14/2004 -- Added some features to the commandqueues script.

The first argument to /1cmd may now have a second number attached, with
a comma.  This number, if given, will cause any recurrences of the same
command within that number of seconds to reset the last-executed time
_without_ executing the command.

Use of the 0 or 1 argument form of /qcmd will now cause the timer to be
reset to 5 seconds, and if called as a function, the command will be
returned without executing it.

*** News 08/25/2004 -- New target syntax, -<serverdesc>/<target>

You can now send messages to the special message type:

    -<serverdesc>/<target>

where <serverdesc> is a server description (see below) and <target> is
obviously a nick or channel on that server.  This allows you to send a
message to a nickname on a server other than your current window's
server.  For example:

    @serverctl(SET 0 ALTNAME booya)     (set an altname)
    /msg -booya/nick hi there!

Will send the message to "nick" on server 0.
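To make the shebang support from the 09/14/2004 entry above concrete,
here is a minimal sketch of a stand-alone epic script; the interpreter
path is assumed for illustration and will vary by install:

    #!/usr/local/bin/epic -S -l
    # A trivial stand-alone epic script: -S must come first, -l last.
    echo Hello from an epic shebang script!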
*** News 08/25/2004 -- "Server descriptions" Anywhere EPIC expects you to give it a server, it now expects a "server description" which is one of the following (in this order):
1) A number, which is taken as a server refnum
2) An "ourname" of an open server
3) An "itsname" of an open server
4) A "group" of an open server
5) An "alternate name" of an open server
6) An "ourname" of a closed server
7) An "itsname" of a closed server
8) A "group" of a closed server
9) An "alternate name" of a closed server
The server description may be a wildcard! The first server (starting with server 0) that matches is used. This means you can do something like this: @serverctl(SET 0 ALTNAME booya) (set an altname) /server -0 (disconnect from 0) /server +booya (reconnect to "booya") and it will connect to server 0, because server 0 has the alternate name of "booya"! *** News 08/25/2004 -- Alternate server names You may now give a server a list of "alternate names". There is no limit. You add a new alternate name with: $serverctl(SET <refnum> ALTNAME <name>) You can totally replace the list with: $serverctl(SET <refnum> ALTNAMES <names>) <name> and <names> should be space-separated list of words. You can get the list of alternate names with $serverctl(GET <refnum> ALTNAMES) *** News 08/25/2004 -- Aliases now shown with their argument lists When you do /alias, you now see the argument list along with all of the other stuff. *** News 08/25/2004 -- Mangling now supports "ALT_CHAR" How did this ever get missed? *** News 08/25/2004 -- New function, $mktime() Usage: $mktime(year month day hour minute second DST) The first six arguments are required. Returns -1 on error. Returns whatever mktime(3) on your system would return, usually the number of seconds since the epoch represented by the arguments. *** News 08/25/2004 -- Support for ircnet's "unique id" nicknames You can now always use your "unique id" as your nickname, and also the "0" shortcut nickname, on ircnet.
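The nine-step "server description" matching order above is a first-match scan over server attributes, with wildcard patterns allowed. A rough Python model of that precedence (the dict keys and the attribute-major scan order are my assumptions for illustration, not EPIC's internals):

```python
from fnmatch import fnmatch

def find_server(desc, servers):
    """servers: list of dicts with ourname, itsname, group, altnames, open."""
    if desc.isdigit():                        # 1) a number is a server refnum
        n = int(desc)
        return n if n < len(servers) else None
    # Steps 2-5 check open servers, steps 6-9 closed ones, in order.
    for want_open in (True, False):
        for attr in ("ourname", "itsname", "group", "altnames"):
            for refnum, s in enumerate(servers):   # starting with server 0
                if s["open"] != want_open:
                    continue
                values = s[attr] if attr == "altnames" else [s[attr]]
                if any(fnmatch(v, desc) for v in values):
                    return refnum
    return None

servers = [
    {"ourname": "irc.foo.org", "itsname": "irc1.foo.org",
     "group": "foo", "altnames": ["booya"], "open": True},
    {"ourname": "irc.bar.org", "itsname": "irc2.bar.org",
     "group": "bar", "altnames": [], "open": False},
]
print(find_server("booya", servers))      # 0 -- matched via alternate name
print(find_server("*.bar.org", servers))  # 1 -- wildcard, closed server
```

This reproduces the /server +booya example from the text: the description "booya" resolves to server 0 through its alternate name.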
*** News 08/25/2004 -- New status expando, %{1}F This is just like %F, it displays all of the "notified" windows, except it uses the window's "notify_name" instead of its refnum. *** News 08/25/2004 -- New window command, /WINDOW NOTIFY_NAME If you change the /WINDOW NOTIFY_NAME, and use the %{1}F status expando, then the notify_name, and not the window's refnum, will show up. Howl wanted to colorize the refnum up, and this is how you should do it. *** News 08/25/2004 -- Support for +I channel mode (ratbox) Because adm asked me to *** News 08/25/2004 -- User-created /SETs, /SET -CREATE -- WARNING *** DEPRECATED *** You can create your own /set's for now with /SET -CREATE <name> <type> {<code>} where <type> is one of BOOL, STR, or INT. <code> is code that will be run any time the user does /SET <name> <newval>. You can /SET the value within <code> to override the user's value. *** WARNING *** This interface is temporary and will go away in the future. It will be replaced by $symbolctl() which has not yet been written, so stay tuned for more information! *** DEPRECATED *** This feature only existed in EPIC5-0.0.3 and was/will be removed in EPIC5-0.0.4. Do not use this feature. *** DEPRECATED *** *** News 08/25/2004 -- Unification of symbol namespaces There is now one big table that holds all of the symbol names for aliases, assigns, built in commands, built in functions, sets, and inline expandos. You should not notice any changes at all, except maybe epic runs faster. This was done to pave the path towards plugins, which will need to be able to add their own commands and functions on the fly! *** News 08/25/2004 -- The /IRCNAME and /REALNAME commands removed here... because they are duplicates of /SET REALNAME. Use the /SET now. *** News 08/25/2004 -- $stripcrap(ALL) no longer strips "ALL_OFF"... because the crap-mangler makes liberal use of ALL_OFFs and it is of no harm to leave them in there, and it does great harm to take them out. 
;-) If you want to remove them, do $stripcrap(ALL,ALL_OFF) *** News 08/25/2004 -- New script 'builtins' loaded from global Some things are starting to migrate from hardcoded builtins to script features. They are not "being removed", their implementation is just changing. This script will contain backwards compatibility stuff for epic4. You really do need this script! Do 'make install'! *** News 08/25/2004 -- Automatic command completion removed You can no longer do /whoi as a replacement for /whois. You'll have to spell out the command name in full now. *** News 08/25/2004 -- Using your nickname as a command removed here You can no longer do /<mynick> as an alias for /me. Just use /me. *** News 08/25/2004 -- The COMMAND_COMPLETION keybinding removed here A new script replacement is forthcoming -- stand by! *** News 08/25/2004 -- New serverctl, $serverctl(GET <refnum> STATUS) This returns the server's current status, which is described below in the /on server_status stuff. *** News 08/25/2004 -- New script, 'slowcat' This script cats a file to your current target, 2 lines per second to avoid triggering flood control. /load slowcat /slowcat filename *** News 08/25/2004 -- New status expando, %{2}W This acts just like %W, but it shows in every window, and not just the current window. *** News 08/25/2004 -- New /SET, /SET OLD_SERVER_LASTLOG_LEVEL When you /WINDOW SERVER to move a window to a server that is already connected, the window's level will be set to this /set value. This is important, because of the following situation: Window 1, server 0, level ALL Window 2, server 1, level ALL If you do /window 1 server 1 then you have two windows connected to server 1 with level "ALL". Who wins? Not you. ;-) This defaults to NONE, which is probably the only sensible value. *** News 08/25/2004 -- /WINDOW KILL_ALL_HIDDEN kills your hidden windows. After you run this, you will be left with only your visible windows.
*** News 08/25/2004 -- /ON TYPE !"PATTERN" acts as an exception. If you use this syntax, then the default action will occur whenever the pattern is matched. This is how ircII's /ON TYPE ^PATTERN works. For example: on ^msg * echo msg from $0: $1- on ^msg !"nick" In this case, if anyone but nick sends you a msg, it is echoed as in the first /on. But if nick sends you a message, it will be displayed in the "default" way by epic, as though you did not have an on at all. *** News 08/25/2004 -- It is now always safe to delete ONs from within ONs Up until now, you needed to /defer the removal of any /ONs from within other /ONs, or you risked crashing epic. This meant you could not safely stop an /ON with a higher serial number from running by deleting it. All of this has been fixed now. You can delete /ONs without restriction and the change takes place immediately. *** News 08/25/2004 -- ONs no longer compile patterns to regexes This was fraught with peril, so ONs no longer compile their patterns to regexes, and now we do things like we have always done with ONs *** News 08/25/2004 -- New built in function, $regcomp_cs() This is just like $regcomp(), but it's case sensitive. *** News 08/25/2004 -- In /ON DCC_RAW "* E *", $3 is the port number Previously it held the "othername", which wasn't terribly useful. *** News 08/25/2004 -- /WAIT =<fd> waits for a dcc connection to complete If you do $connect(), it is nonblocking and returns before the connection is ready to be used. If you need to wait until the connection completes, like it did in epic4, do this: @fd = connect(host port) wait =$fd and it's pretty much the same. This wait is of course recursive (and does not block the client) EPIC5-0.0.2 *** News 08/25/2004 -- Level names are always plural, except for CRAP The levels have these names now, and they're gonna stay this way: *** News 08/18/2004 -- Overloadable function aliases.
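The /ON TYPE !"PATTERN" exception described earlier in this section effectively partitions the patterns for one hook type into positive matches and exceptions: the hook fires only if some positive pattern matches and no exception pattern does. A hedged sketch of that rule (the text a /on msg pattern is matched against is simplified here to a "nick message" string, which is an assumption):

```python
from fnmatch import fnmatch

def should_hook(text, patterns):
    """patterns: list of (pattern, is_exception) pairs for one /on type."""
    hit = False
    for pat, is_exception in patterns:
        if fnmatch(text, pat):
            if is_exception:
                return False      # exception matched: use the default action
            hit = True
    return hit

# on ^msg *         -> ("*", False)
# on ^msg !"nick"   -> ("nick *", True)
patterns = [("*", False), ("nick *", True)]
print(should_hook("friend hello there", patterns))  # True: the hook runs
print(should_hook("nick hello there", patterns))    # False: default display
```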
When an alias name "collides" with a built in function, the built in function has traditionally been called. This update changes this behaviour to calling the alias first. NOTE: This will cause recursive loops in scripts that rely on this behaviour. To fix this the aliases in question need to be renamed or rewritten to use the following ::function() feature. It is possible to call the built in with the :: notation used for global variables, as in $::function(). $:function() will explicitly call the alias. NOTE: Do not use the $::function() and $:function() features just yet, as they will crash epic if the alias or the built in doesn't exist. It is safe to use if you are sure they do though. *** News 08/02/2004 -- New commands for $dccctl(). $dccctl(readables) will return the refnums of DCCs that have data waiting to be read and $dccctl(get [refnum] readable) will return a 1 or 0 depending on whether the given refnum is readable or not. Since epic will automatically read data from all unheld DCCs, this feature is expected to be useful only for DCCs in the "held" state. *** News 08/02/2004 -- General improvements to the flood detection system. Flood detection now works for channel PARTS and for ctcp replies, which is bound to the NOTICES flood. Also, the first argument from each flood returned from $floodinfo() when flood_maskuser is set to 1 or 2 is now a valid user@host mask for the flooder in question, suitable for putting in a ban or kline. *** News 08/02/2004 -- New argument for /on flood. The fourth argument ($3) in the flood hook is now the number of repeats of the flood in question. This makes it easy to deal with particular kinds of floods in different ways, as they occur. For example: /on flood "% parts % 5" mode $2 +b *!*@$after(@ $userhost()) /on flood "% ctcps % 5" mode $2 +b *!*@$after(@ $userhost()) /on flood "% % % 50" mode $2 +b *!*@$after(@ $userhost()) *** News 08/02/2004 -- Improvements to $floodinfo(). $floodinfo() takes a space-separated list of fields, in this order:
- u@h mask that matches the flooder. Defaults to "*".
- channel mask. Defaults to "*".
- flood type mask. Defaults to "*".
- Server number. Defaults to -1, which matches all.
- Numeric minimum number of flood hits.
- Numeric minimum duration of flood.
- Numeric minimum flood rate.
For example:
/on flood "% parts % 5" { if (floodinfo("$userhost() $2 joins $servernum() 5")) { mode $2 +b *!*@$after(@ $userhost()) } }
Or alternately:
/on flood "% joins % 5" { if (floodinfo("$userhost() $2 parts $servernum() 5")) { mode $2 +b *!*@$after(@ $userhost()) } }
*** News 08/02/2004 -- Changes to /userhost and notify. The changes to the ison back end mentioned in the previous entry now also apply to the userhost back end, along with the caveats relating to /wait and flush. This applies to the /userhost, /userip and /usrip commands. A new option has been added to these commands. /userhost -count [number] will change the number of items that epic will put into each USERHOST request. This isn't a particularly significant change since this number is already tuned to a number that works on all servers. To change the number of USERHOST requests sent at one time, use $serverctl(set [servernum] maxuserhost [number]). Set it to 0 to turn the new behaviour off. *** News 07/08/2004 -- Changes to /ison and notify. The back end of the notify system and the /ison command has been changed to permit only a certain number of ISONs to be sent to a server at one time. The benefit of this is that it will typically prevent a large notify list flooding the client off the server. The down side is that it will cause scripts like $is_on() in script/guh that use "/wait for ison .." to fail until they have been fixed. The fix is to put "@ serverctl(set $servernum() maxison 0)" at the top of any alias that uses it. This will turn the new behaviour off. The notify system itself will not queue an ISON to be sent if there are any ISONs waiting to be sent, but the /ison command will.
One final note is that the "waiting to be sent" queue won't be flushed when the client reconnects to the server or when "/ison -f" is run. This won't cause any particular damage, but it's not nice and will probably change soon. *** News 07/08/2004 -- Userhost updating in NICK changes. This is relevant to those who use the $serverctl() maxcache feature, which, if in effect, will prevent a /who message being sent to a server, and thereby make $userhost() fail for every nick that joined the channel before the client did. This patch will grab the userhost information for these users from the NICK message itself, and help to rebuild the $userhost() database faster. It also makes it possible to manually get it into the database by /pretend'ing a NICK message with a userhost obtained from other sources such as the /userhost command. *** News 03/19/2004 -- New built in function, $tobase(<base> <num>) [howl] This function converts <num>, a number in base 10, to base <base>. For example, $tobase(16 65536) returns "10000" *** News 03/19/2004 -- New built in function, $strtol(<base> <num>) [howl] This function converts <num>, a number in base <base> to base 10. For example, $strtol(16 10000) returns "65536" *** News 03/19/2004 -- Changes to /WINDOW NOTIFY, /WINDOW NOTIFIED These two /WINDOW operations now take ON, OFF, or TOGGLE arguments, instead of taking no arguments and behaving as toggle switches. If you do not provide an argument, they show you their current values as the other /WINDOW boolean values do. *** News 03/19/2004 -- Addition and changes to $windowctl() $windowctl(GET <refnum> MISCFLAGS) always returns 0, and $windowctl(GET <refnum> NOTIFY) returns 1 if /window notify is on $windowctl(GET <refnum> NOTIFIED) returns 1 if /window notified is on. NOTIFY and NOTIFIED replace MISCFLAGS. 
*** News 03/19/2004 -- New key binding, SWITCH_QUERY Whenever a window has multiple nicknames in its nickname list, and one of those nicknames is active as the window's query, it is possible to use this binding to switch between all of the nicks in the nick list, just in the same way you can switch between channels using SWITCH_CHANNELS. If the window does not have an active query, this key binding will have no effect, even if the window has nicks in its nicklist! *** News 03/19/2004 -- Unification of /WINDOW QUERY and /WINDOW ADD Historically, when you /WINDOW QUERY (or just /QUERY) <NICK>, then it would add <NICK> to the window's "NICK LIST". The "NICK LIST" is a list of nicknames for which output goes to that window, just like output to channels goes to windows. Output to or from a nick that is not on any window's "nick list" goes to the LEVEL_MSG level. When you used /WINDOW QUERY <NICK2> to change the query, it would remove the old query from the nick list, and messages to and from the original query went back to LEVEL_MSG. Well, this has been unified somewhat. Now the following rules apply: 1) When you /WINDOW QUERY <NICK>, then <NICK> is added to the window's nick list. 2) If the window already has a query, then the old query nickname is NO LONGER REMOVED from the window's nick list. Output to that nick will continue to go to the window as it had before. 3) When you use /WINDOW QUERY to cancel a query, then the current query IS STILL REMOVED from the window's nick list, and output to or from that nick will go to LEVEL_MSGS. It is no longer possible to have a window query that is not on the window's nicklist, because the query is selected from the members of the window's nicklist, rather than being a separate thing. *** News 03/17/2004 -- Change to how /SET INDENT behaves Historically, if you have /SET INDENT ON, and the first word of the first line of output is wider than 1/3 of your screen, then the second (and subsequent) line(s) of output are NOT INDENTED.
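The three /WINDOW QUERY rules above boil down to: setting a query adds it to the nick list, changing a query no longer removes the old one, and cancelling a query removes the current one. A small Python model of those rules (the class and method names are illustrative, not EPIC's internals):

```python
class Window:
    def __init__(self):
        self.nicklist = set()
        self.query = None

    def set_query(self, nick):
        if nick is None:
            # Rule 3: cancelling removes the current query from the nick list.
            if self.query is not None:
                self.nicklist.discard(self.query)
            self.query = None
        else:
            # Rule 1: the new query joins the nick list.
            # Rule 2: the old query is NOT removed -- output to it
            # keeps routing to this window.
            self.nicklist.add(nick)
            self.query = nick

w = Window()
w.set_query("alice")
w.set_query("bob")            # alice stays on the nick list
print(sorted(w.nicklist))     # ['alice', 'bob']
w.set_query(None)             # cancel: bob leaves the nick list
print(sorted(w.nicklist))     # ['alice']
```

Note the consequence stated in the text: the query is always a member of the nick list, never a separate thing.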
This has been changed so subsequent lines are indented 1/3 of the window's width. To understand this change, think about how /set indent usually works, and if it would indent more than 1/3 of your screen, then it will indent 1/3 instead of not at all. *** News 03/17/2004 -- New flag to /XECHO, /XECHO -F If you use the /XECHO -F flag, "hidden window output notification" will not occur for any hidden windows that receive the output. *** News 03/16/2004 -- You can now bind the 255 character (ÿ) There has been a problem with the new key binding system that made it difficult for Russian language speakers to bind the 255 character which is in their alphabet. This should be fixed now. *** News 03/16/2004 -- Can now join channels simultaneously per window Previously, if you attempted to join multiple channels in the same window simultaneously, you were not assured that all of the channels would go to that window. Now you can be assured of this. This should make reconnection/rejoin scripts much more sane. *** News 03/16/2004 -- New built in function, $startupfile() This expands to the file that the client loaded at startup as your "startup file". Usually this is ~/.ircrc or ~/.epicrc or whatever you specified as the IRCRC environment variable or the argument to -l or -L on the command line. *** News 03/16/2004 -- Unknown CTCP requests offered via /ON CTCP_REQUEST It was pointed out that unknown/unhandled CTCP requests were only being hooked through /on ctcp, so it wasn't possible to use /on ctcp_request to handle EVERY request. Well, now unhandled CTCPs are hooked through both /on's just like handled CTCPs are. *** News 03/16/2004 -- Semantic changes to $connect() You used to be able to depend on /ON DCC_RAW "% % E %" or /ON DCC_RAW "% % C" hooking before $connect() returned. Now that $connect() is nonblocking, YOU CAN NO LONGER DEPEND ON THIS. You must set up your script to assume that /ON DCC_RAW will be hooked asynchronously, after at least the next sequence point.
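The /SET INDENT rule described at the top of this section (cap the hanging indent at one third of the window instead of dropping it) can be sketched as a single min(). The "first word plus one space" definition of the natural indent is my assumption about how ircII-style indenting lines up continuation lines:

```python
def hanging_indent(first_word_len, width):
    """Indent for wrapped lines under /SET INDENT ON.

    Historically: no indent at all when the first word exceeded width/3.
    Now: the indent is simply capped at width/3.
    """
    natural = first_word_len + 1          # assumed: align past word + space
    return min(natural, width // 3)

print(hanging_indent(8, 80))    # 9  -- normal case, indent past first word
print(hanging_indent(40, 80))   # 26 -- capped at a third of an 80-column window
```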
Think of it as being like not being able to depend on /WHOIS returning the numerics. I'll probably add a way to /wait for a connection in the future. Stay tuned. *** News 03/16/2004 -- DCC connections are now nonblocking All connect()ions for DCC, including /DCC GET, /DCC CHAT, /DCC RESUME and $connect() are all fully nonblocking. This means all connects in EPIC are now fully nonblocking! HUZZAH! *** News 03/15/2004 -- /HELP command now handled by script The built in /HELP command has been replaced by a script that was written by howl for our use. Much thanks to him! *** News 03/14/2004 -- Six new USER lastlog levels You may now use USER5, USER6, USER7, USER8, USER9, and USER10 as levels with your window, lastlog, flood, and ignore. Just use /xecho -l USER5 for example to send to your USER5 window. *** News 01/20/2004 -- kqueue() support You can uncomment #define USE_FREEBSD_KQUEUE in newio.h if you want to play around with this experimental feature. *** News 01/15/2004 -- /WINDOW DISCON and /WINDOW NOSERV now the same There was a subtle semantic difference between /WINDOW DISCON and /WINDOW NOSERV that had to do with the window's "last server" that was used for reconnects. Because the client no longer does reconnections, this difference is moot. These two commands now always do the same thing, which is to disassociate the window with any server. The window becomes "server-less". *** News 01/15/2004 -- Changes to /SERVER command /SERVER Show the server list. /SERVER -DELETE <refnum|desc> Remove server <refnum> (or <desc>) from server list. Fails if you do not give it a refnum or desc. Fails if server does not exist. Fails if server is open. /SERVER -ADD <desc> Add server <desc> to server list. Fails if you do not give it a <desc> /SERVER +<refnum|desc> If the server's state is "CLOSED", change it to "RECONNECT". This allows the server to reconnect if windows are pointed to it. 
Note: server reconnection is asynchronous /SERVER -<refnum|desc> Unconditionally close a server connection Note: server disconnection is synchronous! /SERVER + Switch windows from current server to next server in same group /SERVER - Switch windows from current server to previous server in same group /SERVER <refnum|desc> Switch windows from current server to another server. *** News 01/08/2004 -- /ON WIDELIST went away here This /ON hasn't been hooked in many a year, and here it officially passed into the void. *** News 01/07/2004 -- Removal of WINDOW BIND feature As part of the larger project to decouple windows from channels, the "window bind" feature has been removed. This means you can no longer /WINDOW BIND, /WINDOW REBIND, /WINDOW UNBIND, and you cannot use $windowctl(* BIND_CHANNEL *) or $winbound(). It is expected that eventually scripts will take over the job of routing channels to the appropriate windows and EPIC will stay entirely out of the way. *** News 01/07/2004 -- New /ON, /ON SERVER_STATUS This /ON is thrown every time a server changes its "state". The states are listed below in "Server States" and I won't go into that again here. $0 - The server changing state $1 - The old status (a string, not a number) $2 - The new status If you find that you do something particularly onerous in this /ON and EPIC panics or crashes, try /DEFERing it, and if that doesn't work either, let me know. *** News 01/07/2004 -- Removal of NOTE support I doubt anyone will notice this, and if you do, bummer. *** News 01/07/2004 -- Server states Servers now exist in one of several "states" each time it connects to the server. It moves through each of the states from start to end, and stays at the end until manually reset by the user (or script) RECONNECT As soon as a window is attached to the server, the server should be connected to. CONNECTING A connection to the server is in progress. The server is not ready to be used. 
REGISTERING We are attempting protocol registration (NICK/USER) with the server. The server is open, but we cannot really use it yet. SYNCING Our registration has been accepted and we're doing whatever meta-tasks are needed to get the connection fully active ACTIVE The connection is fully ready for all use. EOF An End Of File (EOF) has been detected from the server. The connection was closed by the server and cannot be used any longer. CLOSING The connection to the server is being shut down. If the previous state was "ACTIVE" then you can still send something to the server. If the previous state was "EOF" then it's too late. You cannot stop the closing of a server. CLOSED The server is disconnected and cannot be used. The server (and any windows connected to this server) stay in this state until the user resets the state to RECONNECT. *** News 01/07/2004 -- Channels are not tracked across disconnects When you are disconnected from a server for *any* reason, EPIC will not retain knowledge of the channels for the next connection and will not rejoin them. It is expected that scripts will use this to their advantage to fully control the semantics you will have governing "auto-rejoin-on-reconnect". *** News 01/07/2004 -- Server connections are now brought up asynchronously When you do /WINDOW SERVER or /SERVER or otherwise change the server of a window, the server is not immediately connected or disconnected, and the change will not take effect until the next time through the event looper. This means that all server connections are "asynchronous" (they don't interrupt the current flow of the script). This means you most definitely cannot do /WINDOW SERVER <host> CHANNEL <channel> any more. So please stop doing that. ;-) Use /ON SERVER_STATUS to join channels. *** News 01/07/2004 -- /XDEBUG SERVER_CONNECT a lot more interesting If you want to watch epic work its gory nonblocking connects, you can turn on this /xdebug and see everything in its glory.
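The server states described above form a mostly linear progression, with CLOSED as a resting state until the user (or a script) resets it to RECONNECT. A sketch of the transitions as described in the text (simplified; the real client surely permits failure transitions at more points than shown here, so treat this table as an illustration):

```python
TRANSITIONS = {
    "RECONNECT":   {"CONNECTING"},
    "CONNECTING":  {"REGISTERING", "CLOSING"},
    "REGISTERING": {"SYNCING", "CLOSING"},
    "SYNCING":     {"ACTIVE", "CLOSING"},
    "ACTIVE":      {"EOF", "CLOSING"},
    "EOF":         {"CLOSING"},
    "CLOSING":     {"CLOSED"},
    "CLOSED":      {"RECONNECT"},   # only via a manual reset
}

def advance(state, new_state):
    # Reject any transition the table does not allow.
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "RECONNECT"
for nxt in ("CONNECTING", "REGISTERING", "SYNCING", "ACTIVE",
            "EOF", "CLOSING", "CLOSED"):
    s = advance(s, nxt)
print(s)  # CLOSED
```

A model like this makes the /ON SERVER_STATUS hook easy to reason about: each successful advance() corresponds to one firing of the hook with the old and new status names.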
*** News 01/07/2004 -- Nonblocking server connects EPIC now does all server connections using asynchronous, nonrecursive, nonblocking connections. And yes, it still supports multiple protocols and multiple addresses (ie, "us.undernet.org"), and *yes*, it will try another address if a server refuses us registration ("You do not have access to this server"). *** News 01/07/2004 -- EPIC no longer tracks server "dialect" per se The $version() string now always returns "2.8" since all servers are nominally 2.8 class (rfc1459) servers, and epic does not attempt to determine if it's an undernet, ircnet, efnet, or dalnet server, etc. This is mostly because scripts can hook /on 004 if they care, and the 005 numeric (ISUPPORT) is making dependence on the server's version much less important. *** News 01/07/2004 -- EPIC loads ~/.ircrc (or ~/.epicrc) on 001 now Traditionally, ircII has loaded your ircrc when it received the 002 numeric, and traditionally, epic has done it when it received the 004 numeric. Due to some refactoring in epic, it is now possible for epic to load your ircrc when it receives the 001 numeric *and before it hooks /on 001* This means you shouldn't have to suffer the default epic output for any of the numerics from the server. *** News 01/07/2004 -- Usermodes now tracked as strings instead of bits Before this change, ircII clients had always tracked your user and channel modes as bits, and the valid (supported) modes were hardcoded into the client at compile time. With this change, EPIC will no longer track your modes using bits, but instead using strings. This means that epic won't need source code changes to support new modes from your server.
You can't do $serverctl(SET|GET <refnum> UMODES) any more (but the old "UMODE" still works) *** News 01/07/2004 -- /ON wildcard patterns now compiled into regexes *** OBSOLETE *** At or around this date, EPIC started converting wildcard patterns used by /ON into extended POSIX regexes and compiling them, and using the regexes instead of the pattern matcher. In the future, epic will allow you to specify your own regexes. "Flexible" /on hooks are still wildcard pattern matched (for now) because recompiling the pattern every time the /on is thrown is senseless. *** OBSOLETE *** This feature was removed (see note above for 08/25/2004) *** OBSOLETE *** EPIC5-0.0.1 *** News -- 12/16/2003 -- New levels, KICK, QUIT, and MODE So just for a canonical list, here are all of the levels supported by flood, ignore, lastlog, and windows: *** News -- 12/16/2003 -- Unification of ignore, flood, and lastlog levels Previously, the ignore, flood, and lastlog levels used the same names, but they had different meanings in each subsystem (ie, CRAP in flood was different from CRAP in ignore, and CRAP in lastlog). Now all three subsystems use the same levels, all named the same, and (more or less) all defined the same. There are some holes in this conversion cause I didn't check every possible combination. Report any odd behavior to me so I can fix it. *** News -- 12/16/2003 -- New noise type for /ON, /ON %TYPE The /ON %TYPE modifier acts just like /ON ^TYPE, because it suppresses the default action, but it is unlike /ON ^TYPE because it does not turn off the display (what /ON ^TYPE does is it prefixes all the commands in the ON body with the ^ modifier, which turns off output for that command.) This new modifier does not prefix each command with ^, so any commands not so prefixed will generate their normal output.
The idea is you can use this for /on set's /ON %SET "HOLD_MODE *" {WINDOW HOLD_MODE $*} *** News -- 12/16/2003 -- Removed /SET BEEP_WHEN_AWAY This feature can be re-implemented in one line of script: /ON #MSG 617 * {IF (A) {BEEP}} *** News -- 12/16/2003 -- Removed /SET BEEP_ON_MSG The /SET BEEP_ON_MSG feature has been removed because it was only half-implemented, and even that half didn't work right. Keep an eye out for a scripted re-implementation of this in the future. *** News -- 12/16/2003 -- Runtime auto-append-of-$* removed Historically, the ircII language has allowed you to auto-append $* onto the end of an alias at runtime by creating an alias that does not refer to any of the command line arguments. For example, /alias m msg behaves at runtime as /alias m msg $* but with a performance penalty. This behavior has now been removed and if you wish to have $* appended to your aliases, you need to change them. This change would be backwards compatible with epic4. # End of file
https://sources.debian.org/src/epic5/2.0.1-1/UPDATES/
It has come to my attention recently that there is still a lot of confusion around Autodiscover, Outlook, and the way the two interact with one another. I'd like to try to clear some of that up by consolidating some of the information that I recently used to assist some of my own customers. Hopefully it will help you as well. By now, everyone SHOULD be familiar with Autodiscover and what it does. Essentially, Autodiscover does the following:

- Configures profile settings for Outlook 2007 and above clients as well as for various types of mobile devices
- Provides Outlook clients with the URL's they need to access Exchange Web Services and the OAB download URL. EWS provides free/busy and Out of Office services.

Autodiscover is a function of EXCHANGE. Outlook 2007 and above clients are coded in such a way that they can take advantage of Autodiscover. It all works together (or it should). For Outlook to take advantage of the new Autodiscover features in Exchange 2007 and above, Outlook must be told where to go, or who to talk to if you will, to get all those fancy new features. That's where the Outlook logic comes into play in these scenarios. The order of logic that Outlook uses when trying to figure out where to get that information from is as follows:

- SCP lookup – Outlook will get Autodiscover information from Active Directory. If that fails, Outlook begins its "non-domain" connected logic (as I like to call it), and will go in order down this list
- HTTPS root domain query – Outlook, if not domain joined, uses the RIGHT HAND SIDE of the user's SMTP address to do this query. So using the domain from my example, it will search for
- HTTPS Autodiscover domain query – If the above search yields no response, the next URL Outlook will try is
- HTTP redirect method
- SRV record query
- Local XML file
- cached URL in the Outlook profile (new for Outlook 2013)

There are some important points to remember so that you can determine what behavior to expect from Outlook.
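The lookup order above can be modeled as an ordered list of candidate sources that Outlook walks until one answers, with the two HTTPS probes derived purely from the right-hand side of the SMTP address. A hedged Python sketch of that ordering (the URL shapes follow the standard Autodiscover conventions; the function itself is my illustration, not Outlook's code):

```python
def autodiscover_candidates(smtp_address, domain_joined=False):
    """Return Autodiscover lookup steps in the order Outlook tries them."""
    domain = smtp_address.split("@", 1)[1]    # right-hand side of the address
    steps = []
    if domain_joined:
        steps.append(("SCP", "Active Directory service connection point"))
    steps += [
        ("HTTPS root",
         f"https://{domain}/autodiscover/autodiscover.xml"),
        ("HTTPS autodiscover",
         f"https://autodiscover.{domain}/autodiscover/autodiscover.xml"),
        ("HTTP redirect",
         f"http://autodiscover.{domain}/autodiscover/autodiscover.xml"),
        ("SRV", f"_autodiscover._tcp.{domain}"),
        ("Local XML", "local autodiscover.xml via registry override"),
        ("Cached URL", "last known good URL (Outlook 2013)"),
    ]
    return steps

for name, target in autodiscover_candidates("kris@corkandale.com"):
    print(f"{name}: {target}")
```

Walking this list for kris@corkandale.com shows exactly why the later registry tweaks in this post matter: each probe that fails or times out adds delay before Outlook reaches the step you actually want it to use.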
The first thing you need to know is:

- Is the Outlook client domain joined AND can it currently CONTACT a DC in the domain!

The highlighted portion of that statement is an important distinction. If you are running Outlook on a laptop and you disconnect the network cable from said laptop and then try to configure a new Outlook profile, for example, your Outlook client will use its NON domain joined logic at that point, because it CANNOT CONTACT ACTIVE DIRECTORY! Where this becomes important is what I'd call "one off", or "unusual", circumstances, such as the absorption of one company by another. I ran across two such scenarios recently, which is what prompted me to create this blog post. In these types of cases, it may be necessary to more fully control or tweak the Outlook to Exchange Autodiscover connection logic so that users will have a more robust experience, at least until they are fully absorbed into their new environment. In both recent cases, the client computers were connected to Domain A (the "old" company) but needed to connect to Exchange servers in Domain B (the "new" company). One case was complicated further by the presence of Exchange 2007 in BOTH Domain A AND Domain B. And we needed to "test" connecting users in Domain B back to mailboxes created for them in Domain A WITHOUT making any modifications to either environment. Since my client (meaning the actual computer I'm testing this with) is domain joined, and I am logged in with a domain account, Outlook automatically fills in my Name and E-mail address. If I just click next, Outlook will begin its logic and first try to locate Autodiscover settings via SCP. If my domain joined client can currently contact AD, it will get Autodiscover settings from AD and proceed with trying to configure the profile to connect to my mailbox (assuming I have a mailbox in that domain, assuming Exchange is installed and properly configured in the WingTipToys domain, etc).
The problem in the scenarios I referenced earlier, is that the customer wanted the Outlook client to connect to a mailbox in a completely different forest/domain. While you can certainly use the "Manually configure server" option, there are a few other things you could do. At minimum, we must change the email address that has been autopopulated for us to the one that has been assigned to us from the "other" domain. For example purposes, I'll be using kris@corkandale.com. For my customer, we decided to use a local autodiscover.xml file to override Outlook's default behavior, so that we could test some things without having to modify anything in either forest/domain. Think of this as a type of "hosts" file for Outlook. What you will do is create a file called autodiscover.xml that will look like this:

<?xml version="1.0" encoding="utf-8"?>
<Autodiscover xmlns="">
  <Response xmlns="">
    <Account>
      <AccountType>email</AccountType>
      <Action>redirectUrl</Action>
      <RedirectUrl></RedirectUrl>
    </Account>
  </Response>
</Autodiscover>

Then you need to configure the client machine to query that XML file by adding the following registry key. Tell Outlook to use a local xml file:

- for Outlook 2007: HKCU\Software\Microsoft\Office\12.0\Outlook\Autodiscover
- for Outlook 2010: HKCU\Software\Microsoft\Office\14.0\Outlook\Autodiscover

STRING_value <your_namespace> = path to XML file

Or 15.0 for Outlook 2013

Where <your namespace> would be the namespace that Outlook will be querying for (corkandale.com in my example), and path to xml is the path to the file. NOTE: My client will STILL have to be able to resolve autodiscover.corkandale.com to an IP address. So, be sure that DNS is properly configured! Even with the xml file in place, we still found that there was a pretty significant delay in both profile autoconfiguration AND in free/busy lookups.
We excluded the following, per the article:

- SCP lookup
- HTTPS root domain query

Once this had been done, we found things moved along much more quickly. This allowed my customer to do some more testing before fully integrating clients from Forest B into their own Forest.

How do you force Outlook to use the local XML autodiscover file? With the PreferLocalXML key, as noted here: support.microsoft.com/…/2212902

Pingback from Office 365 Migration–Notes from a newbie. Or Killer Mistakes I made. | Title (Required)
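The two exclusions can be expressed as a .reg sketch as well. The value names below are the documented Autodiscover exclusion switches; treat the Office version number as an assumption to adapt, as above.

```reg
Windows Registry Editor Version 5.00

; Skip the SCP lookup and the HTTPS root domain query.
; Outlook 2010 shown; use 12.0 / 15.0 for Outlook 2007 / 2013.
[HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\AutoDiscover]
"ExcludeScpLookup"=dword:00000001
"ExcludeHttpsRootDomain"=dword:00000001
```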
https://blogs.technet.microsoft.com/kristinw/2013/04/19/controlling-outlook-autodiscover-behavior/
CC-MAIN-2017-17
refinedweb
1,115
59.94
A name conflict may occur in React Native when you import components from React Native as well as from a third-party library under the same name. Just have a look at the snippet given below:

import React, { Component } from 'react';
import { View, Text } from 'react-native';
import Svg, { Circle, Text } from 'react-native-svg';

As you can see, we have imported two Text components: one from React Native and the other from react-native-svg. This causes a name conflict, and one of the two components will not work. To make use of both components, you have to change the code as follows:

import React, { Component } from 'react';
import { View, Text } from 'react-native';
import Svg, { Circle, Text as SvgText } from 'react-native-svg';

Yes, I imported the Text component from react-native-svg as SvgText, so that you will not face a name conflict. You can then use the text component of react-native-svg as SvgText in the file.

1 thought on "How to Solve Name Conflict While Importing Two Modules with Same Name in React Native"
https://reactnativeforyou.com/how-to-solve-name-conflict-while-importing-two-modules-with-same-name-in-react-native/
CC-MAIN-2021-31
refinedweb
218
50.09
#include <openssl/rand.h>

int RAND_egd(const char *path);
int RAND_egd_bytes(const char *path, int bytes);
int RAND_query_egd_bytes(const char *path, unsigned char *buf, int bytes);

RAND_egd() retrieves entropy from the daemon using the daemon's "non-blocking read" command, which shall be answered immediately by the daemon without waiting for additional entropy to be collected. The write and read socket operations in the communication are blocking.

RAND_query_egd_bytes() returns the number of bytes read from the daemon on success, and -1 if the connection failed. The PRNG state is not considered.

RAND_egd_bytes() is available since OpenSSL 0.9.6. RAND_query_egd_bytes() is available since OpenSSL 0.9.7. The automatic query of /var/run/egd-pool et al. was added in OpenSSL 0.9.7.
http://www.linuxhowtos.org/manpages/3/RAND_egd.htm
CC-MAIN-2021-10
refinedweb
173
66.13
Hello, I'm a programming and C++ beginner, so I apologize for my confusion. I have Windows 10, Intel IPP 2020 Update 1 installed, and Visual Studio 2019. My compiler is the local Windows debugger (I think... I'm trying to follow the forum details for question posting).

I am trying to use old image processing code. Some of the IPP libraries it uses are out of date. Accordingly, I installed the Intel legacy libraries here ( ). The compiler produced errors saying that libraries like ippm.lib could not be opened (LNK1104). I clicked on the project, clicked on Properties, went to Linker and changed the name of the additional dependency to ippm90lgc.lib. This got rid of the error.

Here is the problem: it is unclear which .lib file replaces ippCore.lib, as none follow the same naming format (ippCore90lgc.lib does not seem to exist).

Note: some additional changes I made were adding #include "ipps90legacy.h" and #include "ipps90legacy_redef.h", among other #includes, at the top of all source files. In VC++ Directories, I added the legacy package's include folder to Include Directories and its lib folder to Library Directories. These steps followed the directions in the readme.txt file in the IPP legacy download. I also added

#if !defined( __STDCALL )
#define __STDCALL IPP_STDCALL
#endif

to one of the .cc files, following the recommendations of a forum user here ( ). It successfully got rid of the error.

If I need to add more information, please let me know. Any help is much appreciated. Thanks, Colton

Hi Colton, there is no ippcore library in the legacy package. You should use "ippcoremt.lib" from IPP 2020. Regards, Sergey

Thank you very much. Colton

Hello again, I have an update. I used ippcoremt.lib as instructed. I am now getting 29 errors. All are either LNK2001 or LNK2019 "unresolved external symbol" errors (see attached photo). I will also attach a photo of all the .lib files I added to the Properties -> Linker -> Input -> Additional Dependencies section.
I hope this information is sufficient for diagnosing my problem. If not, I apologize; let me know what I need to add. Thanks again, Colton
https://community.intel.com/t5/Intel-Integrated-Performance/LNK-1104-Cannot-open-file-quot-ippCore-lib-quot/m-p/1186612
CC-MAIN-2020-50
refinedweb
360
62.44
Scripting Skype

Guest post by Vincent Oberle, Skype developer

Scripting simple tasks

I've been using Python as my main scripting language for a while now. The Skype Linux tools I wrote a little while ago were already in Python. They used a custom wrapper for the API that Skype exposes for 3rd party applications. There is however a much better Python wrapper available now, Skype4Py, which is even officially supported by Skype. With it, it becomes quite simple to script Skype for some simple tasks. An added benefit of using Python: your scripts will be portable across the three desktop platforms supported by Skype: Windows, Mac and Linux.

Here is an example of one of the small scripts I wrote to solve a problem. Skype multi-chats are great for discussions on a project or in a team. But sometimes I need an answer from each member of a multi-chat. Just throwing the question into the chat will systematically result in some people not answering (don't ask me why, I don't get it either...). So I wrote a little script that sends the text that follows the /all command as individual chat messages. For example, write in the multi-chat:

/all When do you go on holiday this summer?

and each member of the chat will receive the "When do you go on holiday this summer?" individual message. Here is the code:

import Skype4Py
import re

skype = Skype4Py.Skype()
skype.Attach()  # Attach to Skype client

def message_status(Message, Status):
    if Status != Skype4Py.cmsSent:
        return
    if Message.Sender != skype.CurrentUser:
        return
    r = re.search(r'/all (.+)', Message.Body)
    if r:
        msg = r.group(1).strip()
        for member in Message.Chat.MemberObjects:
            if member.Handle == skype.CurrentUserHandle:
                continue  # don't send to myself
            skype.SendMessage(member.Handle, msg)

skype.OnMessageStatus = message_status

while True:
    pass  # Infinite loop, Ctrl+C to break

Intercepting chat messages can be used for many things. Do you want a /sms command in a chat that will send an SMS to each member in the chat?
It will probably not take many more lines of code.

Moods to Twitter and command-line file transfer

I have blogged before about how the Skype4Py library makes it very easy to script Skype and add little features to it, in a portable way. I have written two such little scripts recently. Their code is short and simple, and while I have only tested them under Linux, they should also work under Windows and Mac. They can be found with my Skype tools.

The first script will send your own mood messages to Twitter. There are two reasons for doing that. First, many people use the mood message like Twitter, to say what they think or as a micro-blogging tool. So the mood message can be a very good "Twitter editor". The second reason is that Skype doesn't keep a history of your mood messages. This provides such a history, which can be private if you set your Twitter privacy settings accordingly.

The second script is there to make my life easier. Under Linux I'm often at the command line, and I often have to send some file to colleagues. Currently that requires going to the Skype UI, finding the contact, choosing the Send file option and navigating to the directory where my file to send is: lots of clicks. So I've written the little send_file.py script. Just specify the Skype name(s) or the display name(s) of the people you want to send the file to, and it will open the file selection window in the current directory. From there you just have to choose the file to send.

Why not specify the file name to transfer directly on the command line? The Skype API doesn't allow this, to prevent external applications from transferring files without the user's knowledge. Yet despite this limitation, the script makes the file transfer operation much faster.

Note that "send_file.py John" will be enough to send the file to all contacts that have John in their name.
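A sketch of the mood-to-Twitter idea. The pure helper below is runnable as-is; the Skype4Py wiring is commented out because it needs a live Skype client, and both the OnUserMood event name and the tweet() helper are assumptions to verify against the Skype4Py documentation.

```python
def to_tweet(mood, limit=140):
    """Trim a mood message to Twitter's length limit."""
    if len(mood) <= limit:
        return mood
    return mood[:limit - 3] + '...'

# Wiring sketch (not run here -- requires a live Skype client):
# import Skype4Py
# skype = Skype4Py.Skype()
# skype.Attach()
# skype.OnUserMood = lambda mood: tweet(to_tweet(mood))  # tweet() is yours

if __name__ == '__main__':
    print(to_tweet('Hacking on Skype4Py tonight'))
```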
http://skypejournal.com/blog/2008/01/scripting_skype.html
crawl-002
refinedweb
696
74.29
This article is a simple how-to on using the Adobe Acrobat 7 ActiveX control within a C++ application using MFC. The sample project included uses a minimal MDI MFC application to show that you can use multiple instances of the control within the same application. The code was kept simple and slim so that it focuses on the actual content. The source code and project file were written using Microsoft Visual C++ .NET 2003, but the code can easily be integrated into any version of Visual C++.

Previous versions of Adobe Acrobat did not allow the ActiveX control to be used within an external application. According to the Acrobat Developer FAQ, the ActiveX control was only developed for use in the Microsoft Internet Explorer web browser and was not supported or licensed for use in any other application. With the release of Adobe Acrobat 7, the Acrobat Reader ActiveX control is now fully supported and documented, which gives application writers much more flexibility when distributing PDF files to customers. The PDFs that are opened with this control can also take advantage of Acrobat JavaScript to communicate with each other, just as they would if they were opened within a web page.

The code is very simple and straightforward to use; it is mainly intended to show you how to import the control into your own application, not just for copying and pasting from the demo source. I chose to use a CWnd as the view instead of a CFormView for the demo project; this will help you understand how to create the control programmatically instead of just plopping it on a dialog from a toolbox.

The first step to using the Acrobat Reader 7 control is to create an MFC class from an ActiveX control using the Class Wizard. While in the Class View tab, right click on your project, select "Add->Add Class..." and choose the option "MFC Class From ActiveX". Now click "Open" and a new dialog will appear.
Under the section labeled "Available ActiveX Controls", find and select the "Adobe Acrobat 7.0 Browser Document <1.0>" control from the combo box. The interface "IAcroAXDocShim" will appear in the list of interfaces. Select the interface and click the ">" arrow to add a class for it. The wizard will then create the class CAcroAXDocShim and add it to your project.

You are almost finished; now you just need to use this generated class in your own window. Add a member variable of type "CAcroAXDocShim" in your window view class:

#include "CAcroAXDocShim.h"

class CAcroViewer : public CWnd
{
public:
    ...
// Attributes
private:
    CAcroAXDocShim m_ctrl;
};

Now that you have added the variable, you just need to create it and use it to open PDF files. To create it, first choose a control ID, which is just an integer that is not currently being used by another control in your application. Then, at the end of your window's OnCreate handler, call the control's Create function. The first parameter is a string for the window name; you can use anything for this value, I chose "AdobeWnd". The next parameter is the window style: you need WS_CHILD to make it a child and WS_VISIBLE to make it visible. Next is a RECT for the window position; I used all 0's because I added a WM_SIZE handler to resize the control when the window is resized. The last parameters are the window parent (this) and the control ID that was created.

const int CTRL_ID = 280;

int CAcroViewer::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CWnd::OnCreate(lpCreateStruct) == -1)
        return -1;

    // Create the control; just make sure to use WS_CHILD and WS_VISIBLE.
    if (!m_ctrl.Create("AdobeWnd", WS_CHILD | WS_VISIBLE,
                       CRect(0, 0, 0, 0), this, CTRL_ID))
    {
        AfxMessageBox("Failed to create adobe wnd");
        return -1;
    }

    return 0;
}
The ActiveX control is now created and visible in your application. The only problem now is that its size is 0. To fix this, we add a WM_SIZE handler to the window and resize the control to match the window's client rect. We must first make sure that the window is valid by calling IsWindow(m_hWnd). After this call we simply call GetClientRect() and pass that rect to MoveWindow() for our control.

void CAcroViewer::OnSize(UINT nType, int cx, int cy)
{
    CWnd::OnSize(nType, cx, cy);

    // Resize the control with the window.
    if (IsWindow(m_hWnd))
    {
        CRect rc;
        GetClientRect(rc);
        m_ctrl.MoveWindow(rc);
    }
}

The only thing left now is to open a PDF file. This can be done with one simple call to LoadFile on the control. The file parameter can be either a file on the local file system or a URL to a file on an HTTP server.

void CAcroViewer::Open(const char *file)
{
    // Just load the file that is opened.
    m_ctrl.LoadFile(file);
}

Unfortunately, even though this control now runs outside of the Microsoft Internet Explorer browser, it still seems to suffer from the bugs that it has had in all other versions of the Acrobat Reader. Occasionally it seems necessary to open Task Manager and "End Task" the "AcroRd32.exe" application to get things working properly again. It's not a problem related to this application; it's a problem with the control itself. When opening and closing several PDF files from within Internet Explorer, you occasionally run into a problem where the PDF viewer just freezes. I was hoping that Adobe would have fixed this bug with the release of Acrobat Reader 7, but it appears to still be there.
I hope that this article has helped those of you who want to use PDF files within your own applications and only have the Acrobat Reader. If you have the full version of Adobe Acrobat, there are many other options open to you for using PDF files within your own applications, such as OLE automation. I have worked pretty extensively with OLE automation of Acrobat within C++ and C# applications, and it is a much better approach in most cases, but it is limited to Acrobat Professional and not supported on systems that only have the Acrobat Reader.
http://www.codeproject.com/Articles/9537/Adobe-ActiveX-Control-with-MFC
CC-MAIN-2013-20
refinedweb
1,042
60.24
Here's my issue: I would like to call the getters/setters of one of my objects, but not directly; I want to do it by using a std::string. I found this, but it won't work in my case, I think because my functions aren't defined in my main method but in my Square class. Also, my functions are not all defined the same way: there's void(std::string), std::string(), void(int)... Here's an example of what I would like to do.

My object Square:

#include <map>
#include <functional>
#include <string>

class Square {
private:
    std::string name;
    int width;
    float happinessPoint; // extremely important for your square.
public:
    Square(std::string, int, float);
    void setName(std::string);
    void setWidth(int);
    void setHappinessPoint(float);
    std::string getName();
    int getWidth();
    float getHappinessPoint();
};

#include "Square.h"
#include <vector>

int main() {
    Square square = Square("Roger", 2, 3.5);
    // here in my magicalFunction I ask the users for the new values
    // for my square (all in std::string for now)
    std::vector<std::string> newValueForSquare = magicalFunction();
    for (unsigned int i = 0; i < newValueForSquare.size(); i++) {
        // here I have a function which tells me if my std::string
        // is in fact a float or an int
        // and I would like to call each of my setters one by one to
        // set my Square to some value I asked the user for before all that.
        // something like that:
        // someFunction("setName", "Henry")
    }
    return 0;
}
As stated several times in my comments, this is not an easy answer ! I needed such a class to write a generic test engine, and this is the code I used. It works really well with any type of function (except for routines with a return type of void -- a simple template specialization would solve it though) # include <functional> # include <tuple> template<int ...> struct seq { }; template<int N, int ...S> struct gens : gens<N - 1, N - 1, S...> { }; template<int ...S> struct gens<0, S...> { typedef seq<S...> type; }; struct callable_base { virtual void operator()() = 0; virtual ~callable_base() { } }; class Task { private: template<class RT, class Functor, class ...Args> struct functor : public callable_base { functor(RT& result, Functor func, Args ...args) : _ret(result) { _func = func; _args = std::make_tuple(args...); } void operator()() { _ret = call(typename gens<sizeof...(Args)>::type()); } template<int ...S> RT call(seq<S...>) { return (_func(std::get<S>(_args)...)); } private: std::function<RT(Args...)> _func; std::tuple<Args...> _args; RT& _ret; }; public: Task() { _functor = nullptr; } template<class RT, class Functor, class ...Args> Task(RT& result, Functor func, Args... args) { _functor = new functor<RT, Functor, Args...>(result, func, args...); } void operator()() { (*_functor)(); } ~Task() { delete _functor; } private: callable_base *_functor; }; The idea behind this code is to hide the function signature in the inner class Task::functor and get the return value in the first parameter passed to the Task(...) constructor. I'm giving this code first because I think it might help some people, but also because I think it is an elegant solution to your problem. Bear in mind that to understand most of the code, you need solid C++ knowledge. I'll detail the code in subsequent updates if needed. 
Here's how you'd use it: int main() { int retVal; std::string newName; std::map<std::string, Task *> tasks { {"setName", new Task(retVal, &Square::setName, &newName)} ... } /* Modify the name however you want */ ... tasks["setname"](); } This whole class could be optimized, of course, primarily thanks to C++14 and move semantics, universal references and all, but I kept it simple ~ A major problem is that you have to use pointers if you don't know the values of the parameters at the time you fill the task map. I'm working on another version to simplify this aspect, but I wanted to show you that C++ is not designed to do what you ask simply. Maybe you come from a functional or JS world, in which this would be trivial x) Update 2 I just wanted to point out that with C++14, you could omit the first 3 structures that are here to help me expand my tuple in an argument list using interger_sequence
https://codedump.io/share/Glp4Hror0pVi/1/how-to-call-a-function-from-an-object-with-a-stdstring
CC-MAIN-2017-34
refinedweb
785
61.97
HTML Tutorial: Angular 7/8 Template Syntax - Interpolation, ngFor & ngIf Directives

In this tutorial, we'll teach you HTML, which is used as the template language for Angular. We'll build a simple HTML "app" using the JAMstack approach, and we'll learn about advanced Angular concepts such as data binding, interpolation, loops with the ngFor directive, and conditional rendering with the ngIf directive. HTML is a prerequisite for any web development and one of the three pillars of the web, along with JavaScript (or compiled TypeScript) and CSS.

Note: HTML is the language of Angular templates.

What is HTML?

HTML, which stands for HyperText Markup Language, is a markup language that programmers can use to create the structure of web documents. It's one of the three pillars of the web, along with JavaScript and CSS. It is interpreted by a web browser, which transforms HTML source code composed of HTML tags into an actual web page with text, tables, forms, links, etc.

How Does Angular Use HTML?

A web browser can only understand plain HTML, JavaScript and CSS. While Angular uses HTML for creating views, it adds template structures such as loops and conditional directives, along with other syntax for variable interpolation and data binding, which are not part of HTML; they are therefore compiled ahead of time and transformed into plain HTML.

An Angular application is executed when a typical index.html file is served to the browser:
Except for the typical HTML tags, we also have a custom <app-root> tag which is used to include the Angular root component which is by convention called App. This will result in including all the children components and eventually the full Angular application. Angular also uses HTML for the individual components' templates which are used to create the views of the application. For example, the root component in an Angular application generated with the official Angular CLI has an associated template called app.component.html. This is not a convention as we should explicetly tell the component where to find the template. This is done using a templateUrl meta-property as follows: import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Angular Demo'; } We can also use an inline HTML template using a template property. Note: The <html>, <body>, and <base>tags have no useful role in Angular templates. Angular Template Syntax In Angular templates, you can use plain HTML but also special syntax and directives that allow you to take benefits of the full power of Angular features such as interpolation, binding, ngClass, ngStyle, ngFor and ngIf, etc. Interpolation Interpolation enables you to use variables and expressions in your HTML template, either between HTML element tags or within attribute assignments. You can embed a variable or expression in your HTML templates using the double curly braces, ``. For example, in the previous App component, we have a title variable with an initial value of Angular Demo. We can use interpolation to display this value in the related app.component.html template: <h1>{{title}}</h1> Angular will dynamically replace the title variable with its value in the template. Note: You can change default interpolation delimiter used by Angular by using the interpolation property in the component metadata. 
Angular Built-In Directives: ngFor and ngIf Angular provides many builtin directives such as ngFor for iterating over arrays of data and ngIf for conditionally rendering HTML elements. The ngFor directive allows you to iterate through arrays in your HTML templates while ngIf allows you to express conditions in your template. These are powerful programming-like constructs that extend HTML thanks to Angular. We'll see below how to use ngFor and ngIf with a practical example. Extending HTML with Angular Components and Directives Angular allows you to extend the HTML vocabulary of your templates with components and directives that can be considered as new elements and attributes. What is an HTML Document? An HTML document is simply a plain text document with the .html extension instead of .txt Most tags have opening and closing parts. Each tag begins with < symbol and ends with > symbol. For example: - The topmost tag that defines and HTML document is <html></html>. All the content should be contained between the opening and closing tags. - The body tag that defines the body of the web page is <body></body>. - The tag to add a title of the page is <title> … </title>. etc. Tags can have attributes that provide extra information to the browser for how to display the element. Web servers serve only plain HTML to web browsers without any server-side programming constructs. HTML is an essential requirement if you want to create websites. Most developers start their journey in web development by learning HTML, this is the case for both frontend developers that use JavaScript to create client-side apps and backend developers that use server-side languages like PHP or Python to create web apps. Notes: You can also use JavaScript frameworks like Angular or libraries like React or Vue to create apps with JS and HTML. All these tools, make use of components that use HTML as the template language for creating views. 
You can extend HTML by creating new tags using custom elements and web components which are standard browser technologies that don’t require a third-party tool, framework or library to be interpreted by the browser. Prerequisites You don’t need a fully-fledged development environment with a lot of tools installed to start learning HTML. You only need a text editor (that optionally has syntax highlighting for HTML) and a web browser like Chrome or Firefox or even IE. You also need some basic knowledge to work with your operating system, Windows, Linux or macOS, particularly how to create and open files. You can also use online development environments such as CodePen, JSBin or JSFiddle for trying HTML without creating files in your computer. Actually, these online environments are most useful if you are unable to create files in your system or you are using devices like phones and tablets while you are learning HTML, JavaScript or CSS. In this tutorial, I’ll assume you are working with a Unix-based terminal (present in macOS or Linux) and can be installed on Windows. Don’t worry though, the command we’ll use is for navigating to a working folder and creating a file, you can do this in your preferred way. HTML is not a programming language but instead a markup language that you can use to apply tags on some text to give it a semantic or meaning, create a structure for a page like header, footer, columns, sections and navigation menus. It can be also used to add images and videos to your pages from local or external sources. Note: A programming language has advanced constructs and features like loops for iterating over arrays of data and conditional statements for making decisions etc. HTML doesn't have these constructs so It can’t be considered as a programming language since It just displays and formats visual elements on a web page. Many template languages are built on top of HTML to provide these constructs. 
Notes: You can also use JavaScript frameworks like Angular or libraries like React or Vue to create apps with JS and HTML. All these tools make use of components that use HTML as the template language for creating views.

You can extend HTML by creating new tags using custom elements and web components, which are standard browser technologies that don't require a third-party tool, framework or library to be interpreted by the browser.

Prerequisites

You don't need a fully-fledged development environment with a lot of tools installed to start learning HTML. You only need a text editor (ideally one with syntax highlighting for HTML) and a web browser like Chrome or Firefox, or even IE. You also need some basic knowledge of how to work with your operating system (Windows, Linux or macOS), particularly how to create and open files.

You can also use online development environments such as CodePen, JSBin or JSFiddle to try HTML without creating files on your computer. These online environments are most useful if you are unable to create files on your system, or if you are using devices like phones and tablets while learning HTML, JavaScript or CSS.

In this tutorial, I'll assume you are working with a Unix-based terminal (present in macOS and Linux, and installable on Windows). Don't worry though: the commands we'll use are for navigating to a working folder and creating a file, and you can do this in your preferred way.

HTML is not a programming language but a markup language that you can use to apply tags to some text to give it a semantic meaning, and to create the structure of a page: header, footer, columns, sections and navigation menus. It can also be used to add images and videos to your pages from local or external sources.

Note: A programming language has advanced constructs and features like loops for iterating over arrays of data and conditional statements for making decisions, etc. HTML doesn't have these constructs, so it can't be considered a programming language, since it just displays and formats visual elements on a web page. Many template languages are built on top of HTML to provide these constructs. For instance, Angular provides a template syntax that includes data binding, like interpolation for easily updating the page with data from the parent component, and directives such as *ngFor and *ngIf for iterating over data and displaying HTML elements conditionally.

Creating the very basic HTML document

Go ahead and open a terminal and run the following commands:

$ cd ~
$ mkdir my-first-webpage
$ cd my-first-webpage
$ touch index.html

We simply navigate to the home folder. Next, we create a folder called my-first-webpage. Then we navigate inside it and create an index.html file. Now, use a text editor (Vim or whatever you prefer) to open the index.html file. Next, simply add the following code:

<!DOCTYPE html>
<html>
<head>
    <title>My first HTML page</title>
</head>
<body>
    <p>This is my first web page</p>
</body>
</html>

We first add a doctype, which must be present. Nowadays, in modern browsers that understand HTML5, it's mostly useless, but still required. In the old days, it was used to link to type definition documents that contain the syntax rules of the language. According to Wikipedia, this is the definition of a doctype:

A document type declaration, or DOCTYPE, is an instruction that associates a particular SGML (for example, a webpage) with a document type definition (DTD) (for example, the formal definition of a particular version of HTML 2.0 - 4.0) or XML document. In the serialized form of the document, it manifests as a short string of markup that conforms to a particular syntax.

Next, we add an opening <html> tag with its closing </html> tag, which mark the start and end of the HTML code. Between these two tags, you can add the necessary code for creating your web page. Next, we add the head section of the document using the <head> and </head> tags. The <head> element is sort of a container for all the tags that represent information about your document, such as the title, which is added using a <title> element.
It can also contain inline CSS styles, links to external CSS files, and meta tags. Next, we add the <body></body> section, which contains the content of your web page. Inside the body, we add a This is my first web page paragraph, wrapped in the <p> and </p> tags. Now, go ahead and open the index.html file with your web browser (make sure to save its content in the text editor first). You should not see the tags, just a plain rendered page with This is my first web page, just like in the following screenshot:

Escaping Special HTML Characters

HTML has a set of special characters, such as < and >, which are used to surround tag names, as well as characters like " and ', used for the values of tag attributes, and &. So, how can you display these characters in your HTML page, i.e. tell the browser not to interpret them but simply display them like regular content? You can do this by escaping these characters using their codes (character entities). Each code begins with & and ends with ;. For example, &lt; displays <, &gt; displays >, &amp; displays &, and &quot; displays ".

HTML Comments

When you are writing HTML code, you may need to comment your code, but you don't want these comments to appear in the web page, since they are only intended for you or other developers who read the source code of your web page. To write a comment, HTML provides the <!-- and --> markers. You should surround your comment with them. For example:

<!-- This is a comment -->

Note: In web browsers, you can read the source code of any web page that is currently displayed, without any restrictions, using View page source from a contextual menu or by pressing CTRL + U on your keyboard. These instructions are valid for Chrome, but you should find similar instructions for other browsers.
Note: You also need to know about HTML Headers, HTML Paragraphs, HTML Sections, HTML Tables, and HTML Forms.

Let's create a simple HTML website with pages like home, about, and contact. On the contact page, we'll add an HTML form, and thanks to cloud services, users can submit their information without us needing to add a backend to our app: we'll use a cloud service, FormSpree, which sends us what users submit through our form via email.

Can you build something useful with HTML alone? Yes, you can! Not fully-fledged apps, but you can create a static HTML website which you can use to share information with your visitors. You'll be able to create multiple pages and add navigation between them, and you can add content, paragraphs, divisions, sections, headlines, and horizontal lines, which are enough to present a document or article with a basic appearance.

But if you want to take it further, you can use a front-end framework like Angular to build powerful apps that can be hosted on a web server, and even rendered on the server before being sent to the browser, which is required for SEO and performance purposes.

Note: You can actually create fully-working static sites following the modern JAMstack approach.

What is JAMstack? According to the official website:

JAMstack: noun \'jam-stak'\ Modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

JAMstack stands for JavaScript, APIs, and Markup. The term was first used by Mathias Biilmann to describe a modern web development architecture based on JavaScript, APIs, and prebuilt Markup. When you build a JAMstack app, you only serve HTML using a CDN instead of doing any server-side preprocessing. This results in faster loading times and fewer security problems.

You may be thinking: how can we build real-world, useful web apps using JAMstack and without a server, since even a simple form submission needs a backend server?
But in fact, the web has seen a big rise in SaaS products that let you add almost any kind of server functionality via a simple API, without the need to build it yourself. For example:

- SaaS products like Snipcart, Foxy.io, Moltin, and Shopify's Buy Button can be used for integrating e-commerce services into your JAMstack app.
- SaaS products like FormKeep, Typeform, Formspree, and even Netlify can be used for form processing.
- SaaS products like Algolia, Google Custom Search, Fuse.js, Lunr.js, and List.js can be used for integrating search functionality.
- Apps like Disqus and Staticman can be used for user-generated content like comments & reviews, etc.

Building a Web App with Angular 7/8 and HTML

Now, let's build a JAMstack application with Angular, HTML, and a third-party API. The app is simply a news app that can be served from a CDN like Netlify. It's made of only HTML, CSS, and JavaScript.

Note: Angular makes use of TypeScript, but this will be compiled to plain JavaScript on the development machine before being hosted on a CDN.

Installing Angular CLI 8

The Angular CLI is the official tool for initializing and working with Angular projects. It's based on Node.js and can be installed from npm. Open a new terminal and run the following command:

```shell
$ npm install -g @angular/cli
```

Creating an Angular 8 Project

Next, we can initialize an Angular 8 project using the following command:

```shell
$ ng new angular-html-demo
```

You'll be asked by the CLI: Would you like to add Angular routing? Type y for Yes. Which stylesheet format would you like to use? Choose the first option, which is CSS.

Next, navigate to your project's root folder and run the development server using the following commands:

```shell
$ cd angular-html-demo
$ ng serve
```

You'll be able to visit your Angular app by pointing your web browser to the address:

This is how your app looks, but this is just placeholder content that you need to replace with your actual content.
Let's open the src/app/app.component.html template that is associated with the root component and remove all the placeholder markup, leaving only the router outlet directive for now:

```html
<router-outlet></router-outlet>
```

If you are not familiar with the standard HTML element tags, you might think that this is part of HTML, but it's actually an Angular directive from the router library that tells the client-side router where to insert the component matching the currently-visited path.

Creating and Adding Components to the Router Configuration

Angular components and directives allow you to reuse and extend HTML templates. In our example application, we can have the following components:

- Header, navbar and footer shell components,
- Home and about page components.

The home and about page components are mapped to specific routes, so they only appear when we navigate to the appropriate route. Meanwhile, the header, navbar and footer components belong to the shell of the application, which resides in the src/app/app.component.html template.

Open a new terminal and run the following commands to generate the components and their HTML templates:

```shell
$ ng generate component home
$ ng generate component about
$ ng generate component header
$ ng generate component navbar
$ ng generate component footer
```

We'll have a folder for each component, with its TypeScript and CSS files and its HTML template. You can see that Angular provides a more powerful structure for our application than if we were using plain JavaScript, CSS and HTML files.

Adding Angular Routing

Angular routing allows you to control which HTML templates are rendered when a specific route is visited in the browser, which allows you to create fully-fledged apps with JavaScript and HTML entirely on the client side, without resorting to server routing.

Let's add the home and about components to the router configuration. Next, let's add the header, navbar and footer components to the app shell.
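The routing code itself is not shown at this point on the page; a minimal sketch of what the routes array in src/app/app-routing.module.ts might look like (the component names match those generated above, the paths are assumptions) is:

```typescript
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { HomeComponent } from './home/home.component';
import { AboutComponent } from './about/about.component';

// Map each URL path to the component that should be
// rendered inside <router-outlet>.
const routes: Routes = [
  { path: 'home', component: HomeComponent },
  { path: 'about', component: AboutComponent },
  // Redirect the empty path to the home page.
  { path: '', redirectTo: '/home', pathMatch: 'full' },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```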
Open the src/app/app.component.html file and update it as follows:

```html
<app-header></app-header>
<app-navbar></app-navbar>
<router-outlet></router-outlet>
<app-footer></app-footer>
```

How do we know the tag name of each component? You can get (and even change) the selector used for a component from the component's associated TypeScript file. For example, this is the header component:

```typescript
import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-header',
  templateUrl: './header.component.html',
  styleUrls: ['./header.component.css']
})
export class HeaderComponent implements OnInit {

  constructor() { }

  ngOnInit() {
  }

}
```

The selector property contains the app-header value, which means we can include the component in HTML using <app-header /> or <app-header></app-header>.

If you save your changes, you should see the home page.

Now, let's add some HTML markup to our components. Open the src/app/home/home.component.html file and add the following HTML markup:

```html
<h2>Home</h2>
<p>This is a JAMstack app built with Angular 8</p>
```

Next, open the src/app/about/about.component.html file and add the following HTML code:

```html
<h2>About us</h2>
<p>This app is built by Techiediaries.</p>
```

Next, open the src/app/navbar/navbar.component.html file and add the following HTML code:

```html
<a [routerLink]="['/home']">HOME</a>
<a [routerLink]="['/about']">ABOUT US</a>
```

We use the standard <a> tag in HTML, but with Angular's routerLink directive to specify the navigation routes instead of the standard href attribute.

Next, open the src/app/header/header.component.html file and add the following markup:

```html
<header>
  <h1>Angular 8 + HTML App</h1>
</header>
```

Next, open the src/app/footer/footer.component.html file and add the following markup:

```html
<footer>
  <span> Copyright 2019 </span>
</footer>
```

Note: You can see how we are able to use HTML fragments to create the UI of our application, thanks to Angular routing and the powerful template syntax.

Now, how do we get and display data in our application?
Angular provides an HTTP client that we can use to fetch data from third-party APIs. Before we can use it, we only need to import its module and add it to the root module of our application. Open the src/app/app.module.ts file and update it as follows:

```typescript
// [...]
import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
    // [...]
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    HttpClientModule,
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

Next, open the src/app/home/home.component.ts file, then import and inject HttpClient as follows:

```typescript
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html',
  styleUrls: ['./home.component.css']
})
export class HomeComponent implements OnInit {

  API_ENDPOINT = ''; // set this to your news API endpoint
  data = [];

  constructor(private httpClient: HttpClient) { }

  ngOnInit() { }

}
```

Next, in the ngOnInit() method, call the get() method to fetch data from the remote API:

```typescript
  ngOnInit() {
    this.httpClient.get(this.API_ENDPOINT).subscribe((data) => {
      console.log(data['articles']);
      this.data = data['articles'];
    });
  }
```

Now you should see your fetched data displayed in the console of your web browser, but how do we display these data in the corresponding HTML template?

Angular Template Syntax: Interpolation, ngFor and ngIf

Here comes the magic of Angular's template syntax and directives such as ngFor and ngIf. Open the src/app/home/home.component.html file and update it as follows:

```html
<div class="container">
  <div *ngIf="data.length === 0">
    <p> Loading data... </p>
  </div>
  <div class="card" *ngFor="let article of data">
    <img src="{{ article.urlToImage }}">
    <div class="card-body">
      <h3>{{ article.title }}</h3>
      <p> {{ article.description }}</p>
      <a href="{{article.url}}">Read story</a>
    </div>
  </div>
</div>
```

Using the ngIf directive, we conditionally render the "Loading data..." message while the data array is empty. When data is fetched and populated in the data array, the message disappears, and ngFor takes care of iterating through the data array and displaying each article. We use interpolation to display the value associated with each article property.

Before we see the final result, let's add a bit of CSS styling to our HTML template.
Open the src/app/home/home.component.css file and add the following CSS:

```css
h1 {
  color: purple;
  font-family: 'kalam';
}

.container {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(305px, 1fr));
  grid-gap: 15px;
}

.container > .card img {
  max-width: 100%;
}
```

Next, open the src/app/header/header.component.css file and add the following CSS code:

```css
:host {
  color: rgb(25, 143, 221);
  border-top: 0px solid #9154f3;
  border-bottom-width: 1px;
  padding: 0 17px;
}
```

Now, this is our home page with the fetched data:

Conclusion

In this HTML tutorial, we've learned about HTML with an Angular 7/8 example. We've seen the basic concepts of HTML and how Angular extends HTML with a powerful template syntax and directives such as ngFor and ngIf. We've also seen the modern JAMstack approach of building apps with JavaScript, HTML and CSS, which can be served via a CDN and consume data via APIs.

Note: We also publish our tutorials on Medium and DEV.to. If you prefer reading on these platforms, you can follow us there to get our newest articles. You can reach the author via Twitter: @ahmedbouchefra
https://www.techiediaries.com/html-tutorial/
STAT 19000: Project 3 — Spring 2022

Motivation: We've now been introduced to a variety of core Python data structures. Along the way we've touched on a bit of pandas and matplotlib, and have utilized some control flow features like for loops and if statements. We will continue to touch on pandas and matplotlib, but we will take a deeper dive in this project and learn more about control flow, all while digging into the data!

Context: We just finished a project where we were able to see the power of dictionaries and sets. In this project we will take a step back and make sure we are able to really grasp control flow (if/else statements, loops, etc.) in Python.

Scope: Python, dicts, lists, if/else statements, for loops, break, continue

Dataset(s)

The following questions will use the following dataset(s):

/depot/datamine/data/iowa_liquor_sales/clean_sample.csv

Questions

Question 1

Let's begin this project by taking another look at question (4) from the previous project. Although we were able to reduce the number of comparisons down a lot (from around 15000000 squared to 40000 squared) — it is still terrible and very very slow. To see just how slow, let's time it!

```python
from block_timer.timer import Timer
import pandas as pd

# read in the intruder dataset and get the unique ids
df_intruder = pd.read_csv('/depot/datamine/data/noaa/2020_sampleB.csv', names=["station_id", "date", "element_code", "value", "mflag", "qflag", "sflag", "obstime"])
intruder_ids = df_intruder["station_id"].dropna().tolist()
unique_intruder_ids = list(set(intruder_ids))

# read in the original dataset and get the unique ids
df_original = pd.read_csv('/depot/datamine/data/noaa/2020_sample.csv', names=["station_id", "date", "element_code", "value", "mflag", "qflag", "sflag", "obstime"])
original_ids = df_original["station_id"].dropna().tolist()
unique_ids = list(set(original_ids))

with Timer():
    # compare the two lists
    for i in unique_intruder_ids:
        if i not in unique_ids:
            print(i)
```

Yikes!
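Before looking at the fix, the key idea can be seen on toy data: sets support fast membership tests and set algebra, so the nested scan above is unnecessary. (These are made-up station ids, not the NOAA data.)

```python
# Toy illustration of why sets make this comparison fast.
unique_intruder_ids = {"A1", "B2", "C3", "Z9"}   # hypothetical station ids
unique_ids = {"A1", "B2", "C3"}

# Ids present in the intruder data but not in the original data,
# computed in one set-difference operation instead of a nested loop.
only_in_intruder = unique_intruder_ids - unique_ids

print(sorted(only_in_intruder))  # → ['Z9']
```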
That's really not very good! So, what is the better way? To take advantage of the set object! Specifically, read the section titled "Operating on a Set" here, and think of a better way to get this value! Test out the new method — how fast was it compared to the method above?

Code used to solve this problem. Output from running the code.

Question 2

Unlike in R, where traditional loops are rare and typically accomplished via one of the apply functions, in Python loops are extremely common and important to understand. In Python, any iterable can be looped over. Some common iterables are: tuples, lists, dicts, sets, pandas Series, and pandas DataFrames.

Let's get started by reading in our dataset and taking a look.

```python
import pandas as pd

df = pd.read_csv("/depot/datamine/data/iowa_liquor_sales/clean_sample.csv", sep=";")
```

Use the following code to extract the sales amounts in dollars into a list.

```python
sales_list = df['Sale (Dollars)'].dropna().tolist()
```

Write a loop that uses sales_list and sums up the total sales, and prints the average sales amount.

Of course, pandas provides a method to iterate over the Sale (Dollars) Series as well! It would start as follows.

```python
for idx, val in df['Sale (Dollars)'].dropna().iteritems():
    # put code here for series loop
```

Use this method to calculate the average sales amount. Which is faster? Fill in the following skeleton code to find out.

```python
from block_timer.timer import Timer

with Timer(title="List loop"):
    # code for list loop

with Timer(title="Series loop"):
    # code for series loop
```

Code used to solve this problem. Output from running the code.

Question 3

You may have been surprised by the fact that iterating through the Series was slower than iterating through a list. Here is a good post explaining why it is so slow! So why use pandas? Well, it starts to be pretty great when you can take advantage of vectorization.

Let's do a new exercise. Instead of calculating the average sales amount, let's calculate the z-scores of the sales amounts.
Just like before, do this using 2 methods. The first is to just use for loops, the len function, and the sum function. The second is to use pandas. I've provided you with the pandas solution.

How do you calculate a z-score?

$\frac{x_i - \mu}{\sigma}$

Where $\sigma = \sqrt{\sum_{i=1}^{n}{\frac{(x_i - \mu)^{2}}{n}}}$

- $n$ is the number of elements in the list.
- $x_i$ is the ith element in the list.
- $\mu$ is the mean of the list.
- $\sigma$ is the standard deviation of the list.

Give it a shot and fill in the code below. What do the results look like?

```python
import pandas as pd
from block_timer.timer import Timer

# df = pd.read_csv("/depot/datamine/data/iowa_liquor_sales/clean_sample.csv", sep=";")
sales_list = df['Sale (Dollars)'].dropna().tolist()

with Timer(title="Loops"):
    # calculate the mean
    mean = sum(sales_list)/len(sales_list)

    # calculate the std deviation
    # you can use **2 to square a value and
    # **0.5 to square root a value

    # calculate the list of z-scores

    # print the first 5 z-scores
    print(zscores[:5])

with Timer(title="Vectorization"):
    print(((df['Sale (Dollars)'] - df['Sale (Dollars)'].mean())/df['Sale (Dollars)'].std()).iloc[0:5])
```

Code used to solve this problem. Output from running the code.

Question 4

While it is nearly always best to try and vectorize your code when using pandas, sometimes it isn't possible to do perfectly, or it just isn't worth the time to do it. For this question, we don't care about vectorization.

We want to look at Volume Sold (Gallons) by Store Number. Start by building a dict called volume_dict that maps Store Number to Volume Sold (Gallons).

Since we only care about those two columns now, let's remove the rest.

```python
df = df.loc[:, ('Store Number', 'Volume Sold (Gallons)')]
```

You can loop through the DataFrame as follows.

```python
for idx, row in df.iterrows():
    # print(idx, row)
```

There, idx contains the row index, and row contains a Series object containing the row of data.
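As a quick sanity check of the formula above, here it is applied to a toy list (not the liquor data):

```python
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

n = len(values)
mean = sum(values) / n
# population standard deviation, matching the formula above
std = (sum((x - mean) ** 2 for x in values) / n) ** 0.5

zscores = [(x - mean) / std for x in values]

print(mean, std)    # → 5.0 2.0
print(zscores[:3])  # → [-1.5, -0.5, -0.5]
```

Note that the same loop structure works on sales_list; only the input list changes.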
You could then access either of the columns using row['Store Number'] or row['Volume Sold (Gallons)'].

Build your volume_dict.

Code used to solve this problem. Output from running the code.

Question 5
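The accumulation pattern needed here can be sketched on toy data (made-up store numbers and volumes; the real loop would pull the values from df.iterrows() instead):

```python
# Toy (store_number, volume) pairs standing in for DataFrame rows.
rows = [
    (2190.0, 50.0),
    (4829.0, 75.0),
    (2190.0, 25.0),
]

volume_dict = {}
for store, volume in rows:
    # dict.get with a default of 0 handles the first time a store appears
    volume_dict[store] = volume_dict.get(store, 0) + volume

print(volume_dict)  # → {2190.0: 75.0, 4829.0: 75.0}
```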
https://the-examples-book.com/projects/current-projects/19000-s2022-project03
Writing Javascript is not trivial. The dynamicity it brings, its function-level scoping, and its lack of class syntax all make it really hard to tackle. Well, no longer...

I wasn't a javascript developer until 3-4 months ago, and all the things I mentioned above beat me on occasion. Then I met Closure Tools at work. It is basically a set of tools and libraries to make your javascript writing easier. The sub-projects that might be of interest (in random order):

Javascript Compiler

It is not a compiler in the sense that all your javascript is converted into a binary-like language, but it is a compiler that will go through your code and, depending on the optimization level, analyze it and eliminate dead code, inline functions and variables, replace variables with shorter names, and so on, which minimizes your code size as well as making it more performant. Using JSDoc comments, it enforces static type safety, hence making your code less prone to errors.

Javascript Linter

At a company like Google, where your products are JS+HTML, your code can easily grow out of control. With people from different programming backgrounds having to write JS+HTML on occasion, it is really hard to keep a readable codebase. This tool was a result of Google's own need for a codebase that scales, and was written to ensure that any js code conforms to Google's Javascript Style Guide.

Javascript Library

Google deals with all the browsers more than anyone, and has tried to make that easier. This gave birth to the Closure library. It's a library that has utilities like dom manipulation, animation, ui widgets, server communication and such. It comes with a nice unit test framework as well. Sure, there are millions of libraries that do similar jobs, but its modular and well-tested structure makes it one of the most important tools in my toolbelt. It also provides primitives like goog.inherits and goog.base to allow developers to write OOP code similar to Java or C#.
The library itself has a very good OO structure, it is well-documented, and almost all of it is written with a cross-browser mindset.

Javascript Templating Library

The tools come with a relatively mature templating system. The templates you write look somewhat similar to mustache in their syntax. The nice thing is that the templates you write can be used both on the server side and on the client side. The nicer thing is that templates are translated into java and javascript code, as opposed to being interpreted; thus, they are super-fast. LinkedIn engineers made a good comparison of different template engines, so do check it yourself.

CSS Compiler

This is very similar to Less or Sass, so if you need a CSS compiler, add this to your list of tools to compare. I don't have any experience with either of these to give a fair comparison. Below is an example (taken from here):

```css
@def BG_COLOR rgb(235, 239, 249);
@def DIALOG_BORDER_COLOR rgb(107, 144, 218);
@def DIALOG_BG_COLOR BG_COLOR;

body {
  background-color: BG_COLOR;
}

.dialog {
  background-color: DIALOG_BG_COLOR;
  border: 1px solid DIALOG_BORDER_COLOR;
}
```

The code above will be converted to:

```css
body {
  background-color: #ebeff9;
}

.dialog {
  background-color: #ebeff9;
  border: 1px solid #6b90da;
}
```

Of course, you can also specify to minimize it.

Javascript Compiler

This article will talk about JSCompiler, as it is the tool I want to see people using more. Due to its very flexible and dynamic nature, Javascript also suffers from a lack of good compiler support. Dynamic analysis is good, but most of the time we want our javascript code to be type safe, and we want it to be easily refactorable. When you don't have compiler support, any little change you make is likely to result in undetectable errors. If you change a parameter name for a function, it will break the function.
If you remove a parameter, it will break all other functions that depend on it, unless you exhaustively and carefully search for every single use case. Well, no more...

Using the JSDoc annotations you should already be writing, the closure compiler does almost everything a compiler would do. It does type checking for function parameters and variables, handles dead code elimination to remove pieces of the code that are not actually used (or, if you want, you can choose to keep them), and more. With the help of the compiler, you can write code in a very elegant way while still keeping it pretty small, because it also does minimization. The only downside is that you have to go by its rules. It has its own restrictions you have to follow. Javascript enthusiasts will probably complain about how this restricts javascript's flexibility and such, but there it goes.

Below are some of the examples that I think you will like:

Object/Class Constructors

```javascript
/**
 * Some class description.
 * @constructor
 */
namespace.Class = function() {
  /**
   * The field definition.
   * @type {string}
   * @private
   */
  this.field1_ = 'fieldValue';

  /**
   * The field definition.
   * @type {number}
   */
  this.field2 = 23;
}
```

A couple of things to note here:

- The @constructor directive indicates that this is a constructor. This is important because the meaning of "this" changes depending on where you are using it, and the closure compiler makes sure you know what you're doing this way.
- The @type directive indicates the type of the variable. If, at any point in the source code, you try to assign a value with a different type, this will make sure it fails.
- The @private directive makes a variable private. The compiler will complain if you try to access it outside the class definition.

Constants
Compiler-Modifiable Constants If you want to change something during compile time, such as including a code when you are debugging but not when it is released, this is your buddy. /** @define {boolean} */ namespace.IsDebug = true; Dictionary vs Property Accessors In javascript, you can access to any property in two ways (as every object is actually a dictionary): 1. Property accessor such as myObject.property 2. Dictionary Indexer such as myObject['myproperty'] The reason we need to distinguish the two is that the following code is technically correct: var myObject = {}; myObject['hello'] = 123; console.log(myObject.hello); However, it shouldn't be the way you write your code. If you want to use an object as a dictionary, you should be using [''] accessor. The compiler should give you an error when you want to say myobject.hello. The way tekk that to closure compiler: /** * @dict */ var myObject = {}; myObject['hello'] = 123; console.log(myObject.hello); // This will fail. Exposing vs Not Exposing By default, closure compiler will remove unused properties and or rename them with a shorter name to reduce the code size. This is not always ideal. You can prevent that by marking a property with @expose /** * Some class description. * @constructor */ namespace.Class = function() { /** * The field definition. * @type {string} * @expose */ this.field1 = 'fieldValue'; } Be careful with this, since if this is a library code, deleting a property that you expose might be harmful to your users. Inheritance Closure makes inheritance a bit easier since for some JAvascript prototype inheritance is a bit hard to grasp and all they want is a java like mechanism. /** * Some class description. * @constructor */ namespace.BaseClass = function() { /** * The field definition. * @type {string} */ this.field1 = 'fieldValue'; } /** * Some class description. * @constructor * @extends {namespace.BaseClass} */ namespace.Class = function() { goog.base(this); /** * The field definition. 
* @type {string} */ this.field2 = 'fieldValue'; } goog.extends(namespace.Class, namespace.BaseClass) Here @extends directive ensure the compiler treats the derived Class as a subclass of BaseClass. Any access to property of BaseClass will not cause a compiler error. goog.base will invoke base class constructor. goog.extends() will do the simplified prototype inheritance for you. Interface/Implements Due to it's dynamic nature, javascript doesn't really have the concept of interface and implements. However, in order to make sure you are not missing any property or function definition in objects you want to pass as parameters or such, this is how you do it. /** * An interface. * @interface */ function Interface1() {}; Interface1.prototype.doSomething = function() {}; Interface1.prototype.doSomethingElse = function() {}; /** * An implementation. * @implements {Interface1} */ function Class1() {}; Class1.prototype.doSomething = function() {}; For the following case, the compiler will fail as you have forgotten to implement method "doSomethingElse" in Class1. Parameters and Parameter type check /** * Does something. * @param {!string} mandatoryParam1 The description for param 1. * @param {?number} mandatoryParam2 The description for param 2. * @param {number=} opt_param3 The description for param 2. */ Class1.prototype.doSomething = function(mandatoryParam1, mandatoryParam2, opt_param3) {}; As you see, for each parameter, we have @param keyword, a type expression, the name of the parameter and description. Here, the most important part is the type expression. - {!string} specifies that this value is a string, and it cannot be null. - {?number} specifies that this is a number or null. - {number=} specifies that this is an optional parameter - this does not need to be specifies when calling the function. Conclusion Make sure you take a look at closure tools. They will make your front end (and sometimes back-end?) development much more managable.
http://tech.pro/tutorial/1256/introduction-to-closure-tools
Gatsby is a powerful platform for building marketing sites, blogs, e-commerce frontends, and more. You can source data from static files and any number of content management systems. You can process images, add support for our favorite styling technique, transform markdown, and just about anything else you can imagine. At its core, a Gatsby site is a combination of functionality centered around a single config file, gatsby-config.js. This config file controls an assortment of site metadata, data type mapping, and most importantly, plugins. Plugins contain large amounts of customizable functionality for turning markdown into pages, processing components into documentation, and even processing images. Scaling Gatsby Creating a single Gatsby site works super well. The power of gatsby-config.js, plugins, and more coalesce to make the experience a breeze. However, what if you want to re-use this configuration on our next site? Sure, you could clone a boilerplate each time, but that gets old, quickly. Wouldn't it be great if you could re-use our gatsby-config.js across projects? That's where starters come in. Improving Reusability with Starters One way to create more sites with similar functionality faster is to use starters. Starters are basically whole Gatsby sites that can be scaffolded through the gatsby CLI. This helps you start your project by cloning the boilerplate, installing dependencies, and clearing Git history. The community around Gatsby has built a lot of different starters for various use cases including blogging, working with material design, and documentation. The problem with starters is that they're one-offs. Starters are boilerplate projects that begin to diverge immediately from upstream and have no easy way of updating when changes are made upstream. There's another approach to boilerplate that has become popular in recent years that fixes some problems with the boilerplate approach such as updating with upstream. 
One such project is create-react-app. In the Gatsby world, you can improve on starters similarly with themes. Truly Reusable Themes in Gatsby If a single gatsby-config.js encodes the functionality of a whole Gatsby site, then if you can compose the gatsby-config.js data structure together you have the base for themes. You can encode portions of our gatsby-config as themes and re-use them across sites. This is a big deal because you can have a theme config (or multiple configs) that composes together with the custom config (for the current site). Upgrading the underlying theme does not undo the customizations, meaning you get upstream improvements to the theme without a difficult manual upgrade process. Why Themes? Defining themes as the base composition unit of Gatsby sites allows us to start solving a variety of use cases. For example, when a site gets built as part of a wider product offering it's often the case that one team will build out a suite of functionality, including branding elements, and the other teams will mostly consume this functionality. Themes allow us to distribute this functionality as an npm package and allow the customization of various branding elements through our gatsby-config.js. Mechanics of Theming At a base level, theming combines the gatsby-config.js of the theme with the gatsby-config.js of your site. Since it's an experimental feature, you use an experimental namespace to declare themes in the config. Themes often need to be parameterized for various reasons, such as changing the base url for subsections of a site or applying branding variables. You can do this through the theme options if you define our theme's gatsby-config as a function that returns an object. Themes also function as plugins and any config passed into the theme in your gatsby-config.js will also be passed to your theme's gatsby-*.js files as plugin options. 
This allows themes to override any settings inherited from the theme's own plugin declarations or apply gatsby lifecycle hooks such as onCreatePage. Check out the theme examples in this multi-package repo for more examples of using and building themes:. Next Steps Sub Themes and Overriding This is just the first step and it enables us to experiment with further improvements in userland before merging them into core. Sub-theming, for example, is a critical part of a theming ecosystem that is currently missing from Gatsby. Overriding theme elements is possible on a coarse level right now in userland. If, for example, a theme defines a set of pages using createPage you can define a helper function that will look for the page component first in the user's site and then fall back to the theme's default implementation. Then in our theme's createPage call, you simply use the helper to let the user optionally override the default component. This doesn't allow us to make more granular overrides of different components, but it does allow us to replace the rendering of pages and other whole elements. Component Shadowing, a more granular method of overriding, is already in the works. If you want to be involved in the development of theming for Gatsby, join the Spectrum community for Gatsby themes. I'll also be talking about theming Gatsby at Gatsby Days on Dec 7th covering how Gatsby got here and where theming is going next.
https://v4.gatsbyjs.com/blog/2018-11-11-introducing-gatsby-themes
Description:
------------
PHP's opcache seems to create keys for files it caches based on their filepath (including the cwd when the option opcache.use_cwd is set). When turning on opcache in commonly used hosting environments where users are chrooted, it is very easy to get key collisions, as the full path of a file in a chroot can commonly be /wp-config.php. The file that was accessed on this path first will be stored by opcache and be used by any interpreters executing the same file later on. Even without a chroot it is often easy to predict where files of another user on the same server will be located, and they can still be included, circumventing any file permissions set on these files even if PHP executes as the correct user (figuring out interesting files is made even more trivial if access to the opcache_get_status() function is not restricted by the host). An example is in the "test script" below, which briefly shows my relevant config lines. It'd be neat if opcache could implement a runtime config variable to give to an interpreter, with a value that mashes up the key by prefixing or xor'ing it, without the possibility of being overwritten from within the script. Alternatively it might be possible to use different parts of shm based on a configuration option so the cache is per-user.
Test script:
---------------
# Permissions on both directories and files are set to rwx for user only
# one.php just sets a single variable
root@debian:/home# cat one/one.php
<?php $one = "one"; ?>

# This file tries to include the non-existent "/one.php" in its chroot
root@debian:/home# cat two/two.php
<?php include "/one.php"; print $one; ?>

#
# Request one.php on pool one
root@debian:/home# curl
# Request two.php on pool two
root@debian:/home# curl
one

# That's the content of /one.php which is owned by user one, in its own chroot,
# being served by user two from a different chroot, while user two doesn't even
# have read permissions on the file.

Expected result:
----------------
Users can only access their own files (when configured correctly).

Actual result:
--------------
Users can access other users' files when previously accessed by opcache across chroots, ignoring filesystem permissions.

Correct PHP version. I think the easiest would be to simply add an option to use a file's device+inode in addition to, or instead of, the full path. We have discussed this a few times, but nobody has gotten around to an implementation yet.

I've written a more extensive post with more examples here:

Inodes seem to be a good idea, I think APC does it that way?

Yes, it was one of the features we lost by dropping APC. It has been on our radar for a while to fix, but nobody has gotten to writing the code yet.

4 months for a critical fix... it's bad...

I cannot seem to replicate this... But I have open_basedir enabled. Can somebody verify that open_basedir solves it?

noescape at centrum dot sk: I think open_basedir is useless in a chroot

I can replicate this with php5-fpm 5.5.9+dfsg-1ubuntu4.11. I'm baffled how a huge security / usability problem like this can exist, yet nobody gives a poop about it. We've got a server with multiple sites chrooted using the same directory structure inside the chrooted /.
Obviously this causes fatal problems, as there are conflicts: for example, index.php in those hosts can randomly load from any of them. Because of this, opcache cannot be used, as we can't change the directory structure. Someone, please investigate this!

I can replicate this problem with two VirtualHosts (using open_basedir).

OS: Debian 8.1

php -v
PHP 5.6.14-0+deb8u1 (cli) (built: Oct 4 2015 16:13:10)
Zend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologies
    with XCache v3.2.0, Copyright (c) 2005-2014, by mOo
    with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend Technologies
    with Xdebug v2.2.5, Copyright (c) 2002-2014, by Derick Rethans
    with XCache Optimizer v3.2.0, Copyright (c) 2005-2014, by mOo
    with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo
    with XCache Coverager v3.2.0, Copyright (c) 2005-2014, by mOo

Apache: 2.4.10

VirtualHosts:
-------------
cat 050-opcache-fail.conf
<VirtualHost *:80>
    ServerName app1
    DocumentRoot /var/www/app1/web
    php_admin_value open_basedir /var/www/app1
    <Directory /var/www/app1/web>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        #Order allow,deny
        #Allow from All
        Require all granted
    </Directory>
</VirtualHost>
<VirtualHost *:80>
    ServerName app2
    DocumentRoot /var/www/app2/web
    php_admin_value open_basedir /var/www/app2
    <Directory /var/www/app2/web>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        #Order allow,deny
        #Allow from All
        Require all granted
    </Directory>
</VirtualHost>

Example - PHP apps - typical usage with composer (dummy):
---------------------------------------------------------
App directory structures:
/var/www/
    app1
        vendor
            third
                party
                    lib.php
            autoload.php
        web
            index.php
    app2
        vendor
            third
                party
                    lib.php
            autoload.php
        web
            index.php

web/index.php (same app1 and app2):
-----------------------------------
<?php
require_once __DIR__ . '/../vendor/autoload.php';
echo (new third\party\lib())->getName();
?>

vendor/autoload.php (same app1 and app2):
-----------------------------------------
<?php
require_once __DIR__ . '/third/party/lib.php';
?>

vendor/third/party/lib.php (app1):
----------------------------------
<?php
namespace third\party;
class lib {
    public function getName() {
        return 'good library';
    }
}
?>

vendor/third/party/lib.php (app2):
----------------------------------
<?php
namespace third\party;
class lib {
    public function getName() {
        return 'bad hacked library';
    }
}
?>

Security issue:
---------------
It would be a problem in shared web hosting environments. After a web server restart, just run the bad app2 first.

Workaround:
-----------
opcache.optimization_level=0

Important note to the previous example:
---------------------------------------
PHP is running as an Apache module.

Is this still being considered a bug, or should we just give up on chroot+opcache? This is probably related:

I also find it stunning that - at the very least - PHP doesn't warn about this or even better refuse to start in this configuration. This is a serious security issue and all too easy to overlook. It took me a while to figure out what's actually happening. This can lead to very mysterious errors: fragments or error messages of other unrelated sites on the same server suddenly showing up, and so on. Can anybody confirm that "opcache.optimization_level=0" is really a solid workaround that can be used in production? There should be no information leak whatsoever. No files. No data. No code. No nothing. That's why we have a chrooted environment in the first place. But obviously I'd rather have optimization level 0 than no opcache at all.

I'm concerned about this issue since it has the potential to be a security deal-breaker for Zend OPcache when used on (as an example) a shared-hosting platform. With that said, I'm unable to replicate this using michal_micko's instructions for mod_php and open_basedir configurations.
I used the same exact two VirtualHosts, directory tree, and file contents. I restarted Apache, accessed first and then, but both displayed their proper "bad" and "good" texts respectively.

Ubuntu 14.04.5 LTS (as opposed to Debian 8.1)
Apache 2.4.7 (Ubuntu) (as opposed to 2.4.10, default config plus VirtualHosts)
PHP 5.6.27-1+deb.sury.org~trusty+1 (as opposed to 5.6.14-0+deb8u1, default config)

Possibly relevant:
opcache.enable = On
opcache.optimization_level = 0x7FFFBFFF
opcache.revalidate_path = Off
opcache.use_cwd = On
opcache.validate_timestamps = On

One potentially major difference between my config and michal_micko's is that I do not have any XCache modules or Xdebug loaded. I would think, except perhaps for the PHP version, that is the only major configuration difference between my setup and michal_micko's. For me it looks safe to use PHP 5.6 OPcache + mod_php + open_basedir. When I dump the output of opcache_get_status() I see only full paths for both the "full_path" value and the array keys. I believe this is the use_cwd option doing its job. The only additional "securing" that I see is setting restrict_api to something out of reach or adding opcache_* functions to disable_functions. I'd greatly appreciate it if anyone is able to correct or confirm what I've found.
I'll focus on the second issue: The obvious real-world example is a shared hosting server with multiple CMS sites. Most PHP CMS's store database credentials and other sensitive information in PHP scripts. Thus one malicious user (translation: compromised CMS) typically has full access to databases for other users' CMS's if OPCache is enabled. If anyone doubts this, I can provide working proof of concept exploit scripts targeted at WordPress. Fortunately this doesn't seem to be exploited in the wild on any large scale, but I anticipate PHP malware will start incorporating this technique as soon as it's more widely known. I'm attaching a patch against the 5.6 branch which prepends a unique user identifier (username in Windows, euid elsewhere) to the cache key. This should fix issue 2 in all situations where dl() is not allowed (PHP5 with FPM would still in theory be vulnerable unless dl() is disabled). It's not a perfect fix because it requires the PHP process to act as gatekeeper, essentially a substitute for kernel enforcement of filesystem permissions. This is a problem if any PHP child process with a descriptor for the shared opcache can be subverted, for example with a malicious extension, hence my concern about dl()). It will also fix issue 1 *only* in cases where the chroot environments are running PHP scripts with separate user accounts. If the chroots use a shared web server user for PHP, issue 1 is still a problem. I agree with Rasmus that the script inode should also be added to the key, but I wasn't sure if it would be appropriate to stat the script for its inode in the key generation function due to performance concerns, and being unfamiliar with the extension and PHP itself I wasn't prepared to do this in a more appropriate place. 
I started with 5.6 because I believe this is a fairly serious security vulnerability which many users are unaware of, which isn't adequately explained in the OPCache documentation, and which should IMHO be fixed in a patch release. I'm willing to port it forward to 7.x if there's interest and time allows, but I feel strongly that this patch or something similar should be included in a 5.6 patch release ASAP. Code is only minimally tested. Use at own risk. Apologies for any indentation issues; I did my best to follow the style guide, but existing OPCache code did not. Feedback is welcome. Excuse me but... Wouldn't it be safer, more reliable, less hackish ... to separate the PHP pools of the website ? One pool per site, each pool listening on one port for CGI requests. I mean, we do know for sure, that PHP cannot replace the OS and the configuration to secure webservers in a shared architecture (shared hosting). We've tried things such as safe_mode, open_basedir etc... since the beginning, with no success. Nowadays, with mature OS, mature stacks, and a mature FCGI handler (PHP-FPM), it is easy to build a shared hosting architecture not relying on the PHP language itself to isolate the virtualhosts. Security - of the filesystem here - is the matter of the OS , not PHP. Why don't you create several PHP-FPM pools, and secure them with one Unix user per pool ? Obviously, the OPCache SHM wouldn't be shared here, but that is not what you want : OPCache SHM shouldn't be crossed against several websites. One pool per website = one SHM per website = one unix user per website = we solved every low level security problems, right ? The suggestion as one user per site would have to apply all the way across the stack. So Apache would have to have a single user, and no multi-site, no virtual sites, no resource sharing at all? This would mean full resources for each and every site, which would be extremely wasteful. I have a client with four websites. 
Does the single client need four users? I have several clients but I want to run all clients and all their sites with a single configuration. Apache can do this, PHP can do this, therefore opcache should be able to support such a configuration. I'll address jpauli's points: Not everyone is using FPM. My patch fixes this problem on both FPM and apache2handler, where we can separate users with mod_ruid2. My experience with FPM is limited so I may be speaking out of turn, but when I tried separate FPM pools with separate users, they were still forked from the same parent FPM master process. Correct me if I'm wrong but the OPCache SHM segment is opened in this master process and inherited by the pools as a file descriptor. The multi-user pools still share a single OPCache, and thus they actually aid in bypassing file permissions, rather than fixing the problem. I agree 100% with your points about relying on the lower stack to isolate vhosts and enforce permissions, but the entire point of this bug report is that OPCache breaks this isolation in real-world configurations since it uses a shared cache passed from Apache parent to children, or from FPM master to pools. You stated "Obviously, the OPCache SHM wouldn't be shared here" in a multi-pool configuration. What would such a configuration look like? My testing shows the opposite to be true. Perhaps you meant to suggest multiple FPM master daemons instead of multiple pools? If the OPCache SHM can be initialized at the pool level instead of in the master process (I don't think this is how it works today; I'd love to be proven wrong on this), this is great for FPM users but does nothing for users of other SAPI's. The big caveat in my testing is that most of it was done under cPanel. Both mod_ruid2/mod_php and FPM have the shared OPCache problem under both EA3 and EA4. This is true for both PHP5 and PHP7. 
If you think this is a problem with cPanel's apache2handler and FPM configurations then I can take the issue up with cPanel, but I'd love to see how they could fix this for all SAPIs where OPCache is useful. Clearly the problem isn't limited to cPanel though, based on the other users commenting on this bug. Hopefully this will clarify users' concerns with this bug; from your response I'm frankly not sure you fully understand the problem we're reporting. I agree with "jeff at mcneill dot io" that to isolate vhosts with PHP as it stands today, you'd need entirely separate Apache parents (for apache2handler) or FPM master processes (not just multiple pools).

php-dev - would it not be best (at least when using FPM) to run each pool under a separate user / group account as discussed here: This would seem to satisfy the original suggestions earlier in the thread to leave as much as possible to the OS-level security controls. I mention this because I was wondering if your patches (which are warmly welcomed... I've been posting on this issue for a while now on SF & SE) presume that these types of controls would also be implemented, or perhaps obviate them.

bjh438-git, thank you for your comments. I had previously read that article. The setup described mitigates the security concerns I've outlined for one reason: it recommends disabling OPCache completely. Such a multi-user setup is generally recommended, but as I mentioned in my response to jpauli, it's dangerous when combined with a single SHM cache with a simplistic hash keying scheme which makes no attempt to segregate users. My readings of php-internals archives suggest that OPCache was indeed meant to be usable in multi-user setups, so perhaps the maintainers simply haven't thought through all the implications of the key scheme, though they have discussed its other shortcomings.
I'm working on a brief but more comprehensive advisory which I'll post here and on php-internals soon if we don't get more meaningful engagement on this issue.

jpauli, using separate pools does not fix this issue; the cache is maintained by the master process, which has access to all pools regardless of the permissions of those pools.

First of all, thank you php-dev for bringing this issue back to the table. I'd like to bring in my view as a shared hosting provider. IMHO the two big hosting panel softwares, Plesk and cPanel, are vulnerable to this issue. We currently use cPanel. cPanel provides a different set of options to isolate users from reading other users' files. In addition to the user isolation, it's also possible to isolate your processes further by running them in a chroot environment. This can be easily leveraged by using tools/distros like CloudLinux CageFS, which we use. All these countermeasures to separate users on the process and file system level are voided if the OpCache is accessible without the user boundaries. Our stack is currently not vulnerable to the mentioned issue due to the combination of additional tools and server software that isolate users also on the OpCache level. The magic is a combination of CloudLinux for file system isolation and LiteSpeed for PHP SAPI process and OpCache isolation. Having some insight into the hosting industry, this stack is not the majority. IMHO most cPanel users use the built-in capabilities of the EasyApache stack, which is vulnerable. In the past we've seen some attacks targeting multiple users on the same server. The most effective was: Symlink Bypass. A skilled attacker could easily take over dozens of shared hosts with the help of this vulnerability. Depending on the shared hosting provider, I guess the average number of users per host is approx. 200-1000. Given the current market share of WordPress (easily extractable secrets) of approx.
60%, the impact of a hacked website/CMS/plugin is very high. In an automated manner, an attacker could take over half of all shared hosts just by optimizing for WordPress. This is IMHO a pressing matter for all the other shared hosting providers and customer support staffs out there. I'm in favor of the current solution to fix the issue by appending a contextual part to the path. IMHO the best solution from a security point of view would be the inode information. From an operations point of view, inodes also solve the issue of deploying with symlinks.

dol+list, thank you for your comments. Your observations are consistent with my own experiences, with the exception that I haven't had a chance to test exploitation under CloudLinux/CageFS. I haven't bothered testing with cPanel's "Jail Apache" virtfs feature either, due to unrelated reliability problems with that feature. I'm trying to avoid too much discussion of such platform-specific mitigations because they're out of scope for a PHP bug report, and they shouldn't even be necessary. I feel strongly that this is a PHP bug which needs to be fixed in PHP.

I agree that device+inode should ideally be added to the key scheme, but they alone are insufficient. They are not necessarily private information, and can be read if the permissions of the parent directory allow. This breaks user expectations; if wp-config.php is unreadable for a user, that user should not be able to execute that script, period. This is why I think EUID should be part of the key. I don't like the thought of opcache playing permissions referee, and I think it suggests a fundamental design flaw in the way SHM is being used, but working within the existing SHM design I think EUID is the best way forward. Perhaps a more radical change would be better, but I'm not familiar enough with prior art in PHP opcode caching to know what it is.
Using device+inode is also less straightforward than using EUID, otherwise I'm sure the maintainers would have already done it: it seems inappropriate to stat() for this info during key generation for performance reasons. APC apparently got a stat struct passed from the SAPI so the stat() was already done outside of APC code (does this imply APC cached scripts per compilation unit and not per file, since the web server wouldn't know which files are included? I haven't reviewed APC code in enough detail yet to confirm). For my patch I use EUID simply because there's no undue performance hit and it fixes the cross-user permissions bypass in both "out of the box" and common control panel environments. It's probably not perfect but surely we can agree it's better than the existing code. For anyone using FPM who wants to mitigate before PHP fixes the vulnerability, 2 years ago Mattias Geniar confirmed that separate FPM master daemons is the way to go, and provided example configs. It's cumbersome and inefficient and shouldn't be necessary: php-dev, Thanks for providing further insight into this issue. I'd like to take you up on your offer of a PoC for replicating under mod_php + open_basedir. FPM works as well but I'm particularly interested in the mod_php case. Thanks! kmark937, I've emailed you directly with a PoC exploit script. I'd love to hear about your results when you try it. dol+list, one nitpick with your earlier comments; according to OPCache comments any changes to the key scheme need to be prepended, not appended, due to the way the key delimiters are parsed (I wonder what happens when script filenames contain ':'?). To any PHP project people still reading, would it be possible to reclassify this as a bug report instead of feature request? 
This was opened as a feature request at the suggestion of the original bug reporter in #67481, who withdrew their bug report when they decided that the vulnerable behavior was desirable after they found a workaround by enabling use_cwd. use_cwd does not fix this issue in most cases with common CMS applications because the use_cwd logic is only applied to scripts invoked/included via relative paths, which is less common in real-world web server/CMS environments. So far I've avoided opening yet another duplicate bug tracker item for this behavior, but it absolutely needs to be treated as a bug and not a feature request since OPCache documentation does not warn about it and php-internals discussions dating back to the Optimizer+ days indicate that the maintainers do indeed intend this feature to be usable in multi-user environments. jpauli, I do sincerely appreciate that you took the time to comment on this as a PHP project member. Did you have a chance to review my response? Do you see that multiple FPM pools with separate users to *not* have the "obvious" separate SHM behavior which you seem to think they do? Would you care to try my exploit script? I'm trying to give the PHP project every chance to address the issue before I publish the exploit script, but they've literally had years to address this and show little interest in fixing it. There's nothing novel about the exploit btw; it does nothing different from the PoC configs already given in this request and in bug #67481. I've just tested suexec/mod_fcgid: This configuration seems to be unaffected because separate vhosts' php-cgi processes don't share a common parent PHP process. mod_fcgid plays a similar role to the FPM master, but because it's not PHP and doesn't initialize the opcache, there's no single SHM object being shared. Each vhost gets its own opcache SHM object if I understand correctly. 
Thus mod_fcgid with suexec provides better compartmentalization than FPM, since it has no single parent PHP process from which all others are descended.

To summarize:
- apache/mod_ruid2/mod_php is vulnerable.
- php-fpm is vulnerable (probably regardless of web server).
- apache/suexec/mod_fcgid/php-cgi is not vulnerable.

Perhaps php-fpm needs a long hard look at the order in which it initializes pools vs extensions?

Original reporter here; I still read the updates to this. Thanks php-dev for the patch. It seems to work if just the euid is included, but correct me if I am wrong: yes, the euid gets set after PHP-FPM forks a new pool under a specific user, but just running separate sites chrooted under the same user (say, www-data) would still cause the same issues as described in my original report. Now, that's obviously an unwise way to run any PHP shared hosting environment, but it might be something we want to change as well. I'll take a look at the patch as well, but maybe we can do something closer to inodes in the cache key instead of the effective user id. I'll try to replicate with your patch applied over the weekend :)
The following patch has been added/updated:

Patch Name: validate_permission.diff
Revision: 1479208880
URL:

I've attached a patch to enable optional file permission validation. With the opcache.validate_permission=1 php.ini directive, PHP is going to revalidate readability of cached files using the access() syscall. This directive is going to be disabled by default and should be enabled by shared hosting providers. The additional checks lead to a ~5% slowdown on WordPress. The proposed manipulations with keys (e.g. including the user name or root directory in the key) won't work out of the box, because in some cases opcache doesn't use keys constructed by accel_make_persistent_key(), but uses the full real name instead. I didn't solve the chroot key collision problem yet.

The following patch has been added/updated:

Patch Name: bug69090.diff
Revision: 1479222667
URL:

The new attached patch should completely fix both problems. It modifies the values of the hash function, XOR-ing them with a value constructed from the root inode number. This value is calculated once per request, using stat("/") at request startup, if opcache.validate_root is enabled (disabled by default).

The following patch has been added/updated:

Patch Name: bug69090.diff
Revision: 1479224278
URL:

Automatic comment on behalf of dmitry@zend.com
Revision: ;a=commit;h=ecba563f2fa1e027ea91b9ee0d50611273852995
Log: Fixed bug #69090 (check cached files permissions)

The issue in question appears to be fixed; thanks Dmitry for resolving this 2-year-old security and stability problem. Nevertheless, IMO the current solution looks more like a workaround for a larger problem: there's complex code (for which multiple bugs are fixed in half of the releases) that runs outside the chroot environment with complete visibility of the entire filesystem, making it a perfect escape route (which was actually demonstrated with the current bug).
IMO, the appropriate fix would be to initialize everything that could access the filesystem (including internal functionality & all plug-ins) AFTER performing chroot. Only the logic responsible for loading the configs, libraries and modules should be placed in the pre-chroot portion of the daemon (to avoid placing their files in the chroot environment), but NOT their parsing (for the per-pool configs), initialization (for the modules) and management, which should be performed in the isolated chrooted environments, a separate instance for each chrooted pool. If the current logic exists to save some memory that could be shared between the chrooted pools, I do believe that security should not be sacrificed for performance or lower resource usage, especially nowadays with hardware being so cheap.

PHP devs, please let me know if it's appropriate to open a new bug (type: security) for tracking this change, or if this one should be reopened, or if you consider this issue to have no merit to expect its implementation some day.
https://bugs.php.net/bug.php?id=69090&edit=1
coding simple bayes classifier in java. need help with the algo

karthik raghunathan (Greenhorn, Joined: Nov 12, 2011, Posts: 10) posted Feb 06, 2012 03:55:31

edit: deleted original post, removed links to github, copied code here instead. corrected spelling.

My naive Bayes classifier started out as a spam filter and now has been recruited to classify whether a text is by Dickens or Twain. First of all, would this be the right forum to ask this question? Second, it doesn't work very well. Can anyone help me correct the algo? I sorta copied some of it from a shiffman.net tutorial, which sorta uses the approach of Paul Graham's "A Plan for Spam".

ps: the code is not in OOP style, it's more or less procedural. Is this a problem?
```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.jsoup.Jsoup;

public class Counter {

    static Map<String, Integer> good = new HashMap<String, Integer>();
    static Map<String, Integer> bad = new HashMap<String, Integer>();
    static String[] specialCharacters = { ",", "#", ";", "\"", "\'", };
    static String empty = "";

    static void print(String s) {
        System.out.println(s);
    }

    static void store(String fname, String data) {
        try {
            FileOutputStream fos = new FileOutputStream(new File(fname));
            fos.write(data.getBytes());
            fos.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    static String getFile(String path) {
        try {
            FileInputStream fis = new FileInputStream(new File(path));
            byte[] b = new byte[fis.available()];
            fis.read(b);
            fis.close();
            fis = null;
            return new String(b);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    static String getURL(String url) {
        try {
            URL u = new URL(url);
            URLConnection uc = u.openConnection();
            InputStream is = uc.getInputStream();
            byte[] b = new byte[is.available()];
            is.read(b);
            is.close();
            return new String(b);
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    static String takeOutSpecialCharacters(String s) {
        for (String p : specialCharacters) {
            s = s.replaceAll(p, empty);
        }
        return s.toLowerCase();
    }

    static int count(Map<String, Integer> map) {
        int count = 0;
        for (Integer i : map.values()) {
            count += i;
        }
        // System.out.println(count);
        return count;
    }

    static void train(String text, boolean ham) {
        String[] tokens = text.split(" ");
        Map<String, Integer> map = ham ? good : bad;
        for (String token : tokens) {
            token = takeOutSpecialCharacters(token);
            if (map.containsKey(token)) {
                map.put(token, map.get(token) + 1);
            } else {
                map.put(token, 1);
            }
        }
    }

    static float calculateProbability(String s) {
        s = takeOutSpecialCharacters(s);
        float pSpam = 0f;
        float goodRatio = 0;
        if (good.containsKey(s)) {
            goodRatio = (float) (((float) good.get(s)) / (float) count(good));
        }
        float badRatio = 0;
        if (bad.containsKey(s)) {
            badRatio = (float) (((float) bad.get(s)) / (float) count(bad));
        }
        if (goodRatio + badRatio > 0) {
            pSpam = (badRatio / (goodRatio + badRatio));
        }
        if (pSpam > 0.99f) pSpam = 0.99f;
        if (pSpam < 0.1f) pSpam = 0.1f;
        return pSpam;
    }

    static float interesting(Float f) {
        return Math.abs(0.5f - f);
    }

    static float mul(Float[] f, boolean oneMinus) {
        float a = 1.0f;
        for (float f1 : f) {
            if (oneMinus) {
                a = a * (1 - f1);
            } else {
                a = a * f1;
            }
        }
        return a;
    }

    static float classify(String text) {
        String[] tokens = text.split(" ");
        Set<String> set = new HashSet<String>();
        for (String token : tokens) {
            set.add(takeOutSpecialCharacters(token));
        }
        List<Float> problist = new ArrayList<Float>();
        for (String token : set) {
            float f = calculateProbability(token);
            if (f > 0.5f) {
                // System.out.println("" + feature + ":" + f);
                problist.add(f);
            }
        }
        Collections.sort(problist, new Comparator<Float>() {
            @Override
            public int compare(Float f1, Float f2) {
                return (int) (100 * interesting(f2) - 100 * interesting(f1)); // descending order
            }
        });
        Float[] probabilities = problist.subList(0, 16).toArray(new Float[15]);
        float product = mul(probabilities, false);
        float oneMinusTerm = mul(probabilities, true);
        print("[]" + product + "," + oneMinusTerm);
        return (product / (product + oneMinusTerm));
    }

    static String[] pullNStore(String nub, String[] files) {
        List<String> names = new ArrayList<String>();
        for (String file : files) {
            String name = new File(file).getName();
            name = name.substring(0, name.indexOf("."));
            names.add(nub + name);
            store(nub + name, Counter.getURL(file));
            print("downloading :" + file + " as " + nub + name);
        }
        return names.toArray(new String[names.size()]);
    }

    public static void main(String[] args) {
        // book URLs were not preserved in the archived copy of this post
        String[] dickens = { "", "", "", "", "", "", "", "", "", "", "", "", "" };
        String[] twain = { "", "", "", "", "", "", "", "", "", "", "", "", "",
                "", "", "", "", "", "", "", "", "", "", "", "" };
        String[] storedDickens = pullNStore("dickens", Arrays.copyOfRange(dickens, 0, 1));
        String[] storedTwain = pullNStore("twain", Arrays.copyOfRange(twain, 0, 1));
        pullNStore("twain", Arrays.copyOfRange(twain, 6, 7));
        for (String s : storedDickens) {
            train(getFile(s), true);
            System.out.println("trained,dickens:" + s);
        }
        for (String s : storedTwain) {
            train(getFile(s), false);
            System.out.println("trained,twain:" + s);
        }
        float f = classify(getFile("twain1837"));
        System.out.println("is a twain book :" + f);
    }
}
```

Dave Trower (Ranch Hand), posted Feb 06, 2012:

I read an article on how spam filters work but my experience is limited to that one article. You need to create a dictionary that, for each word used in any book, returns the probability that the book is Twain or Dickens. I think you do this by counting the frequency of each word. Then when you are given a sample book, you look up the probabilities for each word from the dictionary and then apply the Bayesian algorithm. Let me know if this helps.

karthik raghunathan, posted Feb 07, 2012:

Looks like I read the same article too. Except I don't know what I am doing wrong in my code 'cos I don't know what output to expect.

Dave Trower, posted Feb 08, 2012:

I would suggest you google the words "bayesian for spam".
I do not have the original article but this is a good one: webpage

Here is a quote from the article:

"This word probability is calculated as follows: If the word 'mortgage' occurs in 400 of 3,000 spam mails and in 5 out of 300 legitimate emails, for example, then its spam probability would be 0.8889 (that is, [400/3000] divided by [5/300 + 400/3000])."

So now in the dictionary, the word mortgage's probability is 0.8889. So if the word mortgage is used in an e-mail, there is an 88.89% chance the e-mail is spam. However, the bayesian filter looks at all words in an e-mail, so the total probability of an e-mail being spam would change based on the other words. In your case, you build a dictionary based on how often a word appears in which of the two works. I think the output of your program should be something like: "There is a 99.3% chance the book I just looked at is Twain."

karthik raghunathan, posted Feb 08, 2012:

Okay, that is pretty much what I am doing, except for the final part, where I combine probabilities for each of the words. I am making two dictionaries - one for spam and one for ham - then I calculate spam probability by doing rSpam / (rSpam + rHam). So I _am_ on the right track. Let me see if I am doing something wrong in the combining of probabilities ..... I also return a default of 0.1 if the probability is 0. That might be a downer.. I'll update in a day.

karthik raghunathan, posted Feb 09, 2012:

I'm going to try with different data. Maybe Twain and Dickens aren't too different ...... marking this resolved. Thanks for all the help. really appreciate it.
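The arithmetic in the quoted "mortgage" example, and the usual Graham-style way of combining per-word probabilities (the same product / one-minus-product ratio the posted `classify` method uses), can be sanity-checked with a short sketch. Python is used here for brevity rather than Java, and all names and the combined-word numbers are illustrative, not taken from the thread's code:

```python
def word_probability(spam_hits, spam_total, ham_hits, ham_total):
    """P(spam | word) estimated from per-corpus word frequencies."""
    spam_ratio = spam_hits / spam_total
    ham_ratio = ham_hits / ham_total
    return spam_ratio / (spam_ratio + ham_ratio)

def combine(probs):
    """Graham-style combination: product / (product + product of (1 - p))."""
    product = 1.0
    one_minus = 1.0
    for p in probs:
        product *= p
        one_minus *= (1.0 - p)
    return product / (product + one_minus)

p = word_probability(400, 3000, 5, 300)
print(round(p, 4))  # → 0.8889, matching the quoted example
print(combine([0.8889, 0.7, 0.6]))
```

Note how combining several moderately spammy words pushes the overall score well above any individual word's probability, which is why clamping and the "most interesting words" sort matter.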
http://www.coderanch.com/t/566584/Programming/coding-simple-bayes-classifier-java
TeensyTimerTool - Easy to use interface to the Teensy Timers

I just published TeensyTimerTool on GitHub. The library provides an easy to use interface to the Teensy timers. Currently it works on the T4.0 and uses the GPT (2x) and QUAD (16x) timer modules/channels. Additionally it provides up to 20 software based timers with exactly the same interface. Extension to the PIT modules and the other boards is planned. All timers can be used in periodic and one-shot mode. You can either specify which hard/software timer you want to use; or - if you don't care - you can get the next free timer from a pool.

Here is a quick example how to use a timer from the pool and set it up in one-shot mode:

```cpp
#include "TeensyTimerTool.h"
using namespace TeensyTimerTool;

Timer t1;

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);
    t1.beginOneShot(callback);
}

void loop() {
    digitalWriteFast(LED_BUILTIN, HIGH);
    t1.trigger(10'000); // trigger the timer with 10ms delay
    delay(500);
}

void callback() // switch off LED
{
    digitalWriteFast(LED_BUILTIN, LOW);
}
```

If you want to use GPT1 instead, all you need to do is to replace the line

```cpp
Timer t1;
```

by

```cpp
Timer t1(GPT1)
```

The gitHub readme contains more examples demonstrating the usage.

Callbacks

In addition to the usual pointer-to-void functions, all timers accept non-static member functions, lambdas, functors etc. as callbacks. This makes it very easy to pass state to the member function or to build classes which embed timers and callbacks. Examples here. If for some reason you prefer a plain vanilla function pointer interface you can configure the library accordingly.

Performance

I tried to optimize the performance as much as possible. Due to the high speed of the ARM core, the performance of the software timers is pretty amazing and actually beats the hardware timers if you don't generate too many instances. Any further optimization input would be very welcome.

Status

The code base is quite new and probably contains bugs and improvement possibilities. As always: any feedback, bug reports and general rants are very welcome.
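For readers without a Teensy at hand, the trigger/callback pattern of a one-shot timer can be mimicked in plain Python with `threading.Timer`. This is a conceptual sketch of the semantics only (register a callback once, fire it once per trigger, after a delay); it has none of the hardware timer's precision, and all class and method names here are made up for illustration:

```python
import threading
import time

class OneShot:
    """Toy one-shot timer: begin_one_shot() registers a callback,
    trigger(us) schedules it to fire exactly once after a delay in microseconds."""

    def begin_one_shot(self, callback):
        self._callback = callback

    def trigger(self, delay_us):
        # threading.Timer runs the callback once on a background thread
        threading.Timer(delay_us / 1_000_000, self._callback).start()

fired = []
t1 = OneShot()
t1.begin_one_shot(lambda: fired.append(time.monotonic()))

start = time.monotonic()
t1.trigger(10_000)   # 10 ms delay, analogous to t1.trigger(10'000) above
time.sleep(0.05)
print(len(fired))    # → 1 (the callback ran exactly once)
```

The key property shared with the hardware one-shot mode is that the callback fires once per trigger, no matter how long the main loop keeps running afterwards.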
https://forum.pjrc.com/threads/59112-TeensyTimerTool?s=ef53001f7cef457e12801ca32ce807d1
Hi, I have a variant of inception v3 network, which is pre-trained and fixed. Now I want to do the inference for that network. I tried both tensorflow and pytorch.

Tensorflow: I saved the network as graphdef, and use it for inference. I can use 128 batch size for inference.

Pytorch: I put the model in evaluation mode, but I cannot use 128 batch size, which will give me out of memory error. I don't know why this happens. My guess is that tensorflow may not cache the intermediate feature maps in the graphdef mode, but pytorch may do.

I also had other problems related to the memory usage. My recent experience suggests that pytorch often uses higher gpu memory than tensorflow. In many cases, I have to reduce batch size, which may or may not solve my problem. Are there any suggestions on how to use pytorch more memory efficiently?

You should make sure your input `Variable`s have `volatile=True`. That tells `torch.autograd` not to build a graph for backpropagation and that it can free unneeded tensors during the forward pass. I'd recommend reading the autograd notes. Also, you might want to try `torch.backends.cudnn.benchmark = True` to improve the speed.

Thanks for the pointer. I think that my problem might be a little different. I have two networks A and B, the output of A will be fed to network B, and the loss is defined on the top of network B. The network B is fixed and network A is to be optimized. So I need network B to back propagate the gradients from loss to A, but these gradients do not need to be saved since network B doesn't need to be updated. I am wondering, in this case, can I set all variables in B to be volatile=True? Is there some way for me to know whether the gradients of parameters in B are buffered or not?
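To get intuition for why keeping intermediate feature maps alive dominates memory at these batch sizes, a quick back-of-envelope helps. The tensor shape below is made up for illustration; it is not taken from the actual Inception v3 architecture:

```python
def feature_map_bytes(batch, channels, height, width, dtype_bytes=4):
    """Memory of a single float32 activation tensor, in bytes."""
    return batch * channels * height * width * dtype_bytes

MB = 1024 * 1024
one_map = feature_map_bytes(128, 192, 35, 35)
print(one_map / MB)  # ≈ 114.8 MB for one activation at batch size 128
```

A deep network keeps dozens of such activations alive when it builds a graph for backpropagation, so the gap between graph-building and volatile inference adds up to gigabytes very quickly.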
In your particular case, you need to set B's parameters to not require gradients. Do this:

```python
# let's assume input is a Tensor, A and B are networks
optimizerA = optim.SGD(A.parameters(), ...)

# freeze B's params
for p in B.parameters():
    p.requires_grad = False

iv = Variable(input)
optimizerA.zero_grad()
out1 = A(iv)
out2 = B(out1)
out2.backward()
optimizerA.step()
```

(The original post wrote `optimizer.step()`; only `optimizerA` is defined, so that is corrected here.)

Thanks. This is what I did. One quick question: if I set requires_grad = False, does it save extra gpu memory for me?

Yes, it saves memory at certain places.

OK, I felt a little frustrated. I did everything I could to reduce GPU memory. But my pytorch code is still more than twice as memory-consuming as my tensorflow implementation. Because the code is confidential, I cannot release it in public. My network B is an inception v3 network. I found that after I passed the output of A to B, the GPU memory increases 4 GB. The batch size is only about 20.
In finetuning example, tensorflow batch size is more double than pytorch with resnet152 model and same machine.I don't exactly know but I saw pytorch error's occuring in "input.new()" in batchnorm, so I guess two code treats different intermediate buffer with different ways.if tensorflow use cpu memory to allocate intermediate varaible, goodness would be memory efficiency, but badness would be speed down by frequent cuda memory copy. Thank you for all the helpful replies. The code for both pytorch and tensorflow implementation is a little complex, which certainly cannot be used as a fair comparison. I don't have too much time to extract a small part of the code for debugging purpose. Sorry for not providing helpful feedbacks on this thread. This is my test codes for comparing pytorch and tensorflow Below codes is a pytorch code of fintuning flower examplein my machine gtx980ti , the batch size of pytorch, 8 is available, but 16 is not import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision import datasets, transforms from torch.autograd import Variable import matplotlib.pyplot as plt import numpy as np is_cuda = torch.cuda.is_available() # if cuda is avaible, True traindir = './flower_photos' normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) batch_size = 16 train_loader = torch.utils.data.DataLoader( datasets.ImageFolder(traindir, transforms.Compose([ transforms.RandomSizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize,])), batch_size=batch_size, shuffle=True, num_workers=4) cls_num = len(datasets.folder.find_classes(traindir)[0]) test_loader = torch.utils.data.DataLoader( datasets.ImageFolder(traindir, transforms.Compose([ transforms.RandomSizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize,])), batch_size=batch_size, shuffle=True, num_workers=1) model = torchvision.models.resnet152(pretrained = True) ### don't update 
model parameters for param in model.parameters() : param.requires_grad = False #modify last fully connected layter model.fc = nn.Linear(model.fc.in_features, cls_num) fc_parameters = [ {'params': model.fc.parameters()}, ] optimizer = torch.optim.Adam(fc_parameters, lr=1e-4, weight_decay=1e-4) loss_fn = nn.CrossEntropyLoss() if is_cuda : model.cuda(), loss_fn.cuda() # trainning model.train() train_loss = [] train_accu = [] i = 0 for epoch in range(1): for image, target in train_loader: image, target = Variable(image.float()), Variable(target) if is_cuda : image, target = image.cuda(), target.cuda() output = model(image) loss = loss_fn(output, target) optimizer.zero_grad() loss.backward() optimizer.step() pred = output.data.max(1)[1] accuracy = pred.eq(target.data).sum()/batch_size train_loss.append(loss.data[0]) train_accu.append(accuracy) if i % 300 == 0: print(i, loss.data[0]) i += 1 below is tensorflow-slm code, tensorflow implementation images, _, labels = load_batch(dataset, batch_size=256, height=image_size, width=image_size) in same network model, same machine(gtx980ti),the batch size of tensorflow, 256 is available, 512 is not Thanks! We'll look into that! I checked it out and there was a problem in autograd indeed. We haven't been freeing some buffers soon enough. I've opened a PR with a fix. After that commit batch_size 256 uses the same amount of memory (4,7GB on my GPU) as batch_size 16 before. Thanks for posting the repro! Thank you for a prompt response! I'll love pytorch more than yesterday. Thanks, it also solves my problem.
https://discuss.pytorch.org/t/high-gpu-memory-demand-for-pytorch/669
CC-MAIN-2017-26
refinedweb
1,157
51.75
CodePlexProject Hosting for Open Source Software This is my code: def memoize(f): cache = dict() def memof(x): try: return cache[x] except: cache[x] = f(x) return cache[x] return memof n,m = 8,3 init = frozenset((x,y) for x in range(n) for y in range(m)) def moves(board): return [frozenset((x,y) for (x,y) in board if x < px or y < py) for px,py in board] @memoize def wins(board): if not board: return True return any(not wins(move) for move in moves(board)) print wins(init) When running this in Python 2.6 or 2.7 under PyTools I get a GeneratorExit exception. When running it with IronPython in PyTools or in Python from the console I do not get an exception. What am I doing wrong? Jules Nothing; CPython is using GeneratorExit as a control flow exception; arguably we should not break on unhandled GeneratorExit by default. You can stop PyTools from breaking on GeneratorExit by going to Debug->Exceptions->Python Exceptions and unchecking GeneratorExit in the list. In general, if the debugger is breaking on an exception which you would like to ignore, you can click "Break" in the exception dialog box, then go to Debug->Exceptions->Python Exceptions and uncheck that exception. Then hit F5 to continue and you won't break on those exceptions any more. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://pytools.codeplex.com/discussions/266780
CC-MAIN-2017-26
refinedweb
265
69.92
Neo4j Bolt driver for Python Project Description The Official Neo4j Driver for Python supports Neo4j 3.0 and above and Python versions 2.7, 3.4, 3.5 and 3.6. Quick Example from neo4j.v1 import GraphDatabase driver = GraphDatabase.driver("bolt:/") Logging The driver provides a built-in logging. The following example code enables debug logging and prints out logs at stdout: from neo4j.util import watch import logging from sys import stdout watch("neo4j.bolt", logging.DEBUG, stdout) Installation To install the latest stable version, use: pip install neo4j-driver For the most up-to-date version (generally unstable), use: pip install git+ Other Information - Neo4j Manual - Neo4j Quick Reference Card - Example Project - Driver Wiki (includes change logs) Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/neo4j-driver/
CC-MAIN-2018-17
refinedweb
145
58.48
Overview line to get a specific version. In this case 1.12.5. $ cd ~ $ wget 02. Unpack the installer to the /usr/local directory. Note that the target directory is just a suggestion. $ sudo tar -C /usr/local -xzf go1.12.5.linux-amd64.tar.gz 03. Update the environment variable to include the go binaries included in the system path. $ export PATH=$PATH:/usr/local/go/bin 04. To test that Go is installed correctly, we create a sample program. $ cd ~ $ mkdir -p go/src/hello-geek $ nano go/src/hello-geek/hello.go Set the content as follow package main import "fmt" func main() { fmt.Printf("Hello, geek!\n") } 05. Compile the source code $ cd go/src/hello-geek/ $ go build 06. Run the generate executable file. $ ./hello-geek
https://aster.cloud/2019/05/22/how-to-install-go-in-ubuntu/
CC-MAIN-2020-50
refinedweb
131
55.1
images. Today we are going to take the next step and use our detected facial landmarks to help us label and extract face regions, including: - Mouth - Right eyebrow - Left eyebrow - Right eye - Left eye - Nose - Jaw To learn how to extract these face regions individually using dlib, OpenCV, and Python, just keep reading. Looking for the source code to this post? Jump right to the downloads section. Detect eyes, nose, lips, and jaw with dlib, OpenCV, and Python Today’s blog post will start with a discussion on the (x, y)-coordinates associated with facial landmarks and how these facial landmarks can be mapped to specific regions of the face. We’ll then write a bit of code that can be used to extract each of the facial regions. We’ll wrap up the blog post by demonstrating the results of our method on a few example images. By the end of this blog post, you’ll have a strong understanding of how face regions are (automatically) extracted via facial landmarks and will be able to apply this knowledge to your own applications. Facial landmark indexes for face regions The facial landmark detector implemented inside dlib produces 68 (x, y)-coordinates that map to specific facial structures. These 68 point mappings were obtained by training a shape predictor on the labeled iBUG 300-W dataset. Below we can visualize what each of these 68 coordinates map to: Figure 1: Visualizing each of the 68 facial coordinate points from the iBUG 300-W dataset (higher resolution). Examining the image, we can see that facial regions can be accessed via simple Python indexing (assuming zero-indexing with Python since the image above is one-indexed): - The mouth can be accessed through points [48, 68]. - The right eyebrow through points [17, 22]. - The left eyebrow through points [22, 27]. - The right eye using [36, 42]. - The left eye with [42, 48]. - The nose using [27, 35]. - And the jaw via [0, 17]. 
These mappings are encoded inside the FACIAL_LANDMARKS_IDXS dictionary inside face_utils of the imutils library: Using this dictionary we can easily extract the indexes into the facial landmarks array and extract various facial features simply by supplying a string as a key. Visualizing facial landmarks with OpenCV and Python A slightly harder task is to visualize each of these facial landmarks and overlay the results on an input image. To accomplish this, we’ll need the visualize_facial_landmarks function, already included in the imutils library: Our visualize_facial_landmarks function requires two arguments, followed by two optional ones, each detailed below: - image : The image that we are going to draw our facial landmark visualizations on. - shape : The NumPy array that contains the 68 facial landmark coordinates that map to various facial parts. - colors : A list of BGR tuples used to color-code each of the facial landmark regions. - alpha : A parameter used to control the opacity of the overlay on the original image. Lines 45 and 46 create two copies of our input image — we’ll need these copies so that we can draw a semi-transparent overlay on the output image. Line 50 makes a check to see if the colors list is None , and if so, initializes it with a preset list of BGR tuples (remember, OpenCV stores colors/pixel intensities in BGR order rather than RGB). We are now ready to visualize each of the individual facial regions via facial landmarks: On Line 56 we loop over each entry in the FACIAL_LANDMARKS_IDXS dictionary. For each of these regions, we extract the indexes of the given facial part and grab the (x, y)-coordinates from the shape NumPy array. Lines 63-69 make a check to see if we are drawing the jaw, and if so, we simply loop over the individual points, drawing a line connecting the jaw points together. Otherwise, Lines 73-75 handle computing the convex hull of the points and drawing the hull on the overlay. 
The last step is to create a transparent overlay via the cv2.addWeighted function: After applying visualize_facial_landmarks to an image and associated facial landmarks, the output would look similar to the image below: Figure 2: A visualization of each facial landmark region overlaid on the original image. To learn how to glue all the pieces together (and extract each of these facial regions), let’s move on to the next section. Extracting parts of the face using dlib, OpenCV, and Python Before you continue with this tutorial, make sure you have: - Installed dlib according to my instructions in this blog post. - Have installed/upgraded imutils to the latest version, ensuring you have access to the face_utils submodule: pip install --upgrade imutils From there, open up a new file, name it detect_face_parts.py , and insert the following code: The first code block in this example is identical to the one in our previous tutorial. We are simply: - Importing our required Python packages (Lines 2-7). - Parsing our command line arguments (Lines 10-15). - Instantiating dlib’s HOG-based face detector and loading the facial landmark predictor (Lines 19 and 20). - Detecting faces in our input image (Line 28). Again, for a more thorough, detailed overview of this code block, please see last week’s blog post on facial landmark detection with dlib, OpenCV, and Python. Now that we have detected faces in the image, we can loop over each of the face ROIs individually: For each face region, we determine the facial landmarks of the ROI and convert the 68 points into a NumPy array (Lines 34 and 35). Then, for each of the face parts, we loop over them and on Line 38. We draw the name/label of the face region on Lines 42 and 43, then draw each of the individual facial landmarks as circles on Lines 47 and 48. 
To actually extract each of the facial regions we simply need to compute the bounding box of the (x, y)-coordinates associated with the specific region and use NumPy array slicing to extract it: Computing the bounding box of the region is handled on Line 51 via cv2.boundingRect . Using NumPy array slicing we can extract the ROI on Line 52. This ROI is then resized to have a width of 250 pixels so we can better visualize it (Line 53). Lines 56-58 display the individual face region to our screen. Lines 61-63 then apply the visualize_facial_landmarks function to create a transparent overlay for each facial part. Face part labeling results Now that our example has been coded up, let’s take a look at some results. Be sure to use the “Downloads” section of this guide to download the source code + example images + dlib facial landmark predictor model. From there, you can use the following command to visualize the results: Notice how my mouth is detected first: Figure 3: Extracting the mouth region via facial landmarks. Followed by my right eyebrow: Figure 4: Determining the right eyebrow of an image using facial landmarks and dlib. Then the left eyebrow: Figure 5: The dlib library can extract facial regions from an image. Next comes the right eye: Figure 6: Extracting the right eye of a face using facial landmarks, dlib, OpenCV, and Python. Along with the left eye: Figure 7: Extracting the left eye of a face using facial landmarks, dlib, OpenCV, and Python. And finally the jawline: Figure 8: Automatically determining the jawline of a face with facial landmarks. As you can see, the bounding box of the jawline is m entire face. The last visualization for this image are our transparent overlays with each facial landmark region highlighted with a different color: Figure 9: A transparent overlay that displays the individual facial regions extracted via the image with facial landmarks. 
Let’s try another example: This time I have created a GIF animation of the output: Figure 10: Extracting facial landmark regions with computer vision. The same goes for our final example: Figure 11: Automatically labeling eyes, eyebrows, nose, mouth, and jaw using facial landmarks. Summary In this blog post I demonstrated how to detect various facial structures in an image using facial landmark detection. Specifically, we learned how to detect and extract the: - Mouth - Right eyebrow - Left eyebrow - Right eye - Left eye - Nose - Jawline This was accomplished using dlib’s pre-trained facial landmark detector along with a bit of OpenCV and Python magic. At this point you’re probably quite impressed with the accuracy of facial landmarks — and there are clear advantages of using facial landmarks, especially for face alignment, face swapping, and extracting various facial structures. …but the big question is: “Can facial landmark detection run in real-time?” To find out, you’ll need to stay tuned for next week’s blog post. To be notified when next week’s blog post on real-time facial landmark detection is published, be sure to enter your email address in the form below! See you then. Dear Adrian, You have seriously got a fan man, amazing explanations. It’s something like now i do wait for your new upcoming blog posts just as i do wait for GOT new season. Actually you are the one who make this computer vision concept very simple, which is otherwise not. Respect. Thanks and Regards Neeraj Kumar Thank you Neeraj, I really appreciate that. Comments like these really make my day 🙂 Keep smiling and keep posting awesome blog posts. Thanks Neeraj! hi adrian thanks alot for your sweet explanation but if you add maths behind some of technique you are using just like face alignment … i am little bit confused in angle finding Dear Dr Adrian, There is a lot to learn in your blogs and I thank you for these blogs. I hope I am not off-topic. 
Recently in the news there was a smart phone that could detect whether the image of a face was from a real person or a photograph. If the image was that of a photograph the smart phone would not allow the user to use the phone’s facility. To apply my question to today’s blog in detecting eyes, nose and jaw, is there a way to tell whether the elements of the face can be from a real face or a photo of a face? Thank you Anthony of Sydney NSW There are many methods to accomplish this, but the most reliable is to use stereo/depth cameras so you can determine the depth of the face versus a flat 2D space. As for the actual article you’re referring to, I haven’t read it so it would be great if you could link to it. Dear Dr Adrian, I apologise by forgetting to put the word ‘not’ between. It should read “there was a smart phone that could not detect whether the image of a face was from a real person or a photograph. Similar article, with a statement from Samsung saying that facial recognition currently “..cannot be used to authenticate access to Samsung Pay or Secure Folder….” Solution may well that your authentication system may well need two cameras for 3D or more ‘clever’ 2D techniques such that the authentication system cannot be ‘tricked’. Regards Anthony of Sydney NSW you can detect beautiful lady as well 🙂 That beautiful lady is my fiancée. Excellent . simple and detail code instructions. Can you more details on how to define the facial part. Definition of eye/nose/jaw….. Thanks Hi Leena — what do you mean by “define the facial part”? Hello Adrian, I tried your post for detecting facial features, but it gives me a error saying: RuntimeError: Unable to open shape_predictor_68_face_landmarks.dat where could i have been gone wrong? Thanks in advance Make sure you use the “Downloads” section of this blog post to download the shape_predictor_68_face_landmarks.datand source code. From there the example will work. 
hello , so basiclly if you dow,loaded “shape_predictor_68_face_landmarks.dat.bz2” with the wget method , you need to unzip it bzip2 -d filename.bz2 or bzip2 -dk filename.bz2 if you want to keep the original archive After –shape-predictor, you need to type in the path of the .dat file. You can try the absolute path of your .dat file. I met the same problem and it has been solved. Hi adrian, how can this be used for video instead of the image as argument. I will be demonstrating how to apply facial landmarks to video streams in next week’s blog post. First of all, nice blog post. Keep up doing this, i learned a lot about computer vision in a limited amount of time. I will buy definitely your new book about deep learning. There is a small error in the face_utils.py of the imutils library In the definition of FACIAL_LANDMARKS_IDXS (“nose”, (27, 35)), Thus should be (“nose”, (27, 36)), As the nose should contain 9 points, in the existing implementation this is only 8 points This can be seen in the example images too. Good catch — thanks Wim! I’ll make sure this is change is made in the latest version of imutilsand I’ll also get the blog post is updated as well. This is amazing. I just wanted to ask one question. If you are running a detector on these images coming from polling a video stream, can we say you are tracking the facial features? Is it true that tracking can be implemented by implementing an “ultra fast” detector on every frame of a video stream. Again, thanks a lot for these amazing tutorials. “Tracking” algorithms by definition try to incorporate some other extra temporal information into the algorithm so running an ultra fast face detector on a video stream isn’t technically a true tracking algorithm, but it would give you the same result, provided that no two faces overlap in the video stream. Hi Adrian, Do you have a knowledge how to detect and change the hair colour? Daamn.. ENGINEER WILL ALWAYS WIN! 
Dear Dr Adrian, In figure 10 “Extracting facial landmark regions with computer vision,” how is it that the program could differentiate between the face of a human and of a non-human. This is contrast to figure 11 where there are two humans where the algorithm could detect two humans. Thank you, Anthony of Sydney NSW The best way to handle determining if a face is a “real” face or simply a photo of a face is to apply face detection + use a depth camera so you can compute the depth of the image. If the face is “flat” then you know it’s a photograph. Dear Dr Adrian, I think I should have been clearer. The question was not about distinguishing a fake from a real which you addressed earlier. I should have been more direct. In figure 10, there is a picture of you with your dog. How does the algorithm make the distinction between a human face, yours and a non-human face, the dog. Or put it another way, how did the algorithm make the distinction between a human face and a dog. In other words how did the algorithm detect that the dog’s face was not human. This is in contrast to figure 11, there is a picture of you with your special lady. The algorithm could detect that there were two human faces present in contrast to figure 10 with one human face and one non-human face. I hope I was clearer I thank you for your tutorial/blogs, Regards Anthony, Sydney NSW Anthony from Sydney nSW To start, I think it would be beneficial to understand how object detectors work. The dlib library ships with an object detector that is pre-trained to detect human faces. That is why we can detect human faces in the image but not dog faces. Dear Dr Adrian, Thank you for the link. The article provides a quick review of the various object detection techniques and how the early detection methods for example using Haar wavelets produces a false positive as demonstrated by the soccer player’s face and part of the soccer field’s side advertising being detected as a face. 
The article goes on a brief step-by-step guide on object detection based on papers by Felzenszwalb et al. and Tomasz. A lot to learn Anthony of Sydney NSW Australia Thank you very much Dr. Adrian! All your blogs are amazing and timely. However, this one is so special to me!! I am enjoying all your blogs! Keep it up! Thanks Hailay! Hi,Adrian. Your work is amazing and very useful to me. I’m a undergraduate student and l’m learning things about opencv and computer vision. In my graduation project, I want to finish a program to realize simple virtual makeup. I’m a lazy girl, I want to know what will I look if I put on makeup. I have an idea that I want to draw different colors to different parts of the face,like red color to lips or pink color to cheek or something like that. Now, I can detect 65 points of one face using the realtime camera. I’m writing to ask you, using your way, can I realize my virtual makeup program? And I want to know if you have any good ideas about my virtual makeup program. Your advice will be welcome and appreciate. best wishes! Helen Yes, this is absolutely possible. Once you have detected each facial structure you can apply alpha blending and perhaps even a bit of seamless cloning to accomplish this. It’s not an easy project and will require much research on your part, but again, it’s totally possible. Thank you Adrian. I’ll try my best to finish it. Hi Adrian, I am currently working on creating a virtual makeup. I am wondering if you know of any python functions/libraries to achieve a glossy/matte finish on lips. Thanks in advance. I’ve two questions #1. While extracting the ROI of the face region as a separate image, on line 52 why have you used roi = image[y:y+h, x:x+w] . Shouldn’t it be the reverse ? i.e. roi = image[x:x+w, y:y+h] ?? #2. What does INTER_CUBIC mean ? I’ve checked the documentation. It says INTER_CUBIC is slow. So, why use it at the first place if you’ve a better alternate(INTER_LINEAR) available ? Thanks in advance. 1. 
No, images are matrices. We access an individual element of a matrix by supplying the row value first (the y-coordinate) followed by the column number (the x-coordinate). Therefore, roi = image[y:y+h, x:x+w] is correct, although it may feel awkward to write.

2. This implies that we are doing cubic interpolation, which is indeed slower than linear interpolation, but is better at upsampling images.

If you’re just getting started with computer vision, image processing, and OpenCV I would definitely suggest reading through Practical Python and OpenCV as this will help you learn the fundamentals quickly. Be sure to take a look!

Dear Dr Adrian, Suppose a camera was fitted with a fisheye lens. Recall that a fisheye lens produces a wide-angled image. As a result the image will be distorted. Question: if the image is distorted, is there a way of ‘processing’/‘correcting’ the distorted image to a normal image and then applying face detection? Alternatively, if the camera has a fisheye lens, can a detection algorithm such as yours handle face detection? Alternatively, is there a correction algorithm for a fisheye lens? Thank you, Anthony of Sydney Australia

If you’re using a fisheye lens you can actually “correct” for the fisheye distortion. I would suggest starting here for more details.

Hi Adrian, thanks a lot for this blog and all the others too; I have learnt a lot from you. I have a few questions please. 1. If I want to detect a smile on a face by measuring the distance between landmarks 49 and 65, applying the simple distance formula where the unit of distance will be pixels, how can I know the x and y coordinates for particular landmarks so I can apply the mathematics and compare with a database image? 2. I want to do both face recognition and emotion detection, so is there any way I can make it faster? At least near to real time? Stay blessed, Memoona

1. It’s certainly possible to build a smile detector using facial landmarks, but it would be very error prone.
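As an editorial aside, here is what that distance measurement would look like once you have the shape array from face_utils.shape_to_np. The coordinates are dummy values, and note the off-by-one: the 68-point figure is 1-indexed while the NumPy array is 0-indexed, so points 49 and 65 live at indexes 48 and 64:

```python
import numpy as np

# Dummy 68x2 landmark array standing in for the real detector output.
shape = np.zeros((68, 2), dtype="int")
shape[48] = (120, 200)  # point 49 in the figure (hypothetical coordinates)
shape[64] = (180, 210)  # point 65 in the figure (hypothetical coordinates)

# Euclidean distance, in pixels, between the two landmarks.
dist = np.linalg.norm(shape[48] - shape[64])
print(dist)  # about 60.8 pixels for these dummy values
```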
Instead, it would be better to train a machine learning classifier on smiling vs. not smiling faces. I cover how to do this inside Deep Learning for Computer Vision with Python.

2. I also cover real-time facial expression/emotion detection inside Deep Learning for Computer Vision with Python as well.

Hi Adrian, is it possible to do face verification with your face recognition code? For example, the input is two images: one from my company ID card, which has my face, and the other a selfie image. I need to compare them and find out whether both show the same person.

You can’t do face verification directly with facial landmarks, although dlib does support the ability for face verification.

File “detect_face_parts.py”, line 5, in from imutils import face_utils ImportError: cannot import name face_utils. Sir, can I know the solution for this error?

Make sure you update your version of “imutils” to the latest version: $ pip install imutils

Dear Adrian, thanks so much for your clear explanation. This blog is very useful; I’ve learnt about computer vision in a couple of days. I have a problem when I’m trying to execute this program in the Ubuntu terminal. The following error appears: Illegal instruction (core dumped). I’ve read about it and it’s probably a Boost.Python problem. Can you give me some help to solve this problem?

I would insert a bunch of “print” statements to determine which line is throwing the error. My guess is that the error is coming from “import dlib”, in which case you are importing the library into a different Python version than it was compiled against.

Hey Mr. Adrian, nice tutorial. I wanted to ask, can I do this for Android, since it requires too many libraries? I’m trying to create an augmented reality program for Android using the Unity game engine, so can you tell me how this relates to Unity?

I don’t have any experience developing Android applications. There are Java + OpenCV bindings, but you’ll need to look for a Java implementation of facial landmarks.
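Coming back to the ROI-indexing question a few comments up: NumPy arrays (and therefore OpenCV images) are indexed rows first, which a tiny sketch makes concrete (the bounding box values are illustrative only):

```python
import numpy as np

# A fake 100x200 "image": 100 rows (height) by 200 columns (width).
image = np.zeros((100, 200), dtype="uint8")

# A hypothetical face bounding box: top-left corner (x, y), size w x h.
x, y, w, h = 30, 10, 50, 40

# NumPy slices rows first (y), then columns (x), so the ROI is:
roi = image[y:y + h, x:x + w]

print(roi.shape)  # (40, 50): h rows by w columns
```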
Dear Adrian, Thanks a lot for all your works in this blog and codes. It is really amazing that it can detect eyes more accurate than many other face detection APIs. There is a question I want to ask. Recently, I am doing a research which needs really accurate eyes landmarks and this tutorial almost meets my needs. However, I also need the landmark of pupils. Have you ever done it before? Or how can I get an accurate pupil landmark of the eye. I personally haven’t done any work in pupil detection, but I’ve heard that others have had good luck with this tutorial. Hello Adrian, I realize this is a very late response, but thank you so much for your in depth blog post. With regards to finding the pupil landmark, is it possible to infer it by using the two landmarks for each of the eyelids as a sort of bounding box for the pupil and calculate the coordinates of the center? This would theoretically be the pupil landmark. Do you think this is precise enough? I am interested in figuring this out because I want to see if I can accurately calculate the pupillary distance of a person this way. Do let me known if I am missing anything 🙂 Thank you so much for this tutorial! It worked perfectly. I have a question though, can this method of detecting facial features work with images that does not contain whole faces? For example, the picture that I’m about to process only contains the nose, eyes and eyebrows (basically zoomed up images). Or does it only work on images with all the facial features specified above? I’m actually trying to detect the center of the nose using zoomed up images that only contain the eyes and the nose and a little bit of the eyebrows. If this method of detection will not work, can you please suggest any other method that I can use. Thank you so much. 🙂 If you can detect the face via a Haar cascade or HOG + Linear SVM detector provided by dlib then you can fit the facial landmarks to the face. 
The problem is that if you do not have the entire face in view, then the landmarks may not be entirely accurate. If you are working with zoomed-in images, I would suggest training your own custom nose and eye detectors. OpenCV has a number of Haar cascades for this in their GitHub. However, I would suggest training your own custom object detector, which is covered in detail inside the PyImageSearch Gurus course. I hope that helps!

Hi Adrian, fantastic post. After using HOG I am able to track the landmarks of the face in the video. But is it possible to track the face just the way you did for the green ball example, so as to track a person's attention? Like if he moves his face up, down, or sideways there has to be a prompt like "subject is distracted". Help much appreciated.

You could monitor the (x, y)-coordinates of the facial landmarks. If they change in direction you can use this to determine if the person is changing the viewing angle of their face.

How will I detect the nose, eyes and other features in the face? I am a beginner. Thanks in advance.

Hi Abhranil — I’m not sure what you mean. This blog post explains how to extract the nose, eyes, etc.

Hello Adrian, I need to detect a face when the eyes are covered with a hand using a 2D video. I couldn’t do this because both the eyes and the hands are skin colored. Could you please help me?

I would suggest using either pre-trained OpenCV Haar cascades for nose/lip detection or training your own classifier here. This will help if the eyes are obstructed.

Thank you so much for the response. My aim is to detect the situation of hand and face occlusion. I want to check whether a person is covering their eyes with their hands or not. Thank you

Hi sir!! Thanks a lot for your wonderful blog posts. I did the face recognition API and eye detection from your blog only. Now I am trying to do an eye recognition API; I detected the pupil and iris but I don't know how to recognize it. Can you please help me!!
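The monitor-the-coordinates suggestion above can be sketched as follows. The nose-tip positions and the movement threshold are hypothetical; in a real application each point would come from the shape array of the current frame:

```python
import numpy as np

def displacement(prev_pt, cur_pt):
    # Per-frame movement of a single landmark, in pixels.
    return float(np.linalg.norm(np.subtract(cur_pt, prev_pt)))

# Hypothetical nose-tip positions over three consecutive frames.
frames = [(100, 80), (100, 82), (130, 90)]

THRESH = 20  # assumed pixel threshold for "subject moved"
for prev, cur in zip(frames, frames[1:]):
    if displacement(prev, cur) > THRESH:
        print("subject is distracted")
```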
How do I get all the points and store them in a text file for each image?

You would simply use your favorite serialization library such as pickle, json, etc. and save the shape object to disk.

Hi Adrian, thanks a lot for a wonderful blog. It’s so good to see people sharing knowledge and motivating people to take up the field more seriously. You have made difficult concepts really easy. Thanks once again. I am presently working on lip reading; can you suggest some blogs of yours or a repo where I can find work along similar lines? Thanks in advance

I don’t have any prior experience with lip reading but I would suggest reading this IEEE article to help you get started.

Hi, you are publishing cool stuff here, thank you Adrian, but I believe the image of facial landmarks is not right: you started from 1 to 68 while the categories mention 0 to 68, and it does not match the original dlib landmark numbering. Just saying, thank you again.

Hi Arash — I’ve actually updated the code in the “imutils” library so the indexing is correct.

Hi, I love your work on computer vision and deep learning and have been learning a lot from you. Can you make a post regarding face profile detection? I have tried using “bob.ip.facelandmarks” but it does not work on Windows. Can you help?

I have not worked with profile-based landmark detection but I will consider this for a blog post in the future.

Really, really amazing article, sir. Please provide me the syntax explanation of the code from lines 63 to 69 in the visualize_facial_landmarks() function, and why did you find the convex hull?

Lines 63-69 handle the special case of drawing the jawline. We loop over the points and just “connect the dots” by drawing lines in between each (x, y)-coordinate. Otherwise, we compute the convex hull so when we draw the shape it encompasses all the points.

Hi, can this library detect change in landmarks? Like when we raise our right eyebrow, would this be able to return us the changed position of the eyebrow? Thanks.
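On the serialization question above, a minimal sketch using the standard-library json module (so the points land in a plain text file, as asked). The 68x2 array is a dummy stand-in for the detector output:

```python
import json
import numpy as np

def save_landmarks(shape, path):
    # JSON is a text format, so convert the NumPy array to plain lists.
    with open(path, "w") as f:
        json.dump(shape.tolist(), f)

def load_landmarks(path):
    with open(path) as f:
        return np.array(json.load(f), dtype="int")

# Usage with a dummy landmark array:
# shape = np.zeros((68, 2), dtype="int")
# save_landmarks(shape, "image_001_landmarks.json")
```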
You would need to implement that tracking yourself. A good example of monitoring keypoints can be found in this tutorial on drowsiness detection. Dear Adrian, Your explanation is fantastic. I learned a lot. Sir actually I am trying to develop a project based on face recognition. Some how I want to measure distance between each parts(Mouth, Right eyebrow, Left eyebrow, Right eye, Left eye, Nose, Jawline) of face. Or I want the real position (x,y) of parts of face in the resized image. Sir is it possible? Any reply will be very helpful. Thank you so much. We typically don’t use facial landmarks to perform face recognition. Algorithms for face recognition include Eigenfaces, Fisherfaces, and LBPs for face recognition. I cover these algorithms inside the PyImageSearch Gurus course. Hi Adrian, I was seriously following you posts 4-5 months before. Then I had my own research work in some other area. Now back to vision processing and deep learning. I just moved all your mails to a folder. I have just started to read the mails one by one. Really I am interested with your posts and works. You will see me in frequent comments and queries. Hi Suganya — thank you for being a PyImageSearch reader — without you, blog posts, books, and courses wouldn’t be possible. Are you done with your research work now? Are you interested in doing research or hobby work in image processing and deep learning? That was my Ph.D work. I was busy in writing manuscripts. Now it is over. I was doing my research on Video coding with non-linear representations. I want to enter into the emerging vision tech and deep learning for my future research works. Previously I followed your blogs and posts to complete a project with real time video processing (Motion detection) for remote alert. Now I am back. Hi Adrian, Can you help me how to detect the eyes down in face Thanks does finding the distance between landmarks could help to recognize the face. 
Every person’s landmarks are different so would it be a good approach to recognize the face using this?If so then please give me some hints regarding that. You would want to use a face embedding which would produce a vector used to quantify the face. The OpenFace library would be a good start. I also cover face recognition inside the PyImageSearch Gurus course. can i ask you about the algorithm under this code and reference paper Great tutorial and maybe you can help me. I am looking for a way to recognize very similar object to each other distinguishing them. For example I’d like to identify different kind of handmade pots creating a pattern witch can help me to recognize the category who the pot belongs. There is an other problem. Several of this pots are broken so I need to recognize just a part of them. Sometimes I have just the foot or sometimes i’ve got just the head. So I need a function lets me create a landmark pattern (like face landmark) to identify the objects. Could you tell me the right way to built it? How can I create my own dataset for objects? I just need the right approach to start. Thanks a lot Hey Stefano — there are a few ways to approach this problem, but I would start with using keypoint detection + local invariant features + feature matching. The final chapter of my book, Practical Python and OpenCV, demonstrates how to build a system to recognize the cover of a book — this method could be applied to your problem of recognize pots/pot categories. To make the system more robust and scalable take a look at the bag of visual words (BOVW) model. I cover BOVW and how to train classifiers on top of them in-depth inside the PyImageSearch Gurus course. I hope that helps! Hey Adrian! Thanks for your great tutorial. I have a question! I need to detect only the eye landmarks, in an eye image. You know if is it posible to used This dlib’s pre-trained facial landmark detector directly in an eye image, without detecting faces? 
Thanks a lot Unfortunately, no. You need to detect the face first in order to localize the eye region. You may consider applying an eye detector, such as OpenCV’s Haar cascades — but then you still need to localize the landmarks of the eye. This would likely entail training your own eye landmark detector. Ok! Thank You! Im already have the eye region localized, so I suppose that the only possibility now is to train some eye landmark detector I want to recognize and identify each part of the body so that I can accurately determine that this is the eye of a particular person, how can this be done in real time? Is there a particular reason you need to identify each part of the body if you’re only interested in the eye? Hi. Awesome tutorial. How if i want a particularly part ie eyes ? I’m not sure what you mean by particular part of the eye? Hi, your work is great thank you for this post. I have a question. I have used dlib for face landmark detection and now I want to detect using face landmark coordinates cheeks and forehead area and use ROI. How I can do this with dlib You’ll basically want to develop heuristic percentages. For example, if you know the entire bounding box of the face, the forehead region will likely be 30-40% of the face height above the detected face. You could define similar heuristics based on the eyes as well. For the cheeks try computing a rectangle between the left/right boundaries of the face, the lower part of the eye, and the lower part of the face bounding box. To follow up on my comment: If I need the forehead region to be accurately outlined (like the jaw), are you saying that there are no pre-trained models for this? I personally have not encountered any. Thank You Hi Adrian, Is there a way to find other facial features like fore head, right cheek and left cheek from this? Regards Sekar No, not with the standard facial landmark detector included with dlib. 
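As a concrete version of the heuristic described above: the ratio is the 30-40% guess from the reply, not a dlib constant, so treat it as a starting point to tune:

```python
def forehead_box(face_box, ratio=0.40):
    # face_box is (x, y, w, h) from the face detector; the forehead is
    # approximated as a region of the same width sitting directly above
    # the detected face, `ratio` times the face height tall.
    x, y, w, h = face_box
    fh = int(h * ratio)
    return (x, max(0, y - fh), w, fh)

print(forehead_box((50, 100, 80, 120)))  # (50, 52, 80, 48)
```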
That said, see my reply to “Sabina” on February 12, 2018 where I discuss how you can extract the forehead and cheek regions.

I need to visualize only the lips facial landmarks. How can I do it? Regards

This post demonstrates how you can extract the facial landmarks for the mouth region. Figure 1 in particular shows you the indexes for the upper and lower lip. You can use these indexes to extract or visualize the lip regions.

Hi Adrian, I am getting this error. How to solve it? usage: detect_face_parts.py [-h] -p SHAPE_PREDICTOR -i IMAGE detect_face_parts.py: error: the following arguments are required: -p/--shape-predictor, -i/--image Regards, Dinesh Kumar.K

You need to supply the command line arguments when executing the script via the terminal. If you are new to command line arguments that’s okay, but you should spend some time reading up on them before continuing.

I got the output. Thank you so much

Hi, I am also getting the same error. Could you please help me sort out this issue? Thanks

Thanks, I got the output, thanks

Any idea how I would determine if the mouth landmark points are moving? Sorry, pressed submit too fast. I meant that I’d like to determine if the mouth is moving in a VIDEO. The algorithm should work regardless of variance in position, rotation, and scale (i.e. zoom).

I would suggest computing the centroid of the mouth facial landmark coordinates and then passing it into a deque data structure, similar to what I do in this blog post.

Sir, I have an error when we compile our code: "dlib module not found". How to fix this error? Please tell me a specific command to install the dlib module on Windows with Anaconda.

You should double-check that dlib successfully installed on your machine. It sounds like you did not install dlib correctly. I haven’t used Windows in a good many years (and only formally support Linux and macOS on this blog) so I need to refer you to the official dlib install instructions.
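For the mouth-movement question above, the centroid-plus-deque idea might look like the following sketch (the maxlen and the per-frame input are assumptions; the real mouth points are shape[48:68]):

```python
from collections import deque
import numpy as np

pts = deque(maxlen=32)  # rolling history of mouth centroids

def update(mouth_points):
    # mouth_points: (20, 2) slice of the landmark array (indexes 48-67).
    centroid = mouth_points.mean(axis=0)
    pts.appendleft(centroid)
    # Movement = distance between the two most recent centroids.
    if len(pts) >= 2:
        return float(np.linalg.norm(pts[0] - pts[1]))
    return 0.0
```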
I hope at the very least that helps point you in the right direction. You are such a freaking guy wherever i go and run to look for the files i can’t find anything really except the usage of your library with those 68 points can you please tell me in an easier way how did you construct your library when i look at your imutils library it has lots of other things which i don’t need i want to use plane Open CV to reduce my memory. So, can you please guide me that? Hey Vamshi: First, I’ve replied to your previous comments in the original blog post you commented on. Please read them. Secondly, I’ve explained why (or why not) you may want to use imutils in those comments. It’s an open source library. You are free to modify the code. You can strip parts out of it. You can add to it. It’s open source. Have fun from it and learn from it; however, I cannot provide you exact code to solve your exact solution. You will need to work on the project yourself. If you’re new to Python, OpenCV, or image processing in general, that’s totally okay — but you will need to educate yourself along the way. Thirdly, I do not appreciate your tone, both in this comment and your previous ones. Please stop and be more considerate and professional. I am making these (free) tutorials available for you to learn from. I’m also taking time to help you with your questions. If you cannot do me this courtesy I will not be able to help you. Thank you. Great tutorial and very helpful I have a question..What is the best way to detect irises? I do not have any tutorials on iris detection but I know other PyImageSearch readers have tried this method. Hi Adrian, do you have a tutorial where i can copy each pixel inside the jaw line and save it to a file while making the other parts transparent? I do not, but you can certainly implement that method yourself without too much of an issue. You’ll want to create a mask for the jaw region, apply a bitwise AND, and then save the resulting masked image to disk. 
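The mask-and-extract approach just described could be sketched like this. To stay self-contained the sketch masks the bounding box of the points with pure NumPy; a full solution would fill the exact jawline polygon (e.g. with cv2.fillConvexPoly) before the bitwise AND:

```python
import numpy as np

def extract_region(image, points):
    # `points` is an (N, 2) array of (x, y) landmark coordinates, e.g.
    # the jawline points from the shape array.
    mask = np.zeros(image.shape[:2], dtype="uint8")
    x0, y0 = points.min(axis=0)
    x1, y1 = points.max(axis=0)
    mask[y0:y1 + 1, x0:x1 + 1] = 255
    # Bitwise-AND equivalent: keep pixels where the mask is set.
    return np.where(mask[..., None] == 255, image, 0)
```

The returned image can then be written to disk (e.g. with cv2.imwrite).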
If you’re new to masking and image processing basics that’s totally okay but I would recommend learning the fundamentals first — my personal suggestion would be to refer to Practical Python and OpenCV.

I want to draw a curve along the lips but don’t know how to access points 48 to 68. How can I do that?

There are a few ways to do this. The easiest would be to take the raw “shape” array from Line 35 and take the indexes of 48 and 68 (keeping in mind that Python is zero-indexed).

Hey Doctor, how do you detect the eyes of people who wear glasses?

How can I output an overlaid image like your Figure 2?

The visualize_facial_landmarks function discussed in the blog post will generate the overlaid image for you.

Hi, can you please tell me how to get whole-face landmarks including the forehead, and how to detect hair?

There are no facial landmarks for the forehead region. You could either train your own facial landmark predictor (which would require labeled data, of course), or you can try using heuristics, such as “the forehead region is above the eyes and 40% as tall as the rest of the face”. You’ll want to experiment with that, of course.

Hi Adrian, I was trying to extract the eye features, in the sense of the pupil movement in the eye. To do that I suppose I would have to increase the points on the face. Can you tell me a way to do that?

Unfortunately you would need to train your own custom facial landmark detector to include more points. You cannot “add more points” to the current model.

Hey Adrian, thanks a lot for such a wonderful tutorial. Your blogs are the most informative and detailed ones available on the internet today.
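The indexes-48-to-68 answer above boils down to a single NumPy slice (the array below is a dummy stand-in for the shape_to_np output):

```python
import numpy as np

# Dummy stand-in for the 68x2 array from face_utils.shape_to_np.
shape = np.arange(68 * 2).reshape(68, 2)

# Mouth landmarks are points 49-68 in the 1-indexed figure, i.e. array
# indexes 48-67, so slice up to (but not including) 68.
mouth = shape[48:68]
print(mouth.shape)  # (20, 2)
```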
I have a small problem and I hope you will solve it.When I am providing images with side faces as input (in which only one eye is visible), the above code generates a wrong ROI for the eye which is not even in the frame.Can you please suggest some idea so that I can exclude the ROIs for the features which are not there in the image and display only the ones that are visible. So the face is correctly detected but then the eye location is not even on the frame? That’s really odd as this post is used to detect blinks without a problem using facial landmarks. Dear Adrian, This is an amazing post. I have a question, when we detect eyebrows, nose, lips, there are sharp edges; specifically in eyebrows. if you try to change the color you will notice this very easily. So how to remove these sharp edges I’m not sure what you mean by “sharp edges”. Do you have a screenshot you could share? Hi Adrian, Fantastic tutorial. I am doing a research on improving the speed for attendance system using facial recognition. I wanted to first classify the database to different group of database (based on Facial features such as the size of eyes or nose). Hence in a large group of students, the matching would be faster. Would you advise how I can do that? And is there an algorithm to calculate the size of eyes after detecting the eyes? Look forward to talk more with you on this. Thanks!!! That’s likely overkill for a face recognition system and actually likely prone to errors. You should instead leverage a model that can produce robust facial embeddings. See this blog post for more information on face recognition. How can we do exactly this, but instead of using the 68 landmarks we use the 194 landmarks provided by the the Helen dataset? Since the orderedDict specifies where the facial parts are using 1 to 68. If you are using a different facial landmark predictor you will need to update the OrderedDict code to correctly point to the new coordinates. 
You’ll want to refer to the documentation of your facial landmark predictor for the coordinates. You’re a champ brother. Im struggling to learn to find if two given images of a same person matches. can you explain the concept or if you have already done. can you share the blog? Please You mean face recognition? Thanks for your interesting article. Why imutils library does not exist in github anymore? I’m not sure what you mean Amir — the imutils library still exists on GitHub: Hi Adrian, I want to do it with a lot of images that are in a directory all at once. How i can do it? Thanks 🙂 You can use the list_imagesfunction from the imutols library to loop over all images in your input directory. Refer to this blog post for an example. HI Adrian, thank you for such an amazing tutorial, I’ve learnt a lot from this. I have 2 questions, First, can I change the dots that detect the eyes to a line that passes through all the dots? Second, after detecting the face parts, I’ve detected only the eyes. However while using face_utils.visualize_facial_landmarks(image, shape) all the face parts are detected. So my question is can we edit the visualize_facial_landmarks function so that it colours only the sliced points(the eyes for example) and not all the face parts. 1. Are you referring to a “line of best fit”? If so, take the (x, y)-coordinates for each eye and fit a line to them. 2. I think this followup post will help you out. Hi Adrian, I’ve checked the post. But I need to use visualize_facial_landmarks function to only colour the eyes in a transparent colour. Could you please help me on that? Hey Rahil, I’m happy to help out and point you in the right direction but I cannot write the code for you. The original visualize_facial_landmarks function is in my GitHub. You can modify the function to only draw the eyes by using an “if” statement in the function to check if we are investigating the eyes. Oh great I found my error. 
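The line-of-best-fit suggestion a little further up could be sketched with np.polyfit over one eye's six landmark points (the coordinates below are dummies; the real ones come from the eye slice of the shape array):

```python
import numpy as np

# Six hypothetical (x, y) points for one eye.
eye = np.array([(10, 5), (12, 4), (14, 4), (16, 5), (14, 6), (12, 6)])

# Fit y = m*x + b through the points (degree-1 least squares).
m, b = np.polyfit(eye[:, 0], eye[:, 1], 1)

# The two endpoints of the fitted line, ready for cv2.line:
x0, x1 = eye[:, 0].min(), eye[:, 0].max()
p0 = (int(x0), int(round(m * x0 + b)))
p1 = (int(x1), int(round(m * x1 + b)))
```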
Thank you so much I found my error, thank you so much Awesome, congrats on resolving the issue! Hi Adrian. Excellent Guide. Would you happen to know of any facial recognition models out there that also extract the hairline (yielding a closed circle for the entire face)? For my application, I need to have the forehead extracted as well and I am having trouble finding trained models with these points extracted. Thank you in advance for your help! What if at last, I want to color only one feature? For eg just mouth and nothing else. How could it be done? Hey Muskan, I’ve addressed this question a number of times in the comments section. Please read through them. Hey Adrian, thank you for such an amazing post, I’ve learnt a lot from this. But I need to detect the only mouth and colour only the mouth region.so what changes will be needed in this code? Please see my previous reply to you — you will need to implement your own custom “visualize_facial_landmarks” function. Again, see my previous reply. Hey Adrian, your post is amazing. can you please tell me how to change the colour of detected parts? You can pass in your own custom “colors” list to the visualize_facial_landmarks function. Can you please provide steps for doing it? Got it.Thank you so much Hello Adrian!, Amazing post and thank you for the 17day crash course. Have registered for that course. I am working to classify facial expressions and for that region around mouth is crucial, As I can see from this post, we can easily get ‘mouth’ part. Any thoughts on how can I get more area around mouth which is closer to nose but not including nose. For ex. When we laugh, there are some lines which develop on sides of nose till mouth. Can we get that ? Hey Pradeep, thanks for the comment! So stoked to hear you are enjoyed the crash course! As for your question, typically we wouldn’t directly use facial landmarks for emotion/facial expression recognition. It’s possible for sure, but such a system would be fragile. 
I would instead recommend training a machine learning model on the detected faces themselves. For what it’s worth, I actually do have a chapter dedicated to emotion/facial expression recognition inside Deep Learning for Computer Vision with Python. Hi Adrian, Is it possible to only have an overlay for the lips? Like how the output would only add “lipstick” to the face and not to the entire mouth region. Thank you There are a few ways to go about it but you’ll ultimately need alpha blending. Refer to this tutorial for an example. Thanks Adrian for you amazing post! Thanks Ali! Hi, Adrian! Just wanted to let you know that the link to “visualize_facial_landmarks” is broken. Thanks, I have updated the link. Hi Adrian, Thanks for the wonderful post – Lot to learn – Amazing article. Thanks again, Deb Thanks so much, Deb! Hello Adrian, What if I need the hair too in the cropped face? Are you trying to create a binary mask for the hair? Or just extract the face + forehead region? No I am not trying to create a binary mask! I need a cropped face with hair without any background. Not only face+forehead. In order to create a cropped face + hair without any background you will need to create a binary mask first. The binary mask is what enables you to segment the background from the foreground. Can you please help me with that? Any links or something would be very helpful! Thank you in advance. I would suggest taking a look at instance segmentation algorithms such as Mask R-CNN, U-Net, and FCN. Hi, Mr Adrian, thanks for sharing. what a cool post! I wanna ask u about how can the face detector read the landmarks if the person in the video is not stable? move forward and backward or left and right? Thank u so much, Mr! Applying facial landmark prediction is a two step process. First the face must be detected. Then landmarks are predicted. As long as the face is detected you can estimate the landmarks. 
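On the custom colors list mentioned a few replies up: visualize_facial_landmarks takes one BGR tuple per facial region, in the same order as the FACIAL_LANDMARKS_IDXS dictionary (seven regions in the version shown in this post). The values below are example colors, not required defaults:

```python
# One BGR tuple per region: mouth, right_eyebrow, left_eyebrow,
# right_eye, left_eye, nose, jaw (order must match FACIAL_LANDMARKS_IDXS).
colors = [
    (19, 199, 109),   # mouth
    (79, 76, 240),    # right eyebrow
    (230, 159, 23),   # left eyebrow
    (168, 100, 168),  # right eye
    (158, 163, 32),   # left eye
    (163, 38, 32),    # nose
    (180, 42, 220),   # jaw
]

# Hypothetical call; `image` and `shape` come from the post's code:
# output = face_utils.visualize_facial_landmarks(image, shape, colors=colors)
```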
Hello, is it possible to have a drawing of a face with the numbers for the 194 points? To locate the position of each face feature as a function of the numbers. Thank you. Figure 1 shows the indexes of the facial landmarks. Is it possible to detect sunglasses? If yes, then how? Detect sunglasses in general? Or eyes behind sunglasses? If you simply want to detect sunglasses, you could train an object detector to detect sunglasses. The HOG + Linear SVM detector would be a good first step. Very helpful, thank you very much. You are welcome! Hello, is it possible to detect only the eyes and save the image of the eyes as another image using a part of the code given? Thank you. Yes, it is. See this tutorial as an example of exactly that. Hi Adrian, really good and helpful post. I am your fan and was a silent reader for the past 3 years. Do you mind sharing some code to do the following sequence: I have thousands of image frames with labels, filename.ImageFrame.class.jpg (class is 0 or 1). I want to: 1) Detect Face, Left_Eye, Right_Eye, Face_Grid from a list of image frames in a folder. 2) Create rectangles for Face, Left_Eye, Right_Eye, Face_Grid. 3) Extract the detected Face, Left_Eye, Right_Eye, Face_Grid as npy arrays (npz file)... Output as Face.npy, Left_Eye.npy, Right_Eye.npy, Face_Grid.npy and y.npy (label 0 or 1). I want to feed these outputs to a pre-trained model for a classification problem. Can someone help? Really appreciate your kindness. I provide the PyImageSearch blog as an education blog. These tutorials are free for you to use. I would suggest you give it a try yourself. Experiment with the code. Struggle with it. Teach yourself something new. It's good to push your boundaries, I have faith in you! Hi Adrian, the post is very nice and well explained. One question I have is regarding missing face regions. The predictor always finds all the regions, and even side-face images result in two detected eyes.
It's like there is a correlation calculation that fits the regions best rather than actually searching for them. What am I missing? I tried this code, but I got some errors installing the dlib package on Windows. Can anyone give me a solution for it? Hi, can you recommend an open-source Python code that determines whether the face is being blocked, or any face-quality estimation Python code? Because nowadays many face recognition models can detect a face even with only the two eyes showing. I'm having difficulty finding a facial landmark detector that doesn't produce a result when the face is blocked, so that I can identify that the face is being blocked. I found the OpenCV Haar cascade mouth, eye, and nose detectors, but the accuracy is very bad. Sorry, I do not have any source code for that use case. Hey man, can you build a basic lip reader with some lip movements? I am waiting for that. Thanks for the suggestion and I'll definitely consider it, although I cannot guarantee if/when I will be able to cover it. Hello Adrian, what if I want to detect faces and crop them like this? I have tried many ways, but I have not managed to get such results. Thank you for the help. See this tutorial. Hey, when we run your code we can only detect the mouth; how can we detect the nose and eyes as well? This code shows you how to detect the nose, eyes, etc., so I'm not sure what your question is? Hi Adrian, I need to extract the exact shape of the lips from an image. The image will contain only the lips of the user. The user might or might not be smiling, showing teeth, different skin tones, etc. Do you think your code will help achieve this? Hi, you have given such a great tutorial for OpenCV, thank you so much. Please tell me how to do lip reading using OpenCV and a Raspberry Pi. I have been following your facial landmarks tutorial, which is awesome. Thank you. Sorry, I don't have any tutorials on lip reading at the moment. Thank you for this amazing blog. It is really informative. Is there any way I can make this run in real time using a webcam?
Yes, see this tutorial. If you are working on a Mac, use XQuartz; on Windows, use Xming. Dear Adrian, first of all, thanks for your awesome blog posts, which I have learned from. I'm new to app development. I would like to develop some kind of Android mobile application using OpenCV and the mobile camera to detect eyes, nose, lips, and jaw. Could you tell me what I am supposed to do, or suggest some tutorials? Kind regards, and thanks. Sorry, I don't cover Android development here. I would suggest looking into using OpenCV's Java bindings.
IPFS is the InterPlanetary File System: a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content addressing to uniquely identify each file in a global namespace connecting all computing devices. Recently I've implemented a sample project that allows us to encrypt/decrypt files before storing them on IPFS, which is an ideal solution for hosting and securing any sensitive data. GitHub project: On my GitHub page I explain how to encrypt files prior to uploading them to IPFS. Similarly, it can decrypt and download these files. The solution uses both the RSA and AES encryption algorithms to achieve maximum security.

Why IPFS? IPFS dominates over BitTorrent in terms of availability and performance:

- Due to content addressing, it prevents file duplication.
- Individual files can be easily downloaded from some "source"; with BitTorrent, one has to create a ".torrent" file, submit it to tracker(s), and seed it. IPFS, on the other hand, is much faster at making files available for sharing.
- IPFS files can be distributed and load-balanced, making it a perfect CDN solution. This isn't possible with BitTorrent at all.
- File streaming works out of the box over HTTP in IPFS, whereas streaming in BitTorrent is a paid feature.
- Large files are chunked/sharded, so one can download chunks from different nodes and maximize bandwidth usage. This is done in both IPFS and BitTorrent.
- BitTorrent has a high barrier to entry for new people trying to share files, whereas IPFS easily integrates with a drag-and-drop interface.
- With IPFS, one chooses which files to "seed", while BitTorrent requires you to seed all files within the torrent. (BitTorrent clients did improve over the years: it is possible to download file subsets, and it may be possible to seed file subsets.)
- IPFS works over HTTP REST, whereas torrents only work over the BitTorrent protocol.
This makes it harder for the community to build p2p apps/services/solutions.
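The RSA + AES combination mentioned above follows the standard hybrid-encryption pattern: generate a random symmetric key per file, encrypt the file with it (AES), then encrypt that key with the recipient's public key (RSA) and store both ciphertexts on IPFS. Here is a sketch of just the pattern, with a toy XOR keystream standing in for AES and a shared key standing in for the RSA keypair — illustrative only, not real cryptography:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher' (stand-in for AES): XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hybrid_encrypt(plaintext: bytes, wrap_key: bytes):
    """Encrypt data with a fresh session key, then wrap that key."""
    session_key = os.urandom(16)                     # random per-file key
    ciphertext = xor_cipher(plaintext, session_key)
    wrapped_key = xor_cipher(session_key, wrap_key)  # stand-in for RSA-encrypting the key
    return ciphertext, wrapped_key

def hybrid_decrypt(ciphertext: bytes, wrapped_key: bytes, wrap_key: bytes) -> bytes:
    session_key = xor_cipher(wrapped_key, wrap_key)  # unwrap the session key
    return xor_cipher(ciphertext, session_key)       # XOR is its own inverse

recipient_key = os.urandom(16)   # stand-in for an RSA keypair
ct, wk = hybrid_encrypt(b"secret file contents", recipient_key)
print(hybrid_decrypt(ct, wk, recipient_key))  # b'secret file contents'
```

In the real project the same shape applies, but the symmetric step uses AES and the key-wrapping step uses the recipient's RSA public key, so only the matching private key can recover the file.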
EASY Radio Receiver Arduino

Introduction: Hi, this is my first instructable, but it isn't my first Arduino project... Let's go...

Step 1: Get All Needed Parts (Parts List)
1 - Arduino Uno/Nano
1 - 10k potentiometer
1 - TEA5767 radio receiver
Some wires

Step 2: Connect Wires
Connect all wires as shown in the picture...

Step 3: Build Code
This is the code that you will use. You will need the TEA5767Radio.h library for Arduino. So, the includes will be:

#include <Wire.h>
#include <TEA5767Radio.h>
#include <math.h> // this is to round the read values

Then we will declare variables:

TEA5767Radio radio = TEA5767Radio(); // declare radio as TEA5767Radio type
float freq, lastfreq; // freq and lastfreq should be float because we will use decimal values in MHz
int sensorValue; // this is to read the potentiometer value

Now let's set up the Arduino:

void setup(){
  Wire.begin();
  Serial.begin(9600);
}

Then let's create our loop:

void loop() {
  sensorValue = analogRead(A0);
  freq = (sensorValue * (20.5 / 1023.0)) + 87.5;
  freq = freq * 10.0f;
  freq = (freq > (floor(freq) + 0.5f)) ? ceil(freq) : floor(freq);
  freq = freq / 10.0f; // round the freq value to 1 decimal place
  if(lastfreq != freq){
    lastfreq = freq; // save frequency to check if frequency was changed
    radio.setFrequency(freq); // set chosen frequency
    Serial.print("Frequency: ");
    Serial.print(freq);
    Serial.println("MHz");
  }
}

Step 4: Finally
Finally, connect your headphones, start your Arduino, and start listening to your favourite radio...

What is the range of frequencies that we can hear? I'm sorry for the late answer... do you want to know the range that we can hear, or the range we can set on this? We can hear roughly 20 Hz to 20,000 Hz. In the project we can set between 76 MHz and 108 MHz. Very good project, simple and effective to start using the radio module.
I suggest that you add a couple of lines to the wiring description to state that each Arduino board has different SDA and SCL pins, so if you do not use an UNO (or Nano) you have to change the wiring accordingly. For example, the MEGA has: SDA = pin 20, SCL = pin 21. Thanks for your tip... I'll update the post... but I need to test it first.. :P Nice project :) Thanks... :D Cool Arduino project. Where can you buy the receiver module? Hi, thanks, I bought it on eBay for US$4...
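The frequency math in the sketch maps the 10-bit ADC reading (0–1023) onto the 87.5–108.0 MHz band and then rounds to one decimal place. The same arithmetic transcribed into Python for checking (the constants come straight from the Arduino code):

```python
import math

def adc_to_mhz(adc_value: int) -> float:
    """Map a 10-bit ADC reading to an FM frequency, rounded to 0.1 MHz."""
    freq = adc_value * (20.5 / 1023.0) + 87.5   # 0 -> 87.5 MHz, 1023 -> 108.0 MHz
    freq *= 10.0
    # Same round-half-up trick as the sketch: ceil above x.5, floor otherwise.
    freq = math.ceil(freq) if freq > math.floor(freq) + 0.5 else math.floor(freq)
    return freq / 10.0

print(adc_to_mhz(0))     # 87.5
print(adc_to_mhz(1023))  # 108.0
print(adc_to_mhz(512))   # roughly the middle of the band
```

Each ADC step therefore moves the tuning by about 0.02 MHz, which the rounding collapses onto the 0.1 MHz FM channel grid.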
JBoss.org Community Documentation

To learn the basics of BeanShell, take a look at the documents and tutorials on their website. A copy of the BeanShell documentation can also be found in the MMAdmin installation directory. Learn how basic scripts can be written and get familiar with some commonly used commands. Note: the BeanShell documentation provided is for version 1.3 and is taken as-is from their website; for the latest changes, refer to the BeanShell website. BeanShell version 2.0b4 is currently used in MMAdmin. Basic knowledge of the Java programming language is required in order to effectively design and develop scripts using MMAdmin. To learn the Java language, find learning resources online. You can learn about the Teiid administrative API either by using the "help()" command or by finding the JavaDoc in the installation directory. MMAdmin is a specialized version of BeanShell with lots of pre-built libraries and scripts designed to be used with the Teiid system. MMAdmin works in two different modes: interactive mode and script-run mode. In interactive mode, the user can invoke the tool, connect to a live Teiid system, and issue ad-hoc commands to control the system, or issue a SQL query against the connected virtual database and view the results. All the commands executed during interactive mode are automatically captured into a log file; more on this later in the document. In script-run mode, the user can execute/play back previously developed scripts. This mode is especially useful for automating testing or performing repeated configuration/migration changes to a Teiid system. To use MMAdmin successfully, there are some syntactical rules you should keep in mind. All commands end with a semi-colon [;]. Commands without any input parameters end with open and close parentheses '()' and a semi-colon at the end. Example: All commands are case sensitive, so enter each command exactly as it is written.
If a command requires input parameter(s), they should be declared inside "(" and ")", and if they are string-based parameters, they should be wrapped in double quotes. A command can have more than one parameter. Example: cd("/home/johndoe"); Any other Java program can be invoked from a script if the required Java class libraries are already on the class path. Import the required classes and instantiate the class to execute. Example: import java.sql.*; clazz = new MyClass(); clazz.doSomething(); Take a look at the variety of commands available in BeanShell to use along with your scripts; some of the common ones are listed in the BeanShell documentation mentioned above. You can write and publish your own scripts. See "How to write your own scripts?" To execute commands in interactive mode, enter a command and press Enter to execute it, then enter the next command, and so on. To exit the tool in interactive mode, first disconnect if you are connected to the Teiid system by executing "disconnect();", then execute "exit();". In script mode, the tool exits automatically when execution of the script finishes; however, you still have to disconnect from the Teiid system in the script. Note: If SSL is turned on in the Teiid server, you will need to supply the correct certificates for the connection. Edit the command file used to execute MMAdmin and make sure the correct trust store is defined in the path. The list below contains some common commands used in MMAdmin. The best way to learn scripting in MMAdmin is to read the scripts in the "samples" directory of the MMAdmin kit's installation directory and experiment with your own scripts using a developer instance of the Teiid system.

print("xxx"); // print something to console
help(); // shows all the available admin commands
connect(); // connect using the connection.properties file
connect(<URL>); // connect to Teiid using the supplied URL
connectAsAdmin(<url>); // connect as admin; no need to have the vdb name.
// SQL commands will not work under this connection
execute(<SQL>); // run any SQL command. Note: in interactive mode you can directly specify SQL on the command line
currentConnectionName(); // returns the current connection name
useConnection(<connection name>); // switches to using the given connection settings
disconnect(); // disconnects the current connection in the context
exit(); // exit the shell

The command below lists all the available administrative API commands in MMAdmin. Please note that none of the BeanShell commands or custom commands will be shown in this list; the documentation is presently the only source for reviewing those commands.

mmadmin $ help();

To get a specific definition of a command and its required input parameters, use the form of help shown below. The example displays the detailed JavaDoc description of the "addVDB" method.

mmadmin $ help("addVDB");
/**
 * Import a {@link VDB} file.
 * <br>A VDB file with internal definitions. This is the default VDB export configuration
 * begining with MetaMatrix version 4.3.</br>
 *
 * @param name VDB Name
 * @param vdbFile byte array of the VDB Archive
 * @param option Code of the AdminOptions to use when executing this method. There are choices about what to do when a connector binding with the given identifier already exists in the system.
 */
VDB addVDB ( String name , String vdbFile , int option )
http://docs.jboss.org/teiid/6.2/admin-guide/en-US/html/getting_started.html
CC-MAIN-2013-20
refinedweb
907
53.92
A lot of people are interested in rootless Podman. This tool lets you build, install, and play with containers without requiring users to run as root, or have a big root-running daemon on their systems. Instead, Podman (by default) stores container images in the user’s home directory. Podman takes advantage of user namespaces in order to do this since most container images have more than one UID in the image. I have explained how this works in previous articles. One issue that will not work, however, is storing these images in an NFS-based home directory. Why doesn’t Podman support storage on NFS? First let me say that, for most use cases, rootless Podman works fine with an NFS volume. The use case that does not work well is having the container image store reside on an NFS mount point. This problem is most easily understood when a user attempts to pull an image or install an RPM package. Let’s examine what happens when a user attempts to install a tarball or RPM package on a filesystem using rootless Podman. For our examples, I will use the user myuser with a UID of 1000, and a UID map setup in /etc/subuid that looks like this: myuser:100000:65536 The result looks like this: $ podman unshare cat /proc/self/uid_map 0 1000 1 1 100000 65536 Now, inside of the container I want to install the httpd package. The httpd package will install files both as root and as the apache user with a UID of 60. This fact means that the container’s root process installing the httpd package on your home directory attempts to run something like this: $ chown 60:60 /var/www/html/index.html When this happens on a local filesystem, the kernel checks two things. First, it checks whether UID 60 and GID 60 are mapped inside of the user namespace. Second, it determines whether the process doing the chowning has the DAC_OVERRIDE capability. Since the process is not running as UID 60, it has to be able to override normal UID/GID permissions. 
The process inside of the container is running as UID 1000 when running as root, when running as UID 60 inside of the container, it is actually uid 100059 on the host. Note that I'm only talking about the user namespace DAC_OVERRIDE, which means that the process inside of the container can OVERRIDE a UID/GID mapped into the user namespace, such as the container. This setup works on all local filesystems because the local kernel can make the decisions. When dealing with NFS, you have to satisfy the local kernel as well as the remote kernel. And in the case of NFS, the remote kernel enforces rules. Look at this issue from the remote NFS server’s kernel’s point of view. The remote kernel sees a process running as UID 1000 (root in the container) trying to chmod a file owned by 1000 to UID 100059 (UID 60 inside of the container). The remote kernel denies this access. The NFS protocol has no concept of user namespaces and has no way to know that the process running as UID 1000 is in one. The NFS server also has no way of knowing that the client process has DAC_OVERRIDE for the user namespace and that UID 100059 is mapped into the same user namespace. In other words, the chance of this information being known by NFS is slim at best. Now, if you have a normal process creating files on an NFS share and not taking advantage of user-namespaced capabilities, everything works fine. The problem comes in when the root process inside the container needs to do something on the NFS share that requires special capability access. In that case, the remote kernel will not know about the capability and will most likely deny access. How can I make NFS work with rootless Podman? There are a couple of ways that you could set up a user’s home directory on an NFS share to use rootless Podman. You could configure the graphroot flag in the ~/.config/containers/storage.conf file to have storage point at a directory that is not on the NFS share. 
For example, change: [storage] driver = "overlay" runroot = "/run/user/1000" graphroot = "/home/myuser/.local/share/containers/storage" to [storage] driver = "overlay" runroot = "/run/user/1000" graphroot = "/var/tmp/myuser/containers/storage This change will cause the images pulled and created within the container to be handled on a different directory, which is outside of the home directory. Another option would be to create a disk image and mount it onto the ~/.local/share/containers directory. You might use a script like this: truncate -s 10g /home/myuser/xfs.img mkfs.xfs -m reflink=1 /home/myuser/xfs.img Then, you could set up fstab on the machines with the home directories to do something like this: $ mount /home/myuser/xfs.img /home/myuser/.local/share/containers Conclusion Rootless and rootfull Podman work great with remote network shares mounted as volumes, including NFS shares. However, rootless Podman out of the box will not work well on NFS home directories because the protocol does not understand user namespaces. Luckily, with minor configuration changes, you can use rootless Podman on an NFS home directory. [New to containers? Download the Containers Primer and learn the basics of Linux containers.]
https://www.redhat.com/sysadmin/rootless-podman-nfs
CC-MAIN-2020-34
refinedweb
906
61.56
Le Café Central de Deva ... Deva blogs!! I changed the way of blogging. Re-designed the site & started using the latest Windows Live Writer 2011!! Additionally added Microsoft Translator gadget available @ top of page, so that you can change the page in your preferred language!! Oops!! I'm quite late to talk about this, but it's worthy one. Do you know about Anywhere access facility and other new features available with Exchange Server 2007? With Exchange Server 2007, employees get anywhere access* to their e-mail, voice mail, calendars, and contacts from a variety of clients and devices. Please find this article to know more? Please find the list of Ports used by Exchange Server: Ports used by Exchange Server Protocol Port Description SMTP TCP: 25 The SMTP service uses TCP port 25. DNS TCP/UDP: 53 DNS listens on port 53. Domain controllers use this port. LSA TCP: 691 The Microsoft Exchange Routing Engine service (RESvc) listens for routing link state information on this port. LDAP TCP/UPD: 389 Lightweight directory access protocol (LDAP) used by Microsoft Active Directory® directory service, Active Directory Connector, and the Microsoft Exchange Server 5.5 directory use this port. LDAP/SSL TCP/UDP: 636 LDAP over Secure Sockets Layer (SSL) uses this port. TCP/UDP: 379 The Site Replication Service (SRS) uses this port. TCP/UDP: 390 This is the recommended alternate port to configure the Exchange Server 5.5 LDAP protocol when Exchange Server 5.5 is running on an Active Directory domain controller. TCP: 3268 Global catalog. The Windows 2000 and Windows Server 2003 Active Directory global catalog (a domain controller "role") listens on TCP port 3268. LDAP/SSLPort TCP: 3269 Global catalog over SSL. Applications that connect to TCP port 3269 of a global catalog server can transmit and receive SSL encrypted data. IMAP4 TCP: 143 Internet Message Access Protocol (IMAP) uses this port. IMAP4/SSL TCP: 993 IMAP4 over SSL uses this port. 
POP3 TCP: 110 Post Office Protocol version 3 (POP3) uses this port. POP3/SSL TCP: 995 POP3 over SSL uses this port. NNTP TCP: 119 Network News Transfer Protocol (NNTP) uses this port. NNTP/SSL TCP: 563 NNTP over SSL uses this port. HTTP TCP: 80 HTTP uses this port. HTTP/SSL TCP: 443 HTTP over SSL uses this port. When we execute the SMTP command, we'll receive a reply from the mail server in the form of a three digit number followed by information describing the reply. For example, 250 OK Please find the list of reply codes from the Server. "\r\n.\r\n." 421 The mail server will be shut down. Save the mail message and try again later. 450 The mailbox that you are trying to reach is busy. Wait a little while and try again. 451 The requested action was not done. Some error occurs. Please find the following steps and SMTP commands to test SMTP communication using Telnet on Port 25. 1. At a command prompt, type telnet, and then press ENTER. This command opens the Telnet session. 2. Type set localecho and then press ENTER. This optional command lets you view the characters as you type them. This setting may be required for some SMTP servers. 3.. 4. Type open mail1.fabrikam.com 25 and then press ENTER. 5. Type EHLO contoso.com and then press ENTER. 6. Type MAIL FROM:chris@contoso.com and then press ENTER. 7.. 8. Type DATA and then press ENTER. You will receive a response that resembles the following 354 Start mail input; end with <CLRF>.<CLRF> 9. Type Subject: Test Sampleand then press ENTER. 10. Press ENTER. RFC 2822 requires a blank line between the Subject: header field and the message body. 11. Type This is a test message and then press ENTER. 12. Press ENTER, type a period ( . ) and then press ENTER. You will receive a response that resembles the following: 250 2.6.0 <GUID> Queued mail for delivery 13. To disconnect from the destination SMTP server, type QUIT and then press ENTER. 
You will receive a response that resembles the following: 221 2.0.0 Service closing transmission channel 14. To close the Telnet session, type quit and then press ENTER... 1: <SCRIPT LANGUAGE="VBScript"> 2: Sub ISMTPOnArrival_OnArrival(ByVal Msg, EventStatus) 3: TextDisclaimer = vbCrLf & "DISCLAIMER:" & vbCrLf & "Sample Disclaimer added in a VBScript." 4: HTMLDisclaimer = "<p></p><p>DISCLAIMER:<br>Sample Disclaimer added in a VBScript." 5: 6: If Msg.HTMLBody <> "" Then 7: 'Search for the "</body>" tag and insert our disclaimer before that tag. 8: pos = InStr(1, Msg.HTMLBody, "</body>", vbTextCompare) 9: szPartI = Left(Msg.HTMLBody, pos - 1) 10: szPartII = Right(Msg.HTMLBody, Len(Msg.HTMLBody) - (pos - 1)) 11: Msg.HTMLBody = szPartI + HTMLDisclaimer + szPartII 12: End If 13: 14: If Msg.TextBody <> "" Then 15: Msg.TextBody = Msg.TextBody & vbCrLf & TextDisclaimer & vbCrLf 16: End If 17: 18: 'Commit the content changes to the transport ADO Stream object. 19: Msg.DataSource.Save ' Commit the changes into the transport Stream 20: EventStatus = cdoRunNextSink 21: End Sub 22: </SCRIPT> Visual Basic 6.0 sample Dim TextDisclaimer As StringDim HTMLDisclaimer As StringImplements IEventIsCacheableImplements CDO.ISMTPOnArrivalPrivate Sub IEventIsCacheable_IsCacheable() 'Just returns S_OK.End SubPrivate Sub Class_Initialize() 'TODO: Replace the sample disclaimer text with your own text. TextDisclaimer = vbCrLf & "DISCLAIMER:" & vbCrLf & "Sample Disclaimer Text." HTMLDisclaimer = "<p></p><p>DISCLAIMER:<br>Sample Disclaimer Text"End SubPrivate EventStatus = cdoRunNextSinkEnd Sub. Microsoft Entourage:mac News Microsoft Office 2004 for Mac 114 update released Tuesday, February 12, 2008. There is no update for Office 2008 at this time. Still a bit early. Updates Today i found Amir's couple of blog posts which has information on Office 2008. How Entourage works How Does Entourage Work? 
Entourage 2008 - New features Entourage 2008 – New Features Part I Part I describes features for all users. Entourage 2008 – New Features Part II In Part II, Amir describes features which are exclusive to Entourage 2008 users in an Exchange organization where they are working with other Outlook users. Advanced: Event sink registration - Best practices It's must to know when we try to register the event sink in Excahge environment. The following guidelines are helpful when you create event registration items: Note Configure Public folder permissions Please find the lists of management tasks that you can perform to configure and maintain public folder permissions: Managing mail-enabled Public Folders Please find the following topics provide instructions for the management tasks that you can perform for mail-enabled public folders. Exchange Server 2007 - Managing Public Folders Please find the following Technet articles which can be useful to manage the public folders. Global events can only be registered in the following folders: For private mailbox stores:{GUID}/Storeevents/GlobalEvents For public folders: Folders/Non_ipm_subtree/StoreEvents{GUID}/GlobalEvents Access Global Schema: To access the global schema of an application or a public store, use The ##SCHEMAURI## Macro. Need to check: Source Code: The following code demonstrates how to register for a private store-wide event: Note: Registering Store Event Sink - Three Sample attached Example 1: The above command example is to register a specific mailbox. To register event sinks to a specific store with the regevent.vbs script, here is an example:cscript regevent.vbs add OnSyncSave test.sink "{9ADEA9EA-3924-401F-9C70-0 3B6B7378C9D}/StoreEvents/GlobalEvents/testsink" -m deepIn the above command, the event sink is actually registered to the system mailbox named SystemMailbox{GUID} in the mailbox store. 
To determine the GUID: How to delete store event sinks If you want to delete the event sink you registered in a particular mailbox, you can perform the following steps: 1. Start the command prompt, and change to the directory where the regevent.vbs is located. 2. Type the command to run regevent.vbs. The following code is one example of the command: cscript regevent.vbs delete "" Step> Start the command prompt, and change to the directory of C:\. Type the command to run the regevent.vbs. The following command is one example: cscript regevent.vbs add "onsave;ondelete" ExOleDB.ScriptEventSink.1 "" -m deep -file "c:\testscript.vbs" Note: I. I came across a tricky situation. One of the customer got the (422) Unprocessable Entity using WebDAV Error, when he tries to access Exchange Server 2007 for the query "SELECT “DAV:displayname” FROM “Inbox” WHERE CONTAINS(*, '"Mike"')"...he is referring that "Use an asterisk (*) to include all properties marked for full-text indexing". The request for working fine for Exchange Server 2003, but its not working on Exchange Server 2007. Found full-text indexing is enabled. Requests without CONTAINS predicate work fine on Exchange 2007. He wondered what is wrong with the request? The issue was quite confuseing...during research i found the update from Glen from one of our MSDN forum...(View the thread) "..." FYI... Microsoft Code snippet to delete objects using WebDAV and C#.Net in Exchange Server Environment // TODO: Replace with the URL of an object in Exchange 2000. string sUri = ""; System.Uri myUri = new System.Uri(sUri); HttpWebRequest HttpWRequest = (HttpWebRequest)WebRequest.Create(myUri); // Set the credentials. // TODO: Replace with the appropriate user credential. NetworkCredential myCred = new NetworkCredential(@"DomainName\UserName", "UserPassword"); CredentialCache myCredentialCache = new CredentialCache(); myCredentialCache.Add(myUri, "Basic", myCred); HttpWRequest.Credentials = myCredentialCache; // Set the headers. 
HttpWRequest.KeepAlive = false; HttpWRequest.Headers.Set("Pragma", "no-cache"); //Set the request timeout to 5 minutes. HttpWRequest.Timeout = 300000; // set the request method HttpWRequest.Method = "DELETE"; // Send the request and get the stream. Stream strm = HttpWResponse.GetResponseStream(); StreamReader sr = new StreamReader(strm); string sText = sr.ReadToEnd(); Console.WriteLine("Response: {0}", sText); // Close the stream. strm.Close(); Note: This code snippet uses the HttpWebRequest class and the HttpWebResponse class in the "System.Net" namespace to delete objects.
http://blogs.msdn.com/b/deva/archive/2008/02.aspx
CC-MAIN-2014-41
refinedweb
1,609
60.51
Automating NoSQL Database Builds A “Python to the Rescue” Story That Never Gets Old That Feeling when your application is so big — and so important — that you just know there’s going to be a dramatic expansion in the storage needs sometime soon. “The ‘Cyclopean’ analysis tool is about to be walloped by another line of business, an acquisition, and a spirited level of organic growth all at the same time. Oh, and the doubling time was just cut in half.” Looks like someone needs to build the new database server farm. Viewed from a distance, the idea of provisioning a NoSQL database server farm in a situation like this seems pretty simple. Allocate servers. Allocate storage. Build the software. Do all the other database configurations required to meet enterprise standards, InfoSec standards, and production support standards. Isn’t this what DevOps is all about? It’s just “Hey! Presto! Database!” Right? When we dig into the details, though, it isn’t quite so simple. And yes, this is going to turn into a “Python to the Rescue” story. What’s Involved? No, Really? The basic outline of provisioning a database server farm requires several resources. The most important resource being patience. These tend to be large servers with A LOT of disk storage. (30Gb to 60Gb RAM.) For some databases — like Cassandra — it seems most sensible to build the nodes serially. In this scenario, the first and last nodes are treated differently from the others, with the first node containing seed information the others need. Therefore, it seems simplest to avoid building database details until all the nodes are built and can start sharing roles, users, and other definitions. This drags out the time to build the next release of the “Cyclopean Database.” Viewed abstractly, we’re going to do the following things to create each individual node of the server farm: 1. Run a Chef recipe from a provisioning server to build each node. 2. Update the domain name. 3. 
Create entries in any configuration management database that’s used to track assets. 4. Schedule backups (if relevant.) Most of this is just API calls; they’re pretty straight-forward, especially in Python 3. (Hold that thought for later) The Chef recipe problem is where the real work shows up. We need to strike an appropriate balance between using simple Chef scripting and maintaining flexibility. Also, the recipes need to have enough parameters so we can avoid tweaking them every time there’s a change in what we’re building. The problem is that the pace of change to the environmental setup is brisk because these aren’t generic installations: they’re customized for our enterprise needs. Yesterday’s enterprise best practice is today’s “barely-good-enough” practice. We don’t want constant tweaking of the Chef recipe. One alternative solution is to gather data dynamically. But, a recipe that gathers data dynamically means we might have trouble recovering a piece of dynamic data in order to rebuild an existing server. Where’s the middle ground? Finding Flexibility Building the Chef recipes forced us to see that there are a large number of parameters that drive Chef. So many that we needed external tools to collect the data and build something useful with it. The parameters fall into a number of buckets: - Things the user must provide. Application names (“Cyclopean”). Estimated sizes (“1Tb”). Their line of business, and their cost center information. Clearly, this drives the database building request. - Things which the enterprise needs to provide. Cloud configuration details, subnets, other corporate-level details. Some of these configuration details change frequently. There are API’s to retrieve some of them. Doing these lookups in a Chef recipe seemed wrong because the recipe becomes dynamic, and loses its idempotency. - Things which database engineers need to define. Naming conventions, storage preferences, sizing calculations, and other environmental details. 
This could be part of the Chef recipe itself. - Things which are unique to each database product. For example, the unique way to define users, roles, privileges, connections to an enterprise LDAP. We need the flexibility to let this vary by line of business. Rather than create many variant recipes, we’d prefer to drive this as part of the overall collection of parameters for the build. - Things which are simple literal values, but will change more often than the recipe will need to change. Version numbers of the distribution kits, for example, will change but need to be parameterized. Chef attributes already offer pleasant ways to handle this. The 15-step attribute precendence table shows how a recipe can provide default attributes, an attribute file can provide values, plus an environment can override the values. Our goal is to minimize tweaking the Chef recipe and experience shows this involves a lot of parameters. For some of our NoSQL database installation recipes, this can be as many as 200 distinct values. Python gives us a handy way to gather data from a variety of sources and build the necessary attributes and configuration options so that a — relatively — stable Chef recipe can be used. (remember where we said in the intro this would become a “Python to the Rescue” story?) The idea is to cache the parameters with the Chef recipe, letting us rebuild any node at any time. Having a static template file gives us the needed idempotency in our provisioning. The next question to address is designing the Python app so it can support repeatable but flexible builds. Naïve Design with Python Dictionaries Let’s talk Python. The parameters for the recipe can be serialized as a big JSON (or YAML) document. Python makes this really easy if we create a dictionary-of-dictionaries structure, this can be serialized as a JSON/YAML file trivially. (It’s a “json.dump(object, file)” level of trivial.) How do we build this dictionary-of-dictionaries? 
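Before answering that, it's worth seeing just how trivial the serialization side is. A minimal sketch (the keys and values here are illustrative, not the real parameter document):

```python
import json

# A nested dictionary-of-dictionaries, shaped like a (made-up) slice
# of the parameter document handed to a Chef recipe.
params = {
    "storage": {
        "devices": [
            {"device_name": "/dev/xvdz", "volume_size": 1024},
        ],
    },
    "subnet_id": "subnet-example",
}

# Serializing really is a one-liner...
text = json.dumps(params, indent=2)
print(text)

# ...and it round-trips cleanly back into the same structure.
assert json.loads(text) == params
```

The hard part, then, is never the dumping; it's assembling `params` in the first place.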
Let’s look at a storage definition as an example. We have some parameters that need to go into our Chef recipe. The details involve some calculations and some literals. We can try this: storage = { 'devices': [ 'device_name': '/dev/xvdz', 'configuration': { 'volume_type': get_volume_type(), 'iops': get_iops(), 'delete_on_termination': True, 'volume_size': get_volume_size(), 'snapshot_id': get_snapshot_id(get_line_of_business()), } ] } Emphasis on the word “try”. This will build a tidy dictionary-of-dictionaries data structure. The details are filled in by functions that acquire and compute values. For consistency, we could even wrap literal values as functions to make all the parameters more uniform: def device_name(): return ‘/dev/xvdz’ The problem is that the functions either have a number of parameters or they’re forced to use global variables. It turns out that there are many external sources of configuration information. Passing them all as parameters is unwieldy; a configuration namespace object would be required on every function. Some of the computations are stateful. For a concrete example, think of a round-robin algorithm to allocate database nodes among data centers and racks: each node’s assignment leads to updating global variables. A function with a side effect like this is a design headache and a unit testing nightmare. Declarative Python How can we provide a better approach to using parameters instead of globals? And how can we have stateful objects that fill in our template? Our answer is to use a declarative style of programming. We can — without doing any substantial work — create a kind of domain-specific language using Python class definitions. The idea is to build lazy objects which will emit values when required. 
Sticking with the storage example, the approach would look like this:

```python
class Storage(Template):
    device_name = Literal("/dev/xvdz")
    configuration = Document(
        volume_size=VolumeSizeItem("/dev/xvdz", Request('size'),
                                   "volume_size", conversion=int),
        snapshot_id=ResourceProfileItem(Request('lob'), Request('env'),
                                        Request('dc'), "Snapshot"),
        delete_on_termination=Literal(True),
        volume_type=ResourceProfileItem(Request('lob'), Request('env'),
                                        Request('dc'), "VolumeType"),
        iops=ResourceProfileItem(Request('lob'), Request('env'),
                                 Request('dc'), "IOPS", conversion=int),
    )
```

For this, the details are created by instances of classes that help build the JSON configuration object, and instances of classes that fill in the items within a configuration object. There's a hierarchy of these classes that provide different kinds of values and calculations. All of them are extensions to a base Item class.

The idea is to build an instance of the Template class that contains all of the complex data that needs to be assembled and exported as a big JSON document for the Chef recipes to consume. The subtlety is that we'd like to preserve the order in which the attributes are presented. It's not a requirement, but it makes it much easier to read the JSON if it matches the Python trivially. Further, we need to extend Python's inheritance model slightly so that each subclass of Template has a concrete list of its own attributes, plus the parent attributes. This, too, makes it easier to debug the output. We're going to tweak the metaclass definition for Template to provide these additional features. It looks like this:

```python
class TemplateMeta(type):
    @classmethod
    def __prepare__(metaclass, name, bases):
        """Changes the internal dictionary to a :class:`bson.SON` object."""
        return SON()

    def __new__(cls, name, bases, kwds):
        """Create a new instance by merging attribute names.

        Sets the ``_attr_order`` to be parent attributes + child attributes.
        """
        local_attr_list = [a_name for a_name in kwds
                           if isinstance(kwds[a_name], Item)]
        parent_attr_list = []
        for b in bases:
            parent_attr_list.extend(b._attr_order)
        for name in local_attr_list:
            if name not in parent_attr_list:
                parent_attr_list.append(name)
        kwds['_attr_order'] = parent_attr_list
        return super(TemplateMeta, cls).__new__(cls, name, bases, kwds)
```

The metaclass replaces the class-level `__dict__` object with a `bson.SON` object. (Yes, we use Mongo a lot.) The SON object preserves key entry order information, much like Python's native OrderedDict. The metaclass definition also builds an additional class-level attribute, `_attr_order`, which provides the complete list of attributes of this subclass of Template and all of its parent classes. The order will always start with the parent attributes first. Note that we don't depend on the parents all providing an `_attr_order` attribute; we actually search each parent class to be sure we've found everything.

The `substitute()` method of a Template collects all of the required data. We could produce the JSON data here, but we prefer to wait until the output is requested.

```python
def substitute(self, sourceContainer, request, **kw):
    self._source = sourceContainer
    self._request = request.copy()
    self._request.update(kw)
    return self
```

The parameters for building out the data come from three places: a sourceContainer which has all of the various configuration files, the initial request that specifies the details of how many nodes for the next release of "Cyclopean", and any keyword overrides that might show up.

The output comes when the template is emitted as a SON object that serializes in JSON notation.

```python
def to_dict(self):
    result = SON()
    for key in self._attr_order:
        item = getattr(self.__class__, key)
        value = item.get(self._source, self._request)
        if value is not None:
            result[key] = value
    return result
```

All of the attribute-filling Item instances have a common `get()` method that does any calculation.
This can also update any internal state for the item. The Template iterates through all of the Items, evaluating `get()`. The `get()` method of each Item object is given the configuration details. Instead of free-floating globals, the Template has a short list of tightly-defined configuration details; these are provided to each individual Item. This avoids relying on a (possibly vague) collection of global variables.

Bonus! Since they're objects, stateful calculations do not include the terrifying technique of updating global variables. State can be encapsulated in the Item instance. Unit testing works out nicely because each Item can be tested in isolation.

This gives us something that's highly testable and not significantly more complex than the naïve design. We can have a stable, simple Chef recipe. All of the lookups and calculations to prepare values for Chef are in our Python application. Specifically, they're isolated in the definition of the Item subclasses and the Templates.

## The Value of Python

There are two reasons why Python works for us:

1. Flexibility.
2. And flexibility.

Firstly, we have the flexibility to modify the JSON documents that are used for Chef provisioning. The Chef scripts are tiresome to debug because each time we test one it takes — seemingly — forever to provision a node. The documents which are the input to the Chef recipe can be defined via unit tests, and it takes under a second to run the test suite after making a change. Each new idea can be examined quickly.

For example, consider a change to the way the enterprise allocated subnets. Yesterday, "Cyclopean" was on one subnet and life was good. Now that it's becoming huge, it has to be moved and the databases split away from the web servers. The specifications for the subnet went from a simple Literal subclass of Item to a complex lookup based on environment and server purpose.
We used to have this:

```python
class SubnetTemplate(Template):
    subnet_id = Literal('name of net')
```

Now we have this:

```python
class Expanded_SubnetTemplate(Template):
    subnet_id = ResourceProfileField(Request('env'), Request('purpose'), 'Subnet')
```

And yet, this change had no impact on the Chef recipe. It added a bunch of details in configuration files, and some additional lookups into those files. Changes we can design and unit test quickly.

Secondly, we have the flexibility to integrate all of the provisioning steps into a single, unified framework. Much of the work is done through RESTful APIs. Using Python 3 and the new urllib makes this relatively simple. Additional libraries for different cloud vendors fit the Python world-view of extension modules to solve unique problems.

We use a Command design pattern for this. Each step in the build process is a subclass of NodeCommand.

```python
class NodeCommand:
    """Abstract superclass for all commands related to building a node."""

    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)

    def __repr__(self):
        return self.__class__.__name__

    def execute(self, configuration, build, node_number):
        """Executes command, returns a dictionary with 'status', 'log'.

        Sources for some parameters::

            build_id = build['_id']
            node = build['nodes'][node_number]

        :param configuration: Global configuration
        :param build: overall :class:`dbbuilder.model.Build` document
        :param node_number: number for the node within the sequence of nodes
        :returns: dictionary with final status information plus any
            additional details created by this command.
        """
        raise NotImplementedError
```

One of the most important subclasses of NodeCommand is ChefCommand, which executes the Chef provisioning script with all of the right parameters. Using multiple Command instances means that we can — via simple imports — wrap a lot of features into a high-level Python script.

And the integration doesn't stop there. The provisioning automation engine is made available via a Flask container.
The import statement lets a Flask container provide the complex batch script capabilities to any internal customer who can write a curl request or a short Python script.

## Python To The Rescue

That Feeling when your application is so big — and so important — that you just know there's going to be a dramatic expansion in the storage needs sometime soon…

We think we've found a way to provide advanced TechOps services directly to the lines of business as code packages, as well as RESTful web services. And we think that Python is an integral part of meeting the business need for building NoSQL databases at scale.

We tried to use Chef directly, but we wanted more flexibility than seemed appropriate for the tool. The idea of constantly tweaking the recipes didn't feel like the best way to have a tool that would reliably recreate any server.

We tried to create Chef parameters using relatively naïve Python, but this led to too many global variables and too many explicit parameters to functions. It became a unit testing nightmare.

After catching our breath, we realized that a declarative style of programming was what we needed. We didn't invent a new DSL; we merely adapted Python's existing syntax to our needs. A simple class structure and a metaclass definition gave us everything we needed to build the configuration parameter files we needed.

Now, we can create vast server farms for the next release of "Cyclopean."

For more on APIs, open source, community events, and developer culture at Capital One, visit DevExchange, our one-stop developer portal.
There's a lot of good advice on how to interview here at the Monastery. Just a quick search turns up: I have, instead, bad advice, more in the vein of How's your Perl? and How's your Perl? (II)

The last time I used these questions, I asked a coworker who sat in on the interview about them. He responded by saying that if someone gets them right, you know they know Perl really well. If someone gets them wrong, however, you can't tell whether they know Perl well or not. I agree, so I'm more or less retiring them here.

How are these two different?

```perl
$self->foo();
foo($self);
```

(Assume for this problem that $self is a blessed object of some kind.)

```perl
($x) = foo();
$x   = foo();
```

```perl
for ( @x ) { foo( $_ ) }
map { foo( $_ ) } @x;
```

What does this output?

```perl
sub baz { return 11 unless shift }
print 'baz(5): ', baz(5), "\n";
print 'baz(0): ', baz(0), "\n";
```

```perl
my $FOO = 'foo';
sub bar { $_[0] = 'bar' }
print $FOO, "\n";
print bar( $FOO ), "\n";
print $FOO, "\n";
```

```perl
for my $s ( qw( a b c ) ) { $s =~ s/$/\n/ && print $s }
```

Yes, I really did ask these questions of a candidate. In my defense, I prefaced my questions by saying that I was not very concerned with whether he got the answers right or not. These questions were meant to be "conversation pieces."

Regarding the last question and different Perl versions: Update: corrected version number.

lodin

---

Maybe you need something even older? I'm quite surprised. I used ActivePerl 5.6.0 back then, but I don't think that the distribution would make any difference. Maybe I'm just mixing this up with something closely related. perl561delta agrees with me though that ...

Could you provide B::Deparse output of your code for 5.6.0 and 5.6.1? B::Deparse had plenty of issues of its own back then, but it will probably be interesting.

---

I'd count that as right, Jenda. I think your answer is correct. My own answer looks at it from the perspective of where you might notice a difference in a real program.
I think your explanation says more about how that difference arises (i.e., how it all works).

---

*if someone gets them wrong, however, you can't tell whether they know Perl well or not*

I'd say the first two are important to know. The next two fall into the "I wouldn't do that, therefore that code is suspect. I'm not sure what it does, but documentation needs to be added to explain what it's doing and/or it needs to be refactored." The last one is just plain unimportant. The code dies quite loudly and reproducibly. A better discussion would be how to fix it.

---

*I wouldn't do that, therefore that code is suspect.*

Those two were the map vs. for and return 11 unless shift...

*The last one is just plain unimportant.*

I'd leave the interview with the impression that you're just trying to show the others on the interview panel how much better you were than the applicants.

---

Mmmm ... and they'd probably be OK with that, because how you answer the answer (or walk out of the room) is part of your answer. An interview happens on many levels -- there's the basic, "Hi, How are you, ..." level, there's the technical level, and there's also the meta-technical level. For me, the meta-technical level is the most interesting -- sure, you know how something clever works, but can you explain it to someone so that they understand? And why was it necessary to do it that way? Can you explain your thought processes out loud as you go, so that your interviewers 'get' how you approach a problem? (In my most recent interview, I proposed a solution to a regex problem, was asked to explain it, started my first sentence, said out loud -- "Wait -- that won't work", paused, then proposed a second, different solution. Apparently, that approach works.)

This meta-behavior also helps them understand how you may well behave when you get stressed out doing too many things -- my response used to be to bark at people (don't get into that habit -- it upsets them), but now I look them in the eye and say "I'm in the middle of an emergency right now -- is your problem more urgent?" and wait for them to explain. I make a point to follow up a few minutes later, once my emergency is over, and deal with their emergency.

I really don't think anyone gets the boot as a result of a code review. It has to be a combination of many factors, all pointing to the breakdown of the employer/employee relationship. There shouldn't be any 'tricks' involved in interviews -- it's an exploration into whether there's the basis for a good relationship, based on mutual compatibility. But if someone has (unwisely) labeled themselves as a 'Perl guru', I guess they should expect a few of these tough questions.

Alex / talexb / Toronto
"Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

---

Did you read the whole post? In the interview, the questions were posed as starting points for discussion, not as a pass/fail trick quiz. And that completely changes how I, for one, react to them. I'd say there are plenty of valid reasons for finding out what a candidate thinks about code like these examples! My reaction to an interview along these lines would be: I'd make my decision on whether it was a place I wanted to work based on how the discussion went, not the questions themselves; and I'd hope the interviewer would judge me likewise.

---

Pass. Though I was not too sure about the "for x map" one.

Update: erm. Oops. Thanks zby, it was not supposed to be there, I just added it when testing and forgot to remove it again.

---

I bet a lot of people would respond that it would print three lines with formatted dates 1970-01-01 with slightly different times.

---

*Regardless of whether the function returns a list or an array...*
Can you show an example of a function that returns an array? (Not an array reference.)

```perl
sub function {
    return 'an array';
}
```

Cheers - L~R

---

*Can you show an example of a function that returns an array?*

While you haven't said so, I'm guessing that what you have on your mind is something like what you said in Re^4: returning tied array, which is to say: You can only return scalars and lists and (nothing) from subroutines. You can't return arrays or hashes directly, only as lists or references.

I suspect this statement is more meaningful to perl programmers than to Perl programmers. I haven't read perlguts, let alone perl source, but I'm guessing that under the covers somewhere, in the sea of C, it's really true that nothing can escape a sub besides a scalar, a list, or Nothing. That having been said, please consider:

```perl
sub get_list  { return ( 4, 5, 6 ); }
sub get_array { @x = ( 4, 5, 6 ); return @x; }
```

get_list and get_array return the same collection of values in different ways. They do different things in scalar context. This is why I think there's a difference between returning a list and returning an array—they behave differently. One might say that they both do the same thing in scalar context—return a scalar (neither list, nor array). I think these points of view are looking at the sub from different sides of the return. This is the difference between "imply" and "infer". It's the difference between what one says and what another hears. It's the difference between expression and interpretation. If I scream "value" into the Void, have I still said something?

I think one could correctly say that sub { @_ } returns an array. It always returns an array regardless of context. Its caller may receive from it a scalar or a list or Nothing, depending on the context it provides, but what the sub itself does is the same every time.

I have in my head a call like this:

```perl
... = get_stuff();
```

Inside get_stuff somewhere there's a return with some "stuff" after it. I'm guessing that perl takes that "stuff" and turns it into something that makes sense to whatever "..." is, and it does that before the "stuff" gets out of get_stuff. So what actually comes out of the sub can only be a scalar, a list, or Nothing.

I could conceive of it being implemented differently. It could pass out of the sub whatever the sub "said" it wanted to return, and then coerce it into the appropriate type once it got there. If that were the case, would we still say that subs can only return scalars, lists, or Nothing? Would they really behave any differently? More to the point, how is any of this distinction relevant to Perl programmers?

---

This reads to me like it is motivated by the all-too-common and deceptively flawed meme of "an array in scalar context gives its size while a list in scalar context returns its last element". I've seen that used to justify so very many flawed conclusions. You can also use it to make flawed justifications to quite a few "correct" conclusions, but that just demonstrates how the flaws in that meme are deceptive (which is probably part of why it is still "all too common").

The subroutine does *not* return an array that then decides to give its size when it finds itself in a scalar context. There are many other ways that thinking "this sub returns an array" will mislead people. So it is better to divorce yourself from both of these flawed ways of thinking about context in Perl.

And, no, my objections are not based on some secret knowledge of how Perl works internally. They are based on repeatedly seeing people make mistakes about how Perl scripts would behave based on these ideas. There are lots of cases where these ideas give the right answer. But the cases where they lead to the wrong answers are surely more important.

- tye

---

*I think one could correctly say that sub { @_ } returns an array. It always returns an array regardless of context.*

Try pushing on to the "array" returned from your sub then:

```perl
sub return_array { @_ }
my $new_count = push(return_array(qw( foo bar )), 'baz');
```

(There's an even subtler trick related to a fundamental truth about the semantics of Perl 5 in that example, which I only realized after I chose it.)

*I'm guessing that under the covers somewhere, in the sea of C, it's really true that nothing can escape a sub besides a scalar, a list, or Nothing.*

The implementation is what it is to support the semantics of Perl-with-an-uppercase-P. I'm not interested in an ontological debate as to which came first, but the internals could change if p5p decided that these language semantics needed to change.

*More to the point, how is any of this distinction relevant to Perl programmers?*

I find that correctly understanding the semantics of a programming language has a positive effect on the quality and correctness of code written in that language.

---

I hate these types of technical questions... I find they are just dumb! NO! There are more important skills than knowing the subtle differences between coding implementations (this is especially true when you have an interpreter to check your work). What makes a good programmer? How about, To me these are harder to evaluate but drastically more important. It is amazing the number of problems you can avoid with a good approach. A deep knowledge of a language's quirks and behaviors will not take you very far. I have had horrible interviews with people that decide that knowing this intimate behavior is a sign of a good programmer. These people also end up writing 600-line while() loops that no one can understand. So I have little respect for individuals who rely upon these sorts of evaluations.

---

Nice questions. Regarding the third question (for and map), I didn't know the answer you gave. However, there's another difference from the one you mentioned.
- sam

---

Rather than just asking the candidate a few questions, I prefer to give them a task and a set time to do it in. The tasks that we use here (obviously I can't tell you what they are) test that he has sufficient clue about data structures, algorithms, and the language. We do *not* expect anyone to finish any of them, but can get a great deal of information by seeing how far they got and how they approached the problem. The code they write can often lead to some quite interesting discussions.

---

If and when you do that, always give the candidate a collection of language reference-texts or free access to a web equivalent. Be sure to emphasize to them that they are free to use any of those sources entirely without penalty. If they feel that they can't do the exercise and instead want to explain to you the approach that they would take, let them do it ... without penalty nor prejudice. A good coder is fine, but a good conceptual designer who can present his or her thoughts and ideas to you in a cohesive and understandable way is infinitely better.

Another approach is to present a candidate with a block of code and ask them to explain, in their own words, what it's doing and perhaps what its data structures look like. Ask them if they might have any comments or suggestions about the code. The code that you select for such a purpose should be the clearest, least obscure code that you can find.

During all the community-college courses I have ever taught, students were allowed to have a hand-prepared "cheat sheet" with them during the exams. They turned in a copy of those cheat sheets with the exam. You could see their depth of understanding from the way in which they prepared that material, and I noticed that the very best sheets were rarely used during the test.

Another important courtesy that I suggest, in these days of e-mail, is to send the candidate a detailed description of exactly what you intend for them to do during the interview. Consider sending them a preliminary e-mailed interview, not from Brain-whoeveritis, asking them to return their responses via reply. I'd have no problem at all telling anyone generally "what they are," since each 'exam' when I actually sent it out would be unique. I'm not trying to test a candidate's ability to react to surprises, and I don't want to re-create grammar school with all of its anxieties.

---

Oh. I think that I, too, would join the "walk out of the room" group. The mere fact that I was being asked such questions would tell me a great deal about the organization, including the fact that I would not want to work there. I've been programming for ... well, for a very long time now ... and "picking up a new language" is frankly the work of a long weekend, at most. The task of understanding an obscure language construct is the work of fifteen minutes on Google ... but the experience that enables me to know not to write such code in the first place, and to know to be repelled by it and to eliminate it (like kudzu) wherever it may be found, has taken ... well, a very long time.

Therefore, I frankly do not want to work for an organization that prizes its developers' knowledge of arcane language-lore. I don't want to have to deal with their code-base or with the rash of avoidable bugs that I know it will contain. I don't want to plunge into a nest of competing egos, because in such a nest there will be neither partnership nor communication. This will not be "a healthy place to work." Instead, it will be a constantly abrasive one that will grind you down, and life's too short for that. The best thing to do with such places (and they are legion...) is to avoid them at all costs.

The questions that you are asked during even the very first stages of the hiring process will tell you a great deal about what that organization values, what qualities it holds in high esteem, and how it defines its worth within the business organization in which it is situated. A company's interview process is a bright window straight into the personality and temperament of a fairly high-level manager whom you may never get to meet. They will reveal the organization's confidence (or lack thereof) in itself, and may illuminate the nature of the political image-battles which the organization fights.

I say again: interview questions are a magic mirror. A workgroup that peppers its candidates with obscure questions lacks confidence in itself, and therefore will lack confidence in you even if your name is Larry Wall. A workgroup that asks how you feel about teamwork and long hours isn't a cohesive and well-managed team, and pays for it with long hours. Per contra, a workgroup that talks about company-paid employee training early in the interview process is probably a well-run group that is on top of its game (as it should be), and confident enough about staying there to pay attention to its members' professional growth and personal well-being.

Your reactions to being asked such questions will likewise tell you a lot about yourself. If you find that you have a visceral negative reaction to it, do not ignore your 'gut,' no matter how badly you (think that you) want the job! It's tough to walk away from an interview, much less a firm offer, especially when you don't have another offer in the wings. But sometimes that's what you have to do. You want to "get to the 'yes,'" but it must be the right 'yes,' and the right one might not be the first one. If you are not satisfied with your job right now ... if it did not turn out to be what you expected it to be ... then unfortunately, you made a poor selection, too.

---

Simply put, testing helps find out if the candidate is fibbing about their skills or not. And yes, a lot of people lie (or exaggerate if you wanna be nice about it).

---

I know that a fair number of candidates are ~~fibbing~~ lying about their past experience.
They have to, as long as gatekeepers filter-out resumes based on the field-names they put into a “skills” array and the numeric value of that field. The best solution that I have found for this phenomenon is to talk about soft skills in the job-requisition, and hope that enough of it makes it through the HR-gauntlet to be useful. Describe what the candidate will be responsible for, not in programming-terms but instead in business-terms. A problem that you will very-frequently encounter is that candidates simply don't have “business” skills to begin with. Nothing in their formal education (that they might well have spent tens of thousands of dollars for) has prepared them for this. So they have studied “wrenches” for years, and they've maybe even torn-apart and rebuilt an engine, but if you make the mistake of asking them an abstract question you get a blank stare. But when I'm interviewing, it's those abstract questions that I want to get answers to. One of my favorites: “In your opinion, what makes a Truly Great piece of software, and why?” Notice that there is no right answer. That throws a lot of people... I wish it didn't, and I don't mean for it to. It's another chapter of the story of folks who get out of school with a perfectly-honed ability to take tests, and no practical knowledge whatever. That's a failing of our educational and training system, not of those people. Let me put it this way: “around here, we don't ‘write programs in Perl.’ Well, that's what we Little-D-Do. What we Big-D-Do is to build solutions to business problems for people who, quite frankly, don't want to give a damm about computers except to use them. We intend for them to find that our solutions are technically flawless (“but of course”) ... and to find that our solutions are great.” I'll omit all mention of what programming languages we are using, if I can. The folks who don't particularly care what language we're using are the ones I want to talk to. 
The ones who know how to design-and-build Great Software... in anything. It can, of course, be problematic to get these things through “the HR gauntlet,” and yet you have to work with these guys and do things their way, because they're the ones who make it their business to keep you from getting sued. “Hell hath no fury like a lover loser-candidate spurned...” I think the value of knowing arcane language-lore is not in its utility when writing but in its utility when debugging and refactoring (i.e., when reading). Yes, you can look up some obscure construct and figure out what it does fairly quickly. On the other hand, some constructs do not appear to be obscure but can have unexpected features anyway (the map vs. for example, for instance). Most (if not all) of the places I've worked have somewhere some old scary code written by someone who wasn't very experienced at the time. I want people who can read it and know what it really does, not just what it looks like it's doing or what the comments say it's doing. Knowing the arcane can also reveal a passion for the subject. Pretty much every conversation of strange constructions or obfuscated code that I've been in has included someone saying, "but writing that would be a bad idea anyway." If I'm talking to a candidate who doesn't say that, it makes me wonder. If I ever talk to a candidate who says, "I'll have to use that feature," that's almost certainly disqual). Yes No A crypto-what? Results (167 votes), past polls
Tax: Recent Dividends questions

I sold my half of a condo in FL to my brother. He is sole owner now. Will I have to pay tax of any kind when I fill out my income tax? I have been claiming 1/2 for over ten years. On income tax, our accountant has used depreciated value. After paying off the mortgage, my half is $40,000.

My elderly dad has been exempt from paying income taxes because his earnings were small. He passed away. He did not have investments and made no interest on his only checking account. Do I still have to pay taxes on basically nothing, and which form would I have to use to file if I have to? The only income was less than 15K SSI. He had no retirement, pension, 401K or IRA. Since he was officially exempt from filing taxes, do I still file anyway?

I have a closely held shareholder that sold stock in his corporation; however, he also was allowed to keep the cash and one vehicle. How should that be handled on the corporate books and on his personal return?

Can I count the cost of sending money from my wife's Peruvian business to the US as a fee to lower taxes?

I recently made an LLC. I am the owner and the only one working right now. I have a person that will join me as a 1099 soon. My question is: should my company be an LLC taxed as an S-corp or a C-corp? Later on my wife will join us also, when needed.

Hello. This year I sold my shares in my family ranch (S-Corp) back to the corporation for $152,000. How will this affect my taxes? This amount is approximately 65% of what the price might be on the market.

I have some questions about forming an LLC. I have some stock investments. In the future, I would like to form an LLC to protect the investment and at the same time to deduct some of the cost and expense for the investment. But I know that there is a rule for personal holding company tax. I wonder: if I set up an LLC but elect pass-through taxation, not S-corporation, would that help to avoid the PHC tax?

An S Corp owner has received more distributions than retained earnings for income tax purposes, which forces the balance sheet to be out of balance because the return (in TurboTax) won't accept a negative retained earnings amount. Where will I report the difference to make my balance sheet (and retained earnings) balance?

I inherited 50k in stocks when my dad died in 2014, which I cashed in shortly after receiving them. I know there is no capital gains tax as they reset at the date for which I got them. However, in Florida, do I have to pay income tax if I cash them in?
MyButton is a simple .NET class for WinForms that helps in setting an image, drawing a rectangle while on focus, and taking care of the text of the button. Recently I was working on a project where I had to use images for buttons and rounded textboxes. Images for a button seemed to be simple, but in my UpdateLayeredWindow-based UI the default button component wasn't enough for all aspects. So I was using a third-party library (Krypton Toolkit). That made my work very simple and easy. At the time of the final version of the project, I thought to reduce the package size, and the only thing I could eliminate was the third-party library. I started developing a custom button and gradually got all the things working as Krypton's button did, per my requirements. I had to do R&D on different sites for different tasks to get it working. So I thought, why not share the whole thing in one place and help others who are looking for the same or similar.

First we will need an image for the button background. You can pick up any image of any shape and/or size. The image I use is:

(button background image)

Add the above image to the project. Create a component class named MyButton. Right-click on your project, Add -> Component, select Component Class, and name it MyButton. You should see the screen with the MyButton.cs file open. Click on "Click here to switch to code view" to switch to code view and actually start coding. Inherit your class from Button rather than the Component class. The initial code should look like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Drawing;

namespace MyComponentsPrj.components // Your namespace can be anything.
{
    public partial class MyButton : Button
    {
        public MyButton()
        {
            InitializeComponent();
        }

        public MyButton(IContainer container)
        {
            container.Add(this);
            InitializeComponent();
        }
    }
}

With the above code, we just have a newly developed component called MyButton of type Button. Yet, it doesn't have any special features. To add special features, first go to the Design mode of the MyButton class and set these properties:

- BackColor: Transparent
- BackgroundImage: the button image added above
- Cursor: Hand
- UseVisualStyleBackColor: False
- FlatStyle: Flat
- FlatAppearance (BorderColor, BorderSize, MouseDownBackColor, MouseOverBackColor): set for a borderless, transparent appearance

Note: If you have a custom color set for any Color property like ForeColor, BorderColor, etc., in Design mode, then that won't be supported. So it is better to define them in the constructor or as a separate method and call the method in both constructors, like I call the below init() in both the constructors:

private void init()
{
    base.ForeColor = Color.FromArgb(0, 255, 254, 255);
    base.FlatAppearance.BorderColor = Color.FromArgb(0, 255, 255, 254); // Transparent border
}

With the above settings, the button will have a background, won't have any border around it, and all basic settings are set. If we don't add FlatAppearance.BorderColor, then if our button has focus and some other application is active, our button will look like this:

Now comes that solid inner border on the button when it has focus by navigating through the Tab key or otherwise. That bothers me. Well, I have two options: either don't support focus itself or have another sort of border. To remove the focus border, I override the following method in my class:

/// <summary>
/// Avoids the inner solid rectangle shown on focus
/// </summary>
protected override bool ShowFocusCues
{
    get { return false; }
}

With the above code added, the button won't show any indication when it has focus. Let's get the indication of focus by showing a dotted border instead of a solid line.
For that, override OnPaint() and implement the below code:

protected override void OnPaint(PaintEventArgs pevent)
{
    base.OnPaint(pevent);

    // We want to draw the dotted border only when the button has focus, not otherwise
    if (base.ContainsFocus)
    {
        // Draw inner dotted rectangle when button is on focus
        Pen pen = new Pen(Color.Gray, 10);
        Point p = base.Location;
        pen.DashStyle = System.Drawing.Drawing2D.DashStyle.Dot;
        Rectangle rectangle = new Rectangle(4, 4, Size.Width - 8, Size.Height - 8);
        ControlPaint.DrawFocusRectangle(pevent.Graphics, rectangle);
        pen.Dispose();
    }
}

Hmmm, looks cool. Now we can also manage the focus issue and get an excellent inner dotted border when on focus. But one more thing is bothering me - when I click the button, the text turns to a gray shade. This is Windows style; there is no default property to set/change it. I want the ForeColor to be as it is and not change to gray or any other color. Different PCs with different themes may also affect this. I want my button to behave the same on each and every PC. Let's look at how to handle that issue.

To achieve the goal, we will have to print the text manually and set the Text property to "". Create a property displayText which will be used in place of the Text property. Create a member variable textColor of type Color.

private Color textColor = Color.White;

/// <summary>
/// Text used to display the contents rather than using the Text property.
/// </summary>
public string displayText { get; set; }

Add the following code at the bottom of OnPaint():

// Draw the string to screen
// Find the size of the text
SizeF sf = pevent.Graphics.MeasureString(displayText, this.Font, this.Width);
Point ThePoint = new Point();

// Get the center location to print the text
ThePoint.X = (int)((this.Width / 2) - (sf.Width / 2));
ThePoint.Y = (int)((this.Height / 2) - (sf.Height / 2));

// Draw the text
//pevent.Graphics.DrawString(displayText, Font, new SolidBrush(textColor), ThePoint);
TextRenderer.DrawText(pevent.Graphics, displayText, Font, ThePoint, textColor);
this.Text = "";

Add a ForeColorChanged event handler and add the code:

/// <summary>
/// Whenever ForeColor is changed, the Alpha for the textColor
/// will also be changed and thus updated.
/// </summary>
private void MyButton_ForeColorChanged(object sender, EventArgs e)
{
    this.textColor = Color.FromArgb(255, ForeColor.R, ForeColor.G, ForeColor.B);
}

Wondering what textColor is doing and why it is used? Remember, I said earlier that custom colors are not completely supported when set in Design mode. See the difference between the base.ForeColor and textColor settings: in base.ForeColor the alpha (opacity) is set to 0, whereas in textColor the alpha is set to 255. If you use just ForeColor to print the text, then you might not see anything. Hence we need to change the opacity. With the above event handler, whatever the ForeColor is set to, our text will be printed properly without any doubt.

Why have I commented out DrawString and used DrawText? With DrawString, there was no spacing in between characters of the text. Using DrawText solved it. This may be due to the font and/or system. The font that I have used is "Arial MT Bold". With the same settings on another system, DrawString had no issues, whereas on my system I had. So you can use whichever suits you.

With this, the MyButton button component is complete.
Rebuild your project and it is ready for use from the Toolbox.

How to use it? Open a Form, drag the MyButton from the Toolbox. If required, change properties; otherwise just set the text to display via the displayText property. Save it and execute your project. You can see the look and feel of the button remains the same whether clicked or not. The dotted-line inner border appears when it has focus. We got rid of all the other default borders.

You can see how a single and easily developed component helped us override the default Windows style for the button and get the look and feel we wanted for our UI. The files for the MyButton class are included in the zip. Enjoy playing.
BBC micro:bit Bluetooth Mouse & Keyboard

Introduction

The built-in Bluetooth does not currently work with MicroPython. There are some Bluetooth breakout boards that can be used for this though. This project uses Adafruit's Bluefruit EZ-Key HID breakout to send keyboard and mouse commands over Bluetooth. You will need a Bluetooth-enabled PC or a Bluetooth dongle to test this. This isn't a cheap component. It costs £15-20. It has a lot of features, is easy to communicate with and tolerates a range of different voltages for input. It also works without a micro-controller with built-in, reprogrammable input pins. I had one of these already and wanted to see if I could get the UART functions of the micro:bit working.

Circuit

Not much to connect here. Plug in your micro:bit now. Press the button on the Bluefruit and pair it with your computer.

Programming - Keyboard

You can send strings of characters and they will get sent to the computer as keyboard input.

from microbit import *

uart.init(baudrate=9600, bits=8, parity=None, stop=1, tx=pin0)

while True:
    if button_a.was_pressed():
        uart.write('Button A was pressed')
    elif button_b.was_pressed():
        uart.write('Button B was pressed')
    sleep(50)

There are non-printing characters. This program sends backspace and enter key presses to the PC when the A and B buttons are pressed.

from microbit import *

uart.init(baudrate=9600, bits=8, parity=None, stop=1, tx=pin0)

while True:
    if button_a.was_pressed():
        uart.write(bytes([0x08]))  # backspace
    elif button_b.was_pressed():
        uart.write(bytes([0x0d]))  # enter
    sleep(50)

Here is a table of the other characters that you can send.

Programming - Mouse

This isn't so easy to get right with the accelerometer - an analogue joystick works a lot better for this.
It shows you how to send mouse commands though.

from microbit import *

def MouseCommand(buttons, mousex, mousey):
    # dx/dy travel as single bytes, so mask to 0-255; negative deltas
    # become their two's-complement byte values (bytes() would otherwise
    # raise a ValueError for negative numbers)
    d = bytes([
        0xFD, 0x00, 0x03,
        buttons, mousex & 0xFF, mousey & 0xFF,
        0x00, 0x00, 0x00
    ])
    uart.write(d)
    return

uart.init(baudrate=9600, bits=8, parity=None, stop=1, tx=pin0)

while True:
    b = 0
    dx = 0
    dy = 0
    if button_a.was_pressed():
        b = 0x01
    elif button_b.was_pressed():
        b = 0x02
    if abs(accelerometer.get_x()) > 200:
        dx = accelerometer.get_x() // 40
    if abs(accelerometer.get_y()) > 200:
        dy = accelerometer.get_y() // 40
    MouseCommand(b, dx, dy)
    sleep(10)

Challenges

- Make a touch controller for a game. You have 3 touch pins, enough for left-right-fire or left-right-jump.
- You can send the keyboard signals when you like. Sending random text to a sibling's PC at random intervals is quite possible.
- You can use this with phones too. Perhaps you might make a remote control. Search for Adafruit's guide to the product and you'll be able to work out how to send the multimedia commands.
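One detail of the mouse report used above is worth spelling out: the dx/dy fields are single bytes, so negative movement deltas have to be packed as two's-complement values (masking with 0xFF does this; that the host decodes 0x80-0xFF as negative is an assumption about the Bluefruit's HID report format). A quick host-side check of the encoding:

```python
def encode_delta(delta):
    """Pack a signed mouse delta (-128..127) into one unsigned byte."""
    if not -128 <= delta <= 127:
        raise ValueError("delta out of range for a single byte")
    return delta & 0xFF  # two's-complement byte value

def decode_delta(byte_value):
    """Recover the signed delta from the unsigned byte."""
    return byte_value - 256 if byte_value >= 128 else byte_value

# A small round-trip check
for d in (-128, -5, 0, 7, 127):
    assert decode_delta(encode_delta(d)) == d

print(encode_delta(-5))  # 251 == 0xFB
```

This is why the `& 0xFF` mask matters when feeding accelerometer readings straight into `bytes()`.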
I need help writing a program with this loop. I do not need help with the math part of it. The question is: A perfect # is a # which is the sum of all its divisors except itself. Six is the first perfect #; the only #s which divide 6 evenly are 1, 2, 3, 6 and 6 = 1+2+3. An abundant # is one which is less than the sum of its divisors (12 < 1+2+3+4+6); a deficient # is greater than the sum of its divisors (9 > 1+3). Write a complete 'C' program which classifies each of the first N integers (where N is entered by the user) as either perfect, abundant, or deficient. The output should be formatted so that the program generates a table. Here is what I have so far...

#include <stdio.h>

int main()
{
    int count, abundant, perfect, deficient, sum = 0, num;

    printf("Enter a number\n");
    scanf("%d", &num);

    printf("ABUNDANT PERFECT DEFICIENT\n");
    printf("-------- ------- ---------\n");

    count = 1;
    while (count <= 20)
    {
        printf("%d ", count);
        ++count;
    }

    for (count = 1; count <= 20; count++)
    {
        sum = 0;

I cannot figure out what to do after this. I am stuck. Please help in any way possible. I am not asking for anyone to write the program for me.

Thanks in advance,
Kristina
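For reference (the assignment asks for C, so this is only the idea, not a drop-in answer): the classification boils down to comparing each number against the sum of its proper divisors. A language-agnostic sketch, here in Python:

```python
def classify(n):
    """Classify n by comparing it with the sum of its proper divisors."""
    divisor_sum = sum(d for d in range(1, n) if n % d == 0)
    if divisor_sum == n:
        return "perfect"
    elif divisor_sum > n:
        return "abundant"
    else:
        return "deficient"

# The examples from the question
print(classify(6))   # perfect   (1+2+3 == 6)
print(classify(12))  # abundant  (1+2+3+4+6 == 16 > 12)
print(classify(9))   # deficient (1+3 == 4 < 9)
```

The inner divisor-sum loop is the piece that goes inside the `for (count = 1; ...)` body, followed by the three-way comparison against `count`.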
In Bitstamp's API documentation, a client_id is mentioned. It is used to generate a signature. But I couldn't find how I can get that client_id. Can someone tell me where I can find it?

You can use this Ruby code:

require 'open-uri'
require 'json'
require 'base64'
require 'openssl'
require 'hmac-sha2'
require 'net/http'
require 'net/https'
require 'uri'

def bitstamp_private_request(method, attrs = {})
  secret = "xxx"
  key = "xxx"
  client_id = "xxx"
  nonce = nonce_generator
  message = nonce + client_id + key
  signature = HMAC::SHA256.hexdigest(secret, message).upcase
  url = URI.parse("#{method}/") # NOTE: prepend Bitstamp's API base URL here; it was lost in the original post
  http = Net::HTTP.new(url.host, url.port)
  http.use_ssl = true
  data = { nonce: nonce, key: key, signature: signature }
  data.merge!(attrs)
  data = data.map { |k, v| "#{k}=#{v}" }.join('&')
  headers = { 'Content-Type' => 'application/x-www-form-urlencoded' }
  resp = http.post(url.path, data, headers)
  resp.body
end

def nonce_generator
  (Time.now.to_f * 1000).to_i.to_s
end
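The same signing scheme sketched in Python for comparison (assumption, mirroring the Ruby above: the message is nonce + client_id + api_key, signed with HMAC-SHA256 using your API secret and upper-cased; the client_id itself is reportedly the numeric customer ID shown in your Bitstamp account settings):

```python
import hashlib
import hmac

def bitstamp_signature(secret, nonce, client_id, api_key):
    """Build the HMAC-SHA256 signature scheme shown in the Ruby answer."""
    message = (nonce + client_id + api_key).encode("utf-8")
    return hmac.new(secret.encode("utf-8"), message, hashlib.sha256).hexdigest().upper()

sig = bitstamp_signature("topsecret", "1562000000000", "123456", "mykey")
print(len(sig))            # 64 hex characters (SHA-256)
print(sig == sig.upper())  # True: the API wants upper-case hex
```

The credential values here are placeholders; substitute your own secret, key, and client_id.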
import "github.com/docker/notary/trustpinning"

MatchCNToGun checks that the common name in a cert is valid for the given gun. This allows wildcards as suffixes, e.g. `namespace/*`.

func ValidateRoot(prevRoot *data.SignedRoot, root *data.Signed, gun data.GUN, trustPinning TrustPinConfig) (*data.SignedRoot, error)

ValidateRoot receives a new root, validates its correctness and attempts to do root key rotation if needed.

First we check if we have any trusted certificates for a particular GUN in a previous root, if we have one. If the previous root is not nil and we find certificates for this GUN, we've already seen this repository before, and have a list of trusted certificates for it. In this case, we use this list of certificates to attempt to validate this root file. If the previous validation succeeds, we check the integrity of the root by making sure that it is validated by itself. This means that we will attempt to validate the root data with the certificates that are included in the root keys themselves.

However, if we do not have any current trusted certificates for this GUN, we check if there are any pinned certificates specified in the trust_pinning section of the notary client config. If this section specifies a Certs section with this GUN, we attempt to validate that the certificates present in the downloaded root file match the pinned ID. If the Certs section is empty for this GUN, we check if the trust_pinning section specifies a CA section for this GUN in the config. If so, we check that the specified CA is valid and has signed a certificate included in the downloaded root file. The specified CA can be a prefix for this GUN.

If both the Certs and CA configs do not match this GUN, we fall back to the TOFU section in the config: if true, we trust certificates specified in the root for this GUN. If later we see a different certificate for that GUN, we return an ErrValidationFailed error.
Note that since we only allow trust data to be downloaded over an HTTPS channel, we are using the current public PKI to validate the first download of the certificate, adding an extra layer of security over the normal (SSH style) trust model. We shall call this: TOFUS.

Validation failure at any step will result in an ErrValidationFailed error.

type CertChecker func(leafCert *x509.Certificate, intCerts []*x509.Certificate) bool

CertChecker is a function type that will be used to check leaf certs against pinned trust.

func NewTrustPinChecker(trustPinConfig TrustPinConfig, gun data.GUN, firstBootstrap bool) (CertChecker, error)

NewTrustPinChecker returns a new certChecker function from a TrustPinConfig for a GUN.

ErrRootRotationFail is returned when we fail to do a full root key rotation by either failing to add the new root certificate, or delete the old ones.

func (err ErrRootRotationFail) Error() string

ErrValidationFail is returned when there is no valid trusted certificates being served inside of the roots.json.

func (err ErrValidationFail) Error() string

type TrustPinConfig struct {
    // CA maps a GUN prefix to file paths containing the root CA.
    // This file can contain multiple root certificates, bundled in separate PEM blocks.
    CA map[string]string
    // Certs maps a GUN to a list of certificate IDs
    Certs map[string][]string
    // DisableTOFU, when true, disables "Trust On First Use" of new key data
    // This is false by default, which means new key data will always be trusted the first time it is seen.
    DisableTOFU bool
}

TrustPinConfig represents the configuration under the trust_pinning section of the config file. This struct represents the preferred way to bootstrap trust for this repository. This is fully optional.
If left at the default, uninitialized value, Notary will use TOFU over HTTPS. You can use this to provide certificates or a CA to pin to as a root of trust for a GUN. These are used with the following precedence:

1. Certs
2. CA
3. TOFUS (TOFU over HTTPS)

Only one trust pinning option will be used to validate a particular GUN.

Package trustpinning imports 8 packages and is imported by 186 packages. Updated 2018-02-15.
#include "clang/Frontend/Utils.h"
#include "clang/Basic/FileManager.h"
#include "clang/Basic/SourceManager.h"
#include "clang/Frontend/DependencyOutputOptions.h"
#include "clang/Frontend/FrontendDiagnostic.h"
#include "clang/Lex/DirectoryLookup.h"
#include "clang/Lex/ModuleMap.h"
#include "clang/Lex/PPCallbacks.h"
#include "clang/Lex/Preprocessor.h"
#include "clang/Serialization/ASTReader.h"
#include "llvm/ADT/StringSet.h"
#include "llvm/ADT/StringSwitch.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/raw_ostream.h"

Definition at line 125 of file DependencyFile.cpp. Referenced by clang::DependencyFileGenerator::AttachToASTReader(), and clang::DependencyCollector::sawDependency().

Print the filename, with escaping or quoting that accommodates the three most likely tools that use dependency files: GNU Make, BSD Make, and NMake/Jom.

BSD Make is the simplest case: it does no escaping at all. This means characters that are normally delimiters, i.e. space and # (the comment character), simply aren't supported in filenames.

GNU Make does allow space and # in filenames, but to avoid being treated as a delimiter or comment, these must be escaped with a backslash. Because backslash is itself the escape character, if a backslash appears in a filename, it should be escaped as well. (As a special case, $ is escaped as $$, which is the normal Make way to handle the $ character.) For compatibility with BSD Make and historical practice, if GNU Make un-escapes characters in a filename but doesn't find a match, it will retry with the unmodified original string.

GCC tries to accommodate both Make formats by escaping any space or # characters in the original filename, but not escaping backslashes.
The apparent intent is so that filenames with backslashes will be handled correctly by BSD Make, and by GNU Make in its fallback mode of using the unmodified original string; filenames with # or space characters aren't supported by BSD Make at all, but will be handled correctly by GNU Make due to the escaping.

A corner case that GCC gets only partly right is when the original filename has a backslash immediately followed by space or #. GNU Make would expect this backslash to be escaped; however GCC escapes the original backslash only when followed by space, not #. It will therefore take a dependency from a directive such as #include "a\ b\#c.h" and emit it as a\\ b\#c.h, which GNU Make will interpret as a\ b\ followed by a comment. Failing to find this file, it will fall back to the original string, which probably doesn't exist either; in any case it won't find a\ b#c.h, which is the actual filename specified by the include directive.

Clang does what GCC does, rather than what GNU Make expects.

NMake/Jom has a different set of scary characters, but wraps filespecs in double-quotes to avoid misinterpreting them; see Microsoft's documentation for NMake info and for Windows file-naming info.

Definition at line 400 of file DependencyFile.cpp. References clang::Make, clang::NMake, and clang::Target.
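The GCC-style behavior described above (escape space and # with a backslash, write $ as $$, leave backslashes alone) is easy to model; a sketch of the rule as the text states it, not the actual clang implementation:

```python
def gcc_escape_filename(name):
    """Escape a dependency-file name the way the text describes GCC doing it:
    space and '#' get a backslash, '$' becomes '$$', backslashes are untouched."""
    out = []
    for ch in name:
        if ch in (' ', '#'):
            out.append('\\' + ch)
        elif ch == '$':
            out.append('$$')
        else:
            out.append(ch)
    return ''.join(out)

print(gcc_escape_filename('a b#c$d.h'))  # a\ b\#c$$d.h
```

Note how a backslash passes through unchanged, which is exactly what produces the `a\ b\#c.h` corner case discussed above.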
Create new Many2One records in @api.onchange method (2nd try)

Hello, I am new to Odoo and I struggle with the new Odoo API. I am trying to create new records on a One2many field in an @api.onchange method. It is a custom module, so I will simplify by providing a made-up example. Say that I am building a new Order that has Lines. The lines are defined via a One2many field called 'line':

class Order(models.Model):
    _name = "mymodule.order"

    line = fields.One2many('mymodule.line', inverse_name="order_id", string="Lines")
    partner_id = fields.....
    .....

I want to generate new lines as soon as the user selects or changes the partner_id of the order. To do so, I have implemented an @api.onchange method that looks somewhat like this:

@api.onchange('partner_id')
def onchange_equipment_type_id(self):
    """ Updates the order lines """
    if self.line:
        self.line.unlink()
    for val in self.bogus_values:  # some bogus list that supposedly tells me how many lines I need to create
        self.line.create({'name': val.name, 'order_id': self.id, ....})

The above approach does not work. First of all, as far as I understand, create will actually write records to the DB. I do not want that. I simply want to add some order lines as if I clicked a few times on 'Add an item' in a table. The actual creation of the lines shall happen when the user clicks the Save button on the order. So what method shall I use? How can I 'add' records to the line field without writing to the DB at this point?

The second problem that I face is that self.id is NewId. Now, as per my understanding, this is normal for new records. But I actually get self.id as NewId even for entries that have been saved and opened in edit mode. Anyway, for the case when the order has not been saved yet, I still need to add lines.
So my major issue is how to add lines to my One2many field as part of my onchange method without triggering a DB write. Let me reiterate again - I want to create the lines not as part of the 'parent' object's create method. I want to do this as an onchange event. I want to generate the lines as soon as I change the partner. Imagine that for each partner there is a predefined set of lines, for example based on favorite products. So as soon as one changes the partner id, a new list of favorite products will be retrieved (for the selected partner), the existing order lines must be deleted, and new lines (for the new product set) shall be generated. All this shall happen prior to the user hitting the Save button on the order object. Actually, the user can change partners several times prior to saving the order. I hope that my question makes some sense. Thank you in advance for your help.

Hi, I solved this problem some time ago in the following manner:

@api.onchange('partner_id')
def _onchange_equipment(self):
    lines = []
    for val in self.bogus_values:
        line_item = {
            'attr1': val.name,
            'attr2': val.a1,
            ...
        }
        lines += [line_item]
    self.update({'line': lines})

In my very humble opinion, this works better than the proposal of Stefano (at least for my scenario). After all, the user can change the partner many times prior to saving the order. The create will probably insert new records in the DB. We do not want this to happen each time the user changes the partner. Stefano, please correct me if I am wrong, and thank you for your response.

Do not use create(). Use new() instead. It's also more object-oriented than update(). In your case, replace

self.line.create({'name': val.name, 'order_id': self.id, ....})

with

self.line |= self.line.new({'name': val.name, ....})

It should be documented IMHO.
I opened a bug.

Try this, it will work:

@api.onchange('partner_id')
def onchange_equipment_type_id(self):
    """ Updates the order lines """
    for val in self.bogus_values:  # some bogus list that supposedly tells me how many lines I need to create
        self.line = [(0, 0, {'name': val.name, 'order_id': self.id, ....})]

I need something similar. I have to fill the BoM list on selecting a product.

@api.onchange('product_id')
def onchange_product_id(self):
    """ Updates the BOM lines """
    if self.line:
        for val in self.bom_template:  # value on a BOM template that contains products
            self.line.create({'name': val.name, 'order_id': self.id, ....})
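For context on the last answer: assigning a list of triples to a one2many field uses Odoo's x2many command convention, where (0, 0, vals) stands for "create a new line" and (5, 0, 0) for "remove all existing lines" (assumption: the standard Odoo command codes; the helper below only mimics how such a command list is built, it is not Odoo code):

```python
def build_line_commands(values, clear_existing=True):
    """Build an x2many command list: optionally clear, then create one line per value dict."""
    commands = [(5, 0, 0)] if clear_existing else []  # (5, 0, 0) == unlink all existing lines
    for vals in values:
        commands.append((0, 0, vals))  # (0, 0, vals) == create a new record with vals
    return commands

cmds = build_line_commands([{'name': 'line A'}, {'name': 'line B'}])
print(cmds[0])     # (5, 0, 0) - clear existing lines first
print(cmds[1][2])  # {'name': 'line A'}
```

Assigning such a list in an onchange only stages the lines in the client; they are persisted when the record is saved, which is exactly the behavior the question asks for.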
https://www.odoo.com/forum/help-1/question/create-new-many2one-records-in-api-onchange-method-2nd-try-70212
Sphinx is one of the most well-known documentation generators out there, and we can also customize Sphinx to match the needs of the yaydoc automatic documentation generator we are building at FOSSASIA. Sphinx comes with lots of themes, and you can also create your own theme. This blog will guide you on how to set up your own custom theme and how to make use of the sphinx-quickstart tool that allows you to create a boilerplate in a few seconds.

In yaydoc, we have a feature of generating documentation from markdown, so what you have to do is modify conf.py to generate documentation from markdown. I initially modified the bash script to add the necessary parser to conf.py, but my co-contributor came up with a better idea of solving the problem: creating a template file and specifying the path of the template files to sphinx-quickstart using the '-t' flag. Below are the steps on how you can create your own Sphinx template.

The commands for initializing the basic template are as follows:

pip install sphinx
sphinx-quickstart

After completing the above step, it'll ask you a series of questions. Your basic template will be created, but you can customize the generated files by providing your own custom templates and asking sphinx-quickstart to generate a boilerplate from our customized template. Sphinx uses Jinja for templating. To know more about Jinja, check this link.

Let's start creating our own customized template. The basic files needed to create a new Sphinx template are as follows:

- Makefile.new_t
- Makefile_t
- conf.py_t
- make.bat.new_t
- make.bat_t
- master_doc.rst_t

conf.py_t contains all the configuration for documentation generation. Let's say you have to generate documentation from a markdown file; then you will have to add the recommonmark parser. Instead of adding the parser after boilerplate generation, you can simply add it in the template beforehand:

from recommonmark.parser import CommonMarkParser

With the help of Jinja templating we can create a boilerplate according to our business logic.
For example, if you want to hard-code the copyright, you can do it simply by changing conf.py_t.

master_doc.rst_t contains the default index page generated by Sphinx. You can also edit that according to your needs. The remaining files are the basic makefiles for Sphinx; there is no need to alter them. You can see the example snippets in the yaydoc repository.

After you are done with your templating, you can generate the boilerplate using the -t flag by specifying the template folder:

sphinx-quickstart -t <template folder path>
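To make the idea concrete, a conf.py_t fragment along these lines could bake the markdown parser and a hard-coded copyright into every generated project. This is only a sketch: the Jinja placeholder and the source_parsers hook reflect the recommonmark setup of that era, and the concrete values shown are invented for illustration.

```python
# conf.py_t -- illustrative template fragment (values are placeholders)
from recommonmark.parser import CommonMarkParser

project = '{{ project }}'        # filled in by sphinx-quickstart
copyright = '2017, FOSSASIA'     # hard-coded instead of '{{ copyright }}'

# accept .md sources alongside .rst
source_parsers = {'.md': CommonMarkParser}
source_suffix = ['.rst', '.md']
```

Running sphinx-quickstart -t against the folder containing this file would then emit a conf.py with the markdown support already wired in.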
http://blog.fossasia.org/creating-a-custom-theme-template-in-sphinx-for-the-yaydoc-automatic-documentation-generator/
AutoMapper

AutoMapper is a powerful object-to-object mapper that is able to do smart and complex mappings between objects. You can also modify existing mappings and define your own. Although it is possible to write far faster lightweight mappers, AutoMapper still offers very good performance considering all the nice features it provides. What's most important – AutoMapper is easy to use, and it fits perfectly into the context of this posting.

Why mapping?

To those who have no idea about the problem scope, I will explain a little bit why mapping between domain classes and application-specific models or DTOs is needed. Often domain classes have complex dependencies between each other, and they may also have complex dependencies with their technical environment. Domain classes may have cyclical references that make it very hard to serialize them to text-based formats. And domain classes may be hard to create. For example, if you are using a powerful vendor-offered grid component, then this component may want to serialize its data source so it can use it on the client side to provide quick sorting and filtering of grid data. Moving from server to client means serialization to JSON or XML. If our domain objects have cyclical references (and it is normal that they have), then we are in trouble. We have to use something lighter and less powerful, so we use DTOs and models. If you go through all the complexities mentioned before, you will find more issues with using domain classes as models. As we have to use lightweight models, we need mappings between domain classes and models.

Base mapper

Instead of writing a mapper for each type-pair mapping, you can avoid writing many lines of repeating code by using AutoMapper. Here is my base class for mappers.
public abstract class BaseMapper<T, U>
    where T : BaseEntity
    where U : BaseDto, new()
{
    protected IMappingExpression<U, T> DtoToDomainMapping { get; private set; }
    protected IMappingExpression<T, U> DomainToDtoMapping { get; private set; }

    public BaseMapper()
    {
        DomainToDtoMapping = Mapper.CreateMap<T, U>();

        var mex = Mapper.CreateMap<U, T>()
            .ForMember(m => m.Id, m => m.Ignore());

        var refProperties = from p in typeof(T).GetProperties()
                            where p.PropertyType.BaseType == typeof(BaseEntity)
                            select p;

        foreach (var prop in refProperties)
        {
            mex.ForMember(prop.Name, m => m.Ignore());
        }

        Mapper.CreateMap<PagedResult<T>, PagedResult<U>>()
            .ForMember(m => m.Results, m => m.Ignore());
    }

    public U MapToDto(T instance)
    {
        if (instance == null)
            return null;

        var dto = new U();
        Mapper.Map(instance, dto);
        return dto;
    }

    public IList<U> MapToDtoList(IList<T> list)
    {
        if (list == null)
            return new List<U>();

        var dtoList = new List<U>();
        Mapper.Map(list, dtoList);
        return dtoList;
    }

    public PagedResult<U> MapToDtoPagedResult(PagedResult<T> pagedResult)
    {
        if (pagedResult == null)
            return null;

        var dtoResult = new PagedResult<U>();
        Mapper.Map(pagedResult, dtoResult);
        Mapper.Map(pagedResult.Results, dtoResult.Results);
        return dtoResult;
    }

    public void MapFromDto(U dto, T instance)
    {
        Mapper.Map(dto, instance);
    }
}

It does all the dirty work, and in most cases it provides all the functionality I need for type-pair mapping. In the constructor I define mappings for domain class to model and model to domain class. I also define the mapping for PagedResult – this is the class I use for paged results. If inheriting classes need to modify mappings, they can access the protected properties. Also notice how I play with the domain base class: the code avoids situations where AutoMapper may overwrite IDs and properties that extend the domain base class. When you start using mapping, you very soon find out what a bad mess AutoMapper can create if you don't use it carefully.
Methods of mapper base:

- MapToDto – takes a domain object and returns the mapped DTO.
- MapToDtoList – takes a list of domain objects and returns a list of DTOs.
- MapToDtoPagedResult – takes a paged result with domain objects and returns a paged result with DTOs.
- MapFromDto – maps DTO properties to a domain object.

If you need more mapping helpers, you can upgrade my class with your own code.

Example

To give you a better idea about how to extend my base class, here is an example.

public class FillLevelMapper : BaseMapper<FillLevel, FillLevelDto>
{
    public FillLevelMapper()
    {
        DomainToDtoMapping.ForMember(
            l => l.Grade,
            m => m.MapFrom(l => l.Grade.GradeNo)
        );
    }
}

Mapper classes extend from BaseMapper and add their specifics to mappings that the base mapper doesn't provide.

Conclusion

Mapping is also one of the repeating patterns in many systems. After building some mappers from zero, you start recognizing the parts they have in common. I was able to separate the common operations of my mappers into a base class using generics and AutoMapper. The mapper classes are very thin and therefore also way easier to test. AutoMapper does a lot of dirty work for me that is otherwise time-consuming to code. Of course, for all its power, you must use AutoMapper carefully so it doesn't do too much work.

Comments (5)

Needs some more usage examples, like perhaps show a sample MVC controller using your mapping.

AutoMapping saving time is a myth!

Great article. I'm looking at creating a version of this so that I can initialize all BaseMapper types into IoC. One thing I did notice is that DtoToDomainMapping is never set. After the line

foreach (var prop in refProperties) { mex.ForMember(prop.Name, m => m.Ignore()); }

DtoToDomainMapping = mex is required to set DtoToDomainMapping.

I'm wondering, will this still work with the current version of AutoMapper?

It should work with current AutoMapper too. As Mapper.Map() is deprecated and removed from AutoMapper, we must use mapping profiles.
It's possible to use this class as a base class for mapping profiles. We have to extend it from Profile class then.
https://gunnarpeipman.com/automapper-domain-classes-models/amp/
AMP/Acore Model Design

Contents

- 1 Introduction
- 2 About AMF and MetaABM
- 3 Participating
- 4 Schedule
- 5 User Experiences
- 6 Design
  - 6.1 General
  - 6.2 Structure
  - 6.3 Behavior
  - 6.4 Functions
- 7 Discussion

Introduction

As part of the move to Eclipse we've been planning to transition the "metaabm" meta-model to "acore". Acore will have significant changes based on everyone's experience using MetaABM over the last couple of years. This page is for community discussion about those changes. Note that this document's scope is discussion of the core meta-model itself, not editors, generators, etc., except as the model design directly affects those things. We could open up more pages for these if there is interest.

About AMF and MetaABM

AMF currently includes MetaABM, so if you want to explore MetaABM, it's best to install AMP, not MetaABM. Please see [Here] for a guide to installing AMP. It is not necessary to have actually used the meta-model to participate, but it couldn't hurt! Please see this doc for more on the MetaABM/Acore meta-model itself.

Participating

The AMP project is actively soliciting feedback on those changes. Even better than feedback would be active participation in the design process. Things will be moving pretty rapidly, so now is the time to get involved! Design discussions will take place at the following places.

Design Feedback and Ideas

This page is an active design document. All you have to do to edit this page is get an Eclipse bugzilla account here: Please feel free to add your own thoughts, and to revise or add categories, etc. And just saying "this part didn't work really well" or "I'm not sure what you were thinking with..." is really valuable too. Think of this as brainstorming for now, so you can throw ideas at the wall and we'll see what sticks. Just a few guidelines: please follow general wiki talk etiquette, i.e. don't erase others' comments until it is a good time to condense.
We will be churning and refining the doc over time, so things will get distilled and we can archive discussions. Please end any comments with the following sequence so that your name and the time show up on the wiki: ~~~~ See [here] for more.

General Discussion and Help

That should go to the newsgroup: [1] Again, you can get access to the newsgroups by getting a bugzilla account == easy.

Implementation and technical issues

If it is really nitty-gritty:

Actual Requirements

As we refine things, we'll add them to the bugzilla. If there is a change or improvement that you really want to make sure gets into the Acore release, then you should file a bug report against the AMP AMF component for it. Really, feel free; it's always nice to get reports that come from outside of the actual project! Please also add it as a dependency to

Schedule

I'm really pushing to get this out by the end of the year. I think it's important to get on to the release train and go to 1.0 status, so I don't want to delay things more than necessary. Thoughts? --Milesparker.gmail.com 00:31, 4 November 2009 (UTC)

User Experiences

This is a place to put any thoughts that don't fit into specific design issues.

Things We Like

Things We Don't Like

- The hierarchical editor really is kind of awkward for editing behavior. Should we try to make behavior definition more hierarchical? --Milesparker.gmail.com 21:38, 4 November 2009 (UTC)

Design

General

There are a lot of dependencies, some of which (such as in gen code) aren't really obvious, so we should balance changes against that. We also need to be sure that there is a clear mapping from MetaABM to Acore constructs. There are a number of design features that are in place largely as a result of the original Repast Simphony integration that we should pull out. The overall design requirements are below. Let's try to capture anything in the current model that doesn't meet them.

- Ability to construct any (canonical, imagined..?)
ABM model with an Acore model and no other artifacts.
- Extendible - Modifications to one part of the model have low coupling to other components. (For example, we want to be able to change the type of a space without having to change everything that refers to that space.)
- Transparent - ...

Naming

- Change from the current class name pattern of S for structure, A for Actions, F for Functions and I for pure abstract to an "A" for everything, to match the Ecore design.
- The package namespace would obviously change, from org.metaabm.** to org.eclipse.amf.acore.**

Structure

General

Some specific proposed changes:

- "SContext" -> ~"AEnsemble"
- "SProjection" -> "ASpace"
- "ASpace" and "AAgent" both inherit from "AScape" (not "Ascape" :))
- AAttribute would have a new "derived" flag that would specify the value as mutable but not modifiable through behavior.
- Probably some cleanup of the IID and SNamed stuff, as well as taking a look at SImplementation -- that seems a bit unnecessarily complicated.
- Perhaps provide a more privileged way of specifying non-model (library) resources.

Inheritance and Composition

The most radical changes are actually to the basic composition structure. We need to figure out a general way to handle both of these in a way that supports ad hoc model composition but retains much of the elegance and simplicity of past models. I think that there are some flaws in the Simphony Context model that I discussed in my NAACSOS talk. I'm going to try to get that online. I'm proposing that we need a model that provides an ensemble structure that allows much more control over "scale", "scope" and "space". What we want to support here is the seamless meshing, composition and filtering of models from any domain, scale, methodology or granularity. A somewhat tall order! --Milesparker.gmail.com 18:33, 4 November 2009 (UTC)

There isn't any support for inheritance at all currently, as the naive approach of modeling the Java pattern doesn't fit with what we need to do.
Hopefully the scoping mechanism will resolve those issues... being a bit vague here... but the idea is that we can move beyond raw inheritance to an ad hoc assembly of agent "facets", through explicitly defining Agent Facets (like an abstract base class or a strategy pattern) that can be assembled into runtime agents through a scoping mechanism. Question: Is there a difference in kind between such a facet and simply supporting (multi!) inheritance? I think there may be, for example for the following use case:

Scope, Scale, Type/Kind and Space

What we need is a good way to define arbitrary semantic structures with an easily grokked one-line syntax. Here is a use case (note how similar it is to the one for inheritance and composition, as they are both different aspects of the same issue). So here is what we know about Tim's model:

- He's at Pennsyltucky (provenance)
- He's studying marine birds (type and scale)
- His model is in Antarctica (scale and scope)
- His model has social interaction (scope)
- His model has demographics (i.e. eat, reproduce, die..)

And Sally's:

- She's at BTI
- Her model is about economic actors
- Her model is really notional and could apply to many different scales, but maybe it makes sense to think of human scale?
- Her model is about utility optimization (methodology)

Now, at what granularity and using what mechanism do we provide scoping information across scales and types? I'm sure that there is relevant work out there. Please share it along with any ideas for the most flexible approach to this.

Space

Graphs / Networks

Should these be called graphs, networks, or relations?

Allow Edge Data / Agents

This started out just thinking about providing weighted graph support. But it would be a lot more general and in some ways easier to implement to simply allow agents to be assigned to graph edges as well as nodes. Is this too confusing / weird an approach?

Construction

We had space construction types defined as attributes of graphs.
These should really be moved out of there and into a build graph action specialization.

Behavior

New Features

Because of the timeline, we should try to keep the scope of new features down, remembering that it is much easier to add things than to change existing features. But this would still be a good time to think through these issues, especially as we may want to drive some things into the main design.

Derived Rule

Add a "derived" rule, which allows one to create attributes that are completely derived from rules. IFF an attribute is derived can it be modified, in an ADerived root action, and it cannot be changed anywhere else. This would clean up a lot of code and make it much more modular, and it would also allow users to easily create statistics and other measures.

Support for Systems Dynamics

There has been a lot of discussion about this. I wonder if we're ready to implement some specific proposals, for example explicit support for sources and sinks.

Better Support for Discrete Events / Dynamic Scheduling

It's not clear that the existing scheduling mechanism gives us the flexibility we want. Ed MacKerrow has mentioned the need for some kind of explicit schedule creation mechanism, so we'll use his example.

State Machine

Do we want more explicit support for state machines? (A la AnyLogic?) Or is this really more of an editor issue? IOTW, are there things that we can't do easily now?

Support for Equation Models (ODE/PDE)

With the current model it should be possible to infer potential equation models from model contents. Is there any additional meta-data that would be helpful in that regard?

Support Loop Structures / Recursion

In declarative languages, loops are commonly achieved by admitting (some form of) recursion, but if I understood, recursion is not possible with metaABM rules. --Francesco Parisi

Please note that the issue here is not about working with collections, i.e. the ubiquitous but problematic and unnecessary "for each" construct.
Actions against collections are handled transparently by the query framework. For the rest of the discussion, see this web forum thread.

Yes, we should definitely try to figure out how to achieve the result without distorting the overall Acore Action design approach.

Changes to Select, Query and Logic

Currently ASelect, AQuery and ALogic are complex and the semantics are somewhat poorly defined. In practice it has been difficult for users to understand and difficult to generate code for. On the other hand, there are subtle and important things that the design allows you to do that we'd like to keep. Here are some random thoughts. For example, consider the following:

  S1
  / \
 Q1 Q2
  \ /
   &
   Q3

In the above case, we do a select against space and agent in S1 using Q1 and Q2 as terms. Then we check if the results of that selection meet Q3. A potentially confusing thing is that for the following case:

  S1
  / \ \
 Q1 Q2 Q3
  \ /  /
   &  /
   \ /
    | (AAll)
    Q4

the select is ((Q1 & Q2) | Q3), and then Q4 is applied to the result of that selection. Basically, AAll and AAny serve as search terminals. This gets even more confusing when we add in ANone (negation). We perhaps need to make this clearer. Here are some potential approaches: I'm thinking the best idea is to morph AAll into an AEnd node which indicates a set of conditions and just get rid of AAny, but I'm not sure. It might be really nice and clean to just bundle them up into a set of Selects and Queries with multiple "and" entries. We need to make sure that any solution allows full and simple equivalence to the general-purpose-language else construct. I think we can do that with negation, but we need to be sure that will work. These are critical issues, so we really need feedback.
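The select semantics described above are easier to reason about when written as plain boolean evaluation. A rough Python sketch (the predicate names and agent fields are hypothetical; this only illustrates the proposed two-phase semantics, it is not AMP code):

```python
# Each Q is a predicate over a candidate agent. For the second tree above, the
# "search" phase selects agents matching ((Q1 and Q2) or Q3), terminated by the
# AAll/AEnd node, and the trailing Q4 is then applied to the selected results.

def q1(agent): return agent['size'] > 2          # hypothetical predicates
def q2(agent): return agent['color'] == 'red'
def q3(agent): return agent['flagged']
def q4(agent): return agent['age'] < 10

def select(agents):
    # phase 1: the (Q1 & Q2) | Q3 selection
    chosen = [a for a in agents if (q1(a) and q2(a)) or q3(a)]
    # phase 2: the query applied to the selection's results
    return [a for a in chosen if q4(a)]

agents = [
    {'size': 3, 'color': 'red', 'flagged': False, 'age': 5},
    {'size': 1, 'color': 'red', 'flagged': False, 'age': 5},
    {'size': 1, 'color': 'blue', 'flagged': True, 'age': 20},
]
print(len(select(agents)))  # only the first agent passes both phases: 1
```

Written this way, the confusion the text describes is visible: whether a node acts as a selection term or as a post-selection filter depends entirely on where the AAll/AAny terminal sits in the tree.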
Functions

User Tools for Defining Functions

Right now, we're manually providing the core libraries. Though it's really pretty easy to define these, it would be nice to have some more automated hooks that allowed users to suck up a Java class file with a bunch of static methods in it and use them as methods. We actually have the code; it just needs to be integrated into the UI. We should also look at some use cases of this whole process along with examples to see if there is a way to make it more visible.

Discussion

Start a section here for anything you want to discuss that doesn't fit in above...
http://wiki.eclipse.org/AMP/Acore_Model_Design
Globalizing your Windows Phone 7 application will allow you to reach a much wider audience. With the approach Microsoft is taking in their marketplace, this could mean the difference between your app making $500 or potentially > $1000 a month. Globalizing your application is actually fairly simple, but it is much easier if you do it from the start. Globalization is much more than just translating the strings in your application; you should also take a look at Localization Best Practices for Windows Phone for additional pointers. This quick tip will just show you how to set up your project so you can start using string resources rather than just putting the text in for the TextBlocks in your application. Do this when you start your application, not at the tail end. Hear me now, believe later :)

Our goal is to eliminate this:

<TextBlock Text="HelloWorld" />

And pull that string from here:

Here are the simple steps:

- Add a resource file to your project. I usually drag it under the Properties folder; it just seems at home there.
- Open the resource file and add a name/value pair: the name goes in the first column and will be used to refer to your resource in XAML, and the value is the default language's string. Then change the Access Modifier to Public:

Next create a class that will create a static instance of the resources within your application. The static class should be similar to:

using System;
using ResourceSample.Properties;

namespace ResourceSample
{
    public class StringResources
    {
        private static AppResources _resources;

        public static AppResources LocalizedResources
        {
            get
            {
                if (_resources == null)
                    _resources = new AppResources();
                return _resources;
            }
        }
    }
}

Then we need to create a static resource for this class in the App.xaml file. Finally, instead of this:

<TextBlock Text="HelloWorld" />

You can do this:

Once you have all the text for your application in a resource file, you can package up and deploy additional languages as described here.
Bonus Tip: Click here for how to localize the application title for your Windows Phone 7 application.

-twb

I think there is a mistake in the XAML code example. The binding path should be Path=LocalizedResources.HelloWorld not Path=Strings.HelloWorld
http://www.thewolfbytes.com/2010/10/windows-phone-7-quick-tip-15-five.html
Details

- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 3.0.0-alpha2, YARN-5355
- Component/s: timelinereader
- Labels: None
- Hadoop Flags: Reviewed

Description

Issue Links

Activity

Number 4 I think should be avoided. This would lead to a full table scan, and EntityTable can grow quite large. Sorry, not really a full table scan, but a large scan. Rohith Sharma K S, do you have any use case for number 4?

Looking into the code base for the reader for supporting these REST end points, I see that the ApplicationTable schema is defined with row key clusterId!userName!flowName!flowRunId!AppId.

- /apps without userId and flowName should be able to read. Currently it is mandatory to provide userId and flowName to read apps.
- /apps/{app-id}/app-attempts can be redirected to the existing getEntity for YARN-APPLICATION-ENTITY.
- /apps/{app-id}/app-attempt/{app-attempt-id}/containers. As per the design, YARN_APPLICATION_ATTEMPT and YARN_CONTAINERS are stored as general entities. Reading will be the same as for other user entities.

I think there should be another HBase table that stores YARN-specific entities. Otherwise it is a big overhead for clients to parse the output to know about an entity's parent. The other option is to publish the entities as strings rather than objects. The below one is an info value held as an object:

"SYSTEM_INFO_PARENT_ENTITY": {
    "type": "YARN_APPLICATION",
    "id": "application_1471931266232_0024"
}

This can be split into the below, which is very useful for filtering:

"SYSTEM_INFO_PARENT_ENTITY_TYPE": "YARN_APPLICATION"
"SYSTEM_INFO_PARENT_ENTITY_ID": "application_1471931266232_0024"
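The scan-cost concern discussed in this thread follows directly from the row-key layout above. A rough plain-Python sketch (not the actual HBase reader code; the keys and counts are invented) of why a missing userId/flowName prevents a cheap prefix scan:

```python
# ApplicationTable row keys look like cluster!user!flow!run!app. When the
# caller supplies user and flow, the reader can scan only the rows sharing
# that prefix; with just the cluster, every row of that cluster qualifies.

rows = sorted([
    'c1!alice!flowA!1!app_1',
    'c1!alice!flowA!2!app_2',
    'c1!bob!flowB!1!app_3',
    'c2!alice!flowA!1!app_4',
])

def prefix_scan(rows, prefix):
    """Simulate a scan restricted to keys starting with the given prefix."""
    return [r for r in rows if r.startswith(prefix)]

# user + flow known: a narrow scan
print(len(prefix_scan(rows, 'c1!alice!flowA!')))  # 2 rows touched
# only the cluster known: effectively a scan of the whole cluster's rows
print(len(prefix_scan(rows, 'c1!')))              # 3 rows touched
```

With real tables the second case degrades toward a full table scan, which is the argument made against a bare "all apps in a cluster" endpoint.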
Because the API guarantees returning entities in a descending order by created time. This would mean that we will go through each row to determine the result set. It is about taking one of the choices. Do we want to drill down to apps via flows or list all the apps in a cluster. We can discuss further in the team on this, depending on the use case. And see if we can have some workaround. If merely list of apps within a cluster is required(without any other information) we can probably serve this from app to flow table. It is required to get all the apps in cluster. REST client will ask for limit and fromId.? If merely list of apps within a cluster is required(without any other information) we can probably serve this from app to flow table. Not only app id's but also other information required. And currently the way I see is need to get all the flows first and for each flows, retrieve apps. This is something where client need to lot of REST calls constructing for each flows. Attached a patch for supporting legacy API i.e retrieving containers for an attempt-id i.e /apps/{appid}/appattempts/{appattemptid}/containers. Basically, REST calls to getContainers method construct an infofilters internally and retrieve the entities. One potential problem of returning all apps in a cluster/containers in an app attempt is that they may easily cause severe server slowness if not used properly. In the past this became a great pain when Apache Ambari called the default endpoint of AHS, when the total number of applications is extremely large. I agree it is quite desirable to have the ability to "list all", but at the same time we may want to consider restrict the number of returned entities unless specified by the caller, or support paging, or both?? Currently not supported. What's the use case for this by the way ? This can be supported easily though, within the scope of a flow. 
Ah I just checked the v0 patch, the current approach to keep default limit to be 100 unless specified seems fine. Is there a way to specify "unlimit" (although not encouraged)? Thanks! Is there a way to specify "unlimit" Not as such. Max value of long can be specified. Will return everything Thanks! That's fine. Once we keep this behavior consistent we're good. Currently not supported. What's the use case for this by the way ? This is required for pagination. And more importantly fromId is must for support for pagination. And also limit & fromId should be there in all the REST end points. Attached the patch for /containers and /appattempts. These are similar to retrieving an general entity. So full table scan do not happen. Tested the APIs in real cluster and verified. Thanks Rohith Sharma K S! I took a deeper look into the patch. The overall approach LGTM. One concern I have is, shall we expose the semantics of "containers" and "appattempts" in timeline reader API level? To me, the timeline API is independent from the actual use cases: it merely provides concrete ways to retrieve timeline entities. Similar to the past AHS APIs, maybe we'd like to abstract a separate layer of APIs for YARN to read data from timeline service? Of course we can integrate the new "AHS REST API" layer in the timeline reader server, but code-wise shall we separate the new APIs from TimelineReaderWebService? BTW, I set this JIRA to be patch available so that Jenkins can find the patch and test it. If you want something otherwise please feel free to change it back. Thanks! ? Similar to the past AHS APIs, maybe we'd like to abstract a separate layer of APIs for YARN to read data from timeline service? Sorry I could not get it. OK let me clarify: IMO the reader API of YARN timeline service should focus on serving timeline entities according to caller's request, but not on how to serve YARN specific use cases. 
To the storage layer of timeline service, requesting "container info" should be similar to requesting distributed shell application information or Tez job information. I noticed that in this patch, we're passing some predefined constants, like: String entityType = TimelineEntityType.YARN_CONTAINER.toString(); This will query for a specific type of timeline entities. We may want to provide a different endpoint (like /ws/v2/applicationhistory) to support this YARN specific use case. In v1, we have AHSWebServices to support YARN specific application history information. Maybe we would like to keep the same way? This is my own (and subjective) idea. Feel free to let me know if you noticed some critical things I'm missing... Thanks! Separating out YARN specific details is good idea similar to v1. Here is my vote will be 50-50 for this approach. In v1, entities were fully separated out from yarn specific details. But in v2, Apart from the entities, Query Apps for a Flow and Query Apps for a Flow Run and other details are in TimelineReaderWebService. These are belongs to YARN specific details nevertheless of any underlying storage schema. All the entities are published under application scope which makes decision harder for devs to adding a new REST YARN specific end points. From the user perspective, I want to share you that say for retrieving all the apps with flow/flowrun uses path /ws/v2/timeline, but for retrieving attempts uses path /ws/v2/applicationhistory would lead to big question for users why there are 2 different Path for same application details!!!. May be we can takes other folks thoughts too on this. I've got offline discussions with several folks on if we should have concepts like "app-attempt" and "container". 
From a designer's perspective, app-attempts and containers should not be included in timeline APIs, but from YARN users perspective, requesting app-attempt and container level information seems to be very natural operations, especially since both concepts are top level concepts in YARN. So I'm relatively fine with having terms like "containers" and "app-attempts" exposed in timeline APIs, but we may want to be very careful to not to give an impression that attempts and containers are on the hierarchical order as flows and flowruns. So how about having two different hierarchical orders: Order 1, native timeline order: cluster, user, flow, flow-run, application, entity Order 2, YARN application order: application, app-attempt, container Once we're not mixing the two orders in APIs, the logic should be clear. Thoughts? OK I revisited the patch and our reader REST APIs. Right now we have 26 APIs for reader server. I tried to organize them in a way easier to understand: 0. cluster activity (2) - /flows/ - /clusters/{clusterid}/flows/ 1. 
(cluster - )user - flow - run - app - entity_type - entity_id sequence 1.1 hierarchical (12) - /clusters/{clusterid}/users/{userid}/flows/{flowname}/runs/{flowrunid}/apps/ - /clusters/{clusterid}/users/{userid}/flows/{flowname}/runs/{flowrunid}/ - /clusters/{clusterid}/users/{userid}/flows/{flowname}/runs/ - /users/{userid}/flows/{flowname}/runs/{flowrunid}/apps/ - /users/{userid}/flows/{flowname}/runs/{flowrunid}/ - /users/{userid}/flows/{flowname}/runs/ (user, flow, run information omitted since an application is unique in a cluster) - /clusters/{clusterid}/apps/{appid}/entities/{entitytype}/{entityid}/ - /clusters/{clusterid}/apps/{appid}/entities/{entitytype} - /apps/{appid}/entities/{entitytype}/{entityid}/ - /apps/{appid}/entities/{entitytype} - /clusters/{clusterid}/users/{userid}/flows/{flowname}/apps/ (looks weird, jumping levels) - /users/{userid}/flows/{flowname}/apps/ (looks weird, jumping levels) 1.2 uid (6) - /flow-uid/{uid}/runs/ - /run-uid/{uid}/ - /run-uid/{uid}/apps - /app-uid/{uid}/ - /app-uid/{uid}/entities/{entitytype} (entity type looks weird) - /entity-uid/{uid}/}/ So the new addition looks fine to me. Do we want to reorganize the code in a way consistent with this list? Right now the code seems to be a little bit messy. We can do it in this JIRA, or we can open a new JIRA to reorganize these APIs and discuss the endpoints that marked as weird? Thanks! Sequence 2 in this list seems to be replacing the majority part of the old AHS APIs, except for container logs. We may use them as a starting point to support AHS-like use cases in timeline v2. So the new addition looks fine to me. Looks fine to me as well. Do we want to reorganize the code in a way consistent with this list? Well the current organization is based on what we are retrieving. That is, all endpoints for fetching entities are together, for fetching apps are together and so on. We can follow approach suggested by you as well. I do not have a strong opinion on either. 
So I will leave it as it is. Let's see what others think.

bq. discuss the endpoints that are marked as weird
These endpoints were added to get all apps belonging to a flow so that we can skip the flow-run section. There were use cases to fetch all apps within a flow in case the run id is not known. Refer to Vrushali C's comment on YARN-3864. We also plan to list all apps for a user or queue in future as well. And based on the use case of Rohith, maybe list all apps within a cluster as well. However, in my personal opinion that may not be necessary. You can check with the new Web UI folks.

bq. We can follow the approach suggested by you as well. I do not have a strong opinion on either. So I will leave it as it is. Let's see what others think.
Fine with me. Once we have a clear view about the current REST endpoints and they're not confusing, we're good.

bq. And based on the use case of Rohith, maybe list all apps within a cluster as well. However, in my personal opinion that may not be necessary.
Nice catch. Rohith Sharma K S, if you feel we need this endpoint, please feel free to refresh the patch. Generally LGTM. I'll wait ~24 hrs and then commit the patch.

In Li lu's REST endpoint list, there is one endpoint which is missed, i.e. listing all the apps for a given flow name: /users/{userid}/flows/{flowname}/apps/. This is also a useful API.

bq. And based on the use case of Rohith, maybe list all apps within a cluster as well.
This is one of the objectives of this JIRA when I reported it. But there is another endpoint from which apps can be retrieved: at the flow-name level, the client can get all the flows in the cluster and, for each flow, retrieve the apps using the path above. So, let me confirm with Sunil whether the new UI has such a use case to get all the apps per cluster. If so, I will refresh the patch. cc: Sunil G

Yes. We are looking for a complete applications page where all applications which were running/completed have to be listed. For this purpose, I think we need the API as suggested by Rohith.
That being said, we will also be showing the hierarchy from flows too. Once end users land on this applications page, various filters/views could be derived. Hence we could also cover or show details of flows.

Thanks Sunil G for the quick response. I will update the patch accordingly. Along with the APIs below, I think 2 more APIs are required. Thoughts?

New APIs required. Thoughts?
- /clusters/{clusterid}/apps/{appid}/appattempts/{appattemptid}
- /clusters/{clusterid}/apps/{appid}/appattempts/{appattemptid}/containers/{container-id}

bq. We are looking for a complete applications page where all applications which were running/completed have to be listed. For this purpose, I think we need the API as suggested by Rohith. That being said, we will also be showing the hierarchy from flows too.
So you plan to have 2 app pages: one from a specific flow run and the other a list of all the apps in a cluster. Right? Rohith Sharma K S, how do you plan to support fetching all apps within a cluster? Probably you can adopt the approach I had suggested, because otherwise it would lead to a full table scan.

bq. New APIs required. Thoughts?
We should have them for the sake of completeness.

Thanks Rohith Sharma K S, the two proposed APIs make the app - attempt - container APIs more comprehensive. LGTM.

bq. We are looking for a complete applications page where all applications which were running/completed have to be listed.
This is something I'm not sure if we'd like to support on the timeline level. This information is pretty much available on the RM side via the state store. Why do we want to encourage an expensive operation on the timeline store for this data?

Updated the patch adding 2 more REST endpoints. One small difference is there in the REST endpoints for containers: instead of /clusters/{clusterid}/apps/{appid}/appattempts/{appattemptid}/containers/{container-id}, I have changed it to /clusters/{clusterid}/apps/{appid}/containers/{container-id}.
As of now /apps can be dropped, since apps can be retrieved using flows. Based on the use case from the UI, let us decide on /apps at the cluster level.

Rohith, I think you're saying you are OK without having that API since we have a query that returns all apps for a flow.

Right, I am fine with all apps for a flow.

Seems like we're reaching much agreement on the general proposal of this JIRA. Rohith Sharma K S, would you please update the patch so that we can move forward with the rest of the process? Thanks!

Thanks Rohith Sharma K S for the patch.
- Javadoc will have to be updated I think, otherwise we will get a -1 in the QA report. Yes, this will unnecessarily increase code size, but then there is no other way out.
- Update tests for app attempts too?

That said, these REST endpoints are an enhancement of getEntities and getEntity. So I do not want to change any of the query params associated with app-attempts and containers. In future, we never know what metrics/configs/relationships would be stored in attempts/containers. Let's keep it as it is; if the user queries configs/metrics then he gets an empty result. It indicates there are no metrics or configs published.

bq. Javadoc will have to be updated I think, otherwise we will get a -1 in the QA report. Yes, this will unnecessarily increase code size, but then there is no other way out.
This is one point of concern for code readability. Maybe we can capture it in documentation, but not in the Java class file. Thoughts?

bq. Update tests for app attempts too?
Let me update the tests and will upload a new patch.

bq. That said, these REST endpoints are an enhancement of getEntities and getEntity. So I do not want to change any of the query params associated with app-attempts and containers. In future, we never know what metrics/configs/relationships would be stored in attempts/containers.
Well, the intention behind my comment was that we will document these REST endpoints separately. So for the user it is a distinct endpoint.
The fact that we in turn redirect this call to the entity table is internal implementation. And if we indicate that you can apply config filters etc., it may suggest that we store them for containers and app attempts, which we don't. And if there is something which we don't publish from the RM/NM, I thought it's pointless to have a filter for it. Your thoughts on the same? We currently do not restrict clients from publishing even YARN-specific entities, so they can potentially publish configs for them, but I remember we had a JIRA related to that, i.e. do not allow arbitrary clients to publish YARN entities. I do not have a very strong opinion on this though. Let us see what others think.

bq. This is one point of concern for code readability. Maybe we can capture it in documentation, but not in the Java class file. Thoughts?
We had initially decided we will add both javadoc and documentation. We however want to trim down the documentation a bit because some query params are repeated. Let us keep it consistent no matter what we do. We can probably add non-javadoc comments over the REST methods so that developers can read them. As such, REST endpoints are already captured in documentation for outside users of ATS. I remember, Rohith Sharma K S, you had a JIRA for it. Maybe we can handle it there if the team reaches a consensus on this. Sangjin Lee, your opinion on this?

bq. That said, these REST endpoints are an enhancement of getEntities and getEntity. So I do not want to change any of the query params associated with app-attempts and containers. In future, we never know what metrics/configs/relationships would be stored in attempts/containers.
Regarding whether to retain things that are not needed for app attempts and so on, I don't have a strong opinion either way, but I might lean slightly towards leaving them in. Even with the existing queries for them today, we don't strongly reject them even if they are specified for app attempts but simply return empty data, right?
Also, leaving them in might be slightly more future-proof. Again, these are not strong preferences.

In terms of documenting, let's just be consistent and add the javadoc. Being consistent is probably a good thing here. If we decide later that the javadoc is superfluous, we could take it out in bulk.

Rohith Sharma K S, does this need to go into trunk or can it stay in YARN-5355? If it is needed on trunk (which is the impression I got), then let's change it to a top-level JIRA and create a patch based on trunk. Once it is committed to trunk, we can cherry-pick it onto YARN-5355 (and its associated branch-2 branch).

I don't have a strong preference on those query parameters either, but I think adding them is totally fine.

The latest patch generally LGTM. Rohith Sharma K S, would you like to rebase it to trunk and fix the few "longer than 80 chars" warnings so that we can commit this? Thanks!

Updated patch rebased against trunk. Delta changes from the previous patch are:
- Added javadoc for all the new REST endpoints.
- Added tests for verifying app-attempts and app-attempt.
- A few checkstyle issues are fixed.

Thanks Sangjin Lee, Li Lu, Varun Saxena for reviewing the patch.

On second thought, the ultimate goal for YARN-5561 and YARN-5699 is to provide an easy way for the YARN Web UI. It would be much more helpful for the YARN Web UI, and also for users who want to query containers/app-attempts from the CLI, if app-attempts/containers are in the ApplicationAttemptReport and ContainerReport format. Basically, the overall intention is to provide a flexible REST API whose response output users can use directly, rather than requiring them to parse the ATSv2 REST output. I think it would be good to take an early decision rather than struggling at the final stages, which leads to many code changes in dependencies, especially the YARN Web UI.
A couple of approaches for providing a facility to convert TimelineEntity objects to the respective YARN reports are:
- Launch a new web service, i.e. TimelineYARNEntityReaderWebServices, along with TimelineReaderWebServices. This service would run as part of the TimelineReader daemon. This new service is just a translator converting TimelineEntity objects to the corresponding YARN reports such as ApplicationAttemptReport or ContainerReport. And this service would contain the new YARN-specific entities which are exposed as REST endpoints in this patch. Note that the return types of these REST endpoints are the corresponding reports like AppAttemptsInfo. This approach can later be modified/enhanced by pulling it out of ATSv2 into a separate daemon service, or by starting a new service in the RM itself with application history.
- For the existing TimelineReaderWebServices, change the return type of the newly added REST endpoints getAppAttempts/getAppAttempt/getContainers/getContainer to AppAttemptsInfo/AppAttemptInfo/ContainersInfo/ContainerInfo respectively. The con of adding this in the TimelineReader service is that it becomes a bit of a combination of ATS + YARN-specific details.

Thoughts? This will bring down many of the JIRAs; YARN-5561, YARN-5699 etc. can be combined and avoided altogether. cc: Vinod Kumar Vavilapalli

Something similar to option 1 in terms of having separate endpoints is what we had in AHS/ATSv1 too, i.e. in the form of AHSWebServices. That will clearly delink the APIs. What do we name it though? ws/v2/applicationhistory? We can further add things like serving container logs from this endpoint too. And this can act as a one-stop destination for fetching YARN-specific history data. Could not get the intention behind pulling it out to a separate daemon though. Can you elaborate on that?

I'm generally OK with option 1, as it's quite consistent with my original thought (see the comment on Aug. 30).
The only thing I would like to mention here is that we gave up this idea before because we thought that when the web UI requests data, it would need to contact two different endpoints. I'm not sure if this problem is alleviated with the current web UI use cases.

bq. This approach can later be modified/enhanced by pulling it out of ATSv2 into a separate daemon service, or by starting a new service in the RM itself with application history.
I'd incline not to go this far as of now. Sure, architecturally we can keep this flexibility, but could you elaborate more on the concrete use cases for the need of a separate daemon for now?

bq. Could not get the intention behind pulling it out to a separate daemon though. Can you elaborate on that?
A couple more doubts, just to be in sync: Is the translation layer within the reader service or separate? What is the format of the report object? Is this report object general or only for YARN entities?

I was thinking of something like utility classes that can create and return specific report types. For example:

ApplicationAttemptReport getApplicationAttemptReport(TimelineEntity appAttemptEntity);
ContainerReport getContainerReport(TimelineEntity containerEntity);

These classes can be (and probably would need to be) in the yarn-common module so any REST client can invoke them to get a translated object back.

Thanks for the clarifications. Though a utility class is very useful for converting reports, one major concern is the number of REST calls to be invoked by the user. Especially on the web, this becomes 2 REST calls, which decreases performance. Basically, Li Lu and I were thinking of embedding a TimelineYARNEntityReaderWebService into the timelinereader daemon with the REST path /ws/v2/applicationhistory. These REST endpoints would take care of retrieving entities from storage and converting them to the required YARN reports such as ApplicationAttemptReport or ContainerReport, etc.
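A minimal sketch of the utility-class idea above, with the entity modelled as a plain dict rather than the real TimelineEntity class. The info key names and report fields here are assumptions for illustration, not the actual YARN/ATSv2 schema.

```python
# Sketch of the proposed translator utility: project a generic timeline
# entity (modelled as a plain dict) onto an ApplicationAttemptReport-like
# shape. The info key names are illustrative assumptions, not the real
# YARN/ATSv2 schema.

def to_app_attempt_report(entity):
    info = entity.get("info", {})
    return {
        "appAttemptId": entity["id"],
        "host": info.get("YARN_APPLICATION_ATTEMPT_HOST"),
        "rpcPort": info.get("YARN_APPLICATION_ATTEMPT_RPC_PORT"),
        "trackingUrl": info.get("YARN_APPLICATION_ATTEMPT_TRACKING_URL"),
    }

if __name__ == "__main__":
    attempt_entity = {
        "id": "appattempt_1462600000000_0001_000001",
        "type": "YARN_APPLICATION_ATTEMPT",
        "info": {
            "YARN_APPLICATION_ATTEMPT_HOST": "nm-17.example.com",
            "YARN_APPLICATION_ATTEMPT_RPC_PORT": 8041,
        },
    }
    print(to_app_attempt_report(attempt_entity))
```

Anything the report shape does not carry stays reachable through the generic entity endpoint, which matches the point in this thread that the timeline entity remains the superset.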
Also, please note that what's contained in the current REST output would likely be a superset of *Report.

I agree that the current *Report would be a subset of the TimelineEntity object. So this we can solve by defining a new *Report compatible with ATSv2, for example by extending the current *Reports, since metrics are required. If any new fields are published to ATS, then it is a contract between publisher and reader that these information fields should be added to *Report. The new *Report class is only a wrapper over the TimelineEntity object for YARN entities. And at any point in time, if the user thinks he needs more information, he can query ATSv2, which is always open.

I was also thinking that we can have an endpoint like /ws/v2/applicationhistory. This can be used not only to serve app-attempt/container reports but also to serve use cases like serving aggregated logs of historical apps. We would need to serve aggregated logs from somewhere too. This will be useful for the UI as well. Regarding metrics, well, we can always extend ContainerInfo to carry metrics as well. Right?

During the weekly sync-up, we had a discussion regarding providing the new REST URL, i.e. /ws/v2/applicationhistory. One of the points discussed is: what is the advantage of having separate REST endpoints which return the *Report format? It would become another version of existing data in a different format. But the ATSv2 TimelineEntity is a superset of all of them, wherein the user might require information which is not in *Report. So, if the required information is published at the first-class entity info level, like YARN-5699 is doing, then it is more than sufficient. The consensus is:
- We publish the YARN entities' important information at the first-class entity info level so that the user can make use of this information to query using infofilters.
- We will be going ahead with adding the new REST endpoints (patch already attached, i.e. YARN-5561.03.patch), which are an enhancement of the existing getEntities.

One question though.
From where will we serve aggregated logs of historical apps? We currently do it from the AHS.

Log serving we can launch in a separate REST namespace within the TimelineReader. Maybe it could be called YarnTimelineUtilService.

Yeah, what I meant is a separate endpoint may be required for things like that. Not sure what we name it though. Or we can just put everything under the hood of ws/v2/timeline. Anyway, maybe a separate JIRA should be filed for that discussion.

Rohith Sharma K S, should we update the documentation for this? We can also do it in a separate JIRA though, because some of the query params seem to be repeated in the documentation. Probably we can consolidate these query params in a single place in that JIRA and add these endpoints as well. Or you can add the endpoints here itself. As you wish.

I would prefer to take it up as a separate JIRA as a consolidated documentation update. And also I suspect, and am pretty sure, there are a couple more changes that will come which require modifying the documentation.

That should be fine. We can keep a JIRA open till the next drop on trunk and update documentation via it, just like we did before the first drop on trunk. There can be multiple contributors to the JIRA.

Rohith Sharma K S, could you kindly update v.03 of the patch to address several checkstyle issues (the ones other than the number-of-arguments violations)? Then we can go ahead with this. Thanks!

Few nits:
- "Return a set of application-attempts entities" should be "Return a set of application-attempt entities" in javadoc.
- "Return a single application-attempt entity of the given Id" => "Return a single application-attempt entity for the given attempt Id" in javadoc.
- "Return a set of containers entities belongs to given application attempt" => "Return a set of container entities belonging to given application attempt" in javadoc.
- The getContainer method javadoc above says "Return a single application-attempt entity of the given Id", which is incorrect.

LGTM. Varun Saxena?
A small nit again in the javadoc, sorry for nitpicking: "Return a set of containers belongs to given application attempt id" should be "Return a set of container entities belonging to given application attempt id". Other than that, +1 from my side too.

Updated patch fixing the javadoc.

+1 on the latest patch. Sangjin Lee, I guess the patch is fine for you as well.

+1. Committing shortly.

Committed the patch to trunk, YARN-5355 and YARN-5355-branch-2. Thanks Rohith Sharma K S for your contribution, and others for valuable feedback!

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10638 (See) YARN-5561. [Atsv2]: Support for ability to retrieve (sjlee: rev e9c4616b5e47e9c616799abc532269572ab24e6e)
- (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestFileSystemTimelineReaderImpl.java
- (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
- (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java

+1 on adding more REST endpoints. It makes things easier to query/script for. As such, in the list above, perhaps 2 and 3 can be combined such that 3 becomes query params in 2. As discussed in today's call, this JIRA might help with YARN-4343.
https://issues.apache.org/jira/browse/YARN-5561
Gosmore/Talk2008

Contents
1. Trunk ways
2. .pak file endianness
3. Separate the program into 3 pieces
4. Running it headless, outputting routepoints?
5. Using gosmore with direct-fb (without X)
6. Possible Bug?
7. Segmentation fault while rebuild pak file
8. BBOX rebuild problem with latest SVN version
9. Floating point exception
10. Roundabout support broken
11. Translation
12. Roads invisible
13. Fatal Application Error
14. mmap problem
15. Bicycle routing oddities
16. General comments
17. Cycling over footways allowed?
18. Routing through Parking Areas?
19. Changelog, Version Number or Release Date?
20. Gosmore gives up prematurely on long routes
21. QUERY_STRING
22. Pak-File generation on Windows platform
23. Layer tag
24. Options-File
25. US Maps?
26. track recording
27. .osm file not saved if no GPS fix
28. Key bindings
29. Gosmore and gpsd
30. where to ask this
31. WinCE Gosmore installation
32. Adding City name behind road name in search list?
33. Compiler (g++) warnings

Trunk ways

Are there any reasons why trunk, primary and secondary have the same weights in elemstyles.xml? IMHO a faster route would go over a trunk way instead of a primary/secondary. This gives some weird routings in my area, where a new trunk road (110 km/h, straight) was constructed a few km parallel to the old road (tagged as secondary). Because the old road goes through villages, across roundabouts etc., it is a much slower route, so definitely not the fastest route but not the shortest. see . Also, does Gosmore take maxspeed into account while calculating a route? --Skywave 10:38, 21 August 2008 (UTC)
- Primary ways should also have a higher routing value in elemstyles.xml. Now it is the same as secondary roads --Skywave 16:58, 6 September 2008 (UTC)
- maxspeed still needs to be fixed -- Nic 11:53, 10 September 2008 (UTC)

.pak file endianness

Is it the case that the format of the .pak file depends on the endianness of the processor?
Someone mentioned something that seemed to imply this. Can this be fixed so that the .pak file is the same always? Rjmunro 20:12, 22 June 2007 (BST)
- Yes, it needs to be fixed.

Separate the program into 3 pieces

I think that Gosmore's database system could be useful outside of Gosmore's interface. It would be great to write a Mapnik backend that uses it, for example. Could Gosmore be split into 3 parts: a library for accessing the database, a sample front end, and a program to make the .pak file (both of which would use the library)? Rjmunro 20:15, 22 June 2007 (BST)

Running it headless, outputting routepoints?

I wonder if it could be adjusted so it runs headlessly. I imagine a web-based route planner, where the user defines a start and end point for a bicycle route. Passing both as arguments to Gosmore and retrieving a list of waypoints (all direction changes, or optionally crossing turns only) would make it possible to display a route with waypoints which the user can still adjust. After that the user can request it as a file for his/her GPS. Anyway, cool piece of software already. Playing with it is just fun :) --SlowRider 18:15, 10 July 2007 (BST)
- Compile with make CFLAGS='-O2 -D HEADLESS'. It implements CGI: run with the QUERY_STRING environment variable set to something like 'flat=-25.7786&flon=28.2906&tlat=-25.7681&tlon=28.2705&fast=1&v=motorcar'. Error reporting is non-existent at this stage.

Using gosmore with direct-fb (without X)

This is quite easy. Just change the line

EXTRA=`pkg-config --cflags --libs gtk+-2.0 || echo -D HEADLESS`

to the lines

EXTRA=`pkg-config --cflags --libs gtk+-directfb-2.0 || echo -D HEADLESS`
EXTRA+=`pkg-config --cflags --libs gdk-directfb-2.0`

in your Makefile. It will be very useful to adapt the window size to the screen settings of your framebuffer device. --Art1 16:29, 21 May 2008 (UTC)

Possible Bug?

Erm, following GPSd seems to have no effect for me? Could you explain how it works?
My GPSd is working correctly (tested with gpsdrive). Any hints? The program (svn version and the versions of May 2008) sometimes crashes with a double-free. I need some additional work (and time) to catch this... I would be very happy if gosmore reaches a stable state, because it is easily adaptable to embedded devices and old laptops. --Art1 16:29, 21 May 2008 (UTC)
- GPSd support has been re-enabled today. I'm not sure how useful the realtime route updates / audio instructions were, so I disabled them -- Nic 21:40, 23 May 2008 (UTC)

Segmentation fault while rebuild pak file

On FreeBSD, when I try to rebuild the pak file for an osm data file of any size, I always get this error:

264 for (pairs = 0; pairs < PAIRS && s2grp < S2GROUP (0) + S2GROUPS; )
Segmentation fault (core dumped)

What am I doing wrong? Thanks for an answer --Dido 18:13, 23 May 2008 (UTC)

I also noticed a crash on Ubuntu 8.04, both with the Ubuntu install and when compiling from the svn source. This happened with a rebuild upon ... But I could make it work by changing the Makefile, changing CFLAGS=-O2 into CFLAGS=-g -O2. I did this to debug the problem with gdb, but "-g" also fixed it. If I remember correctly, "-g" initializes undefined variables to 0, so it might be that gosmore uses undefined variables somewhere in its source file. --Stephan 13:19, 13 June 2008 (UTC)

With CFLAGS=-g -O2 I got the same error ...

Debug: 3724 Debug: 372 Debug: 3723 Debug: 3724 Debug: 372 Debug: 3723 Debug: 3724 Debug: 372

followed by a segfault --Dido 12:52, 28 July 2008 (UTC)
- The program used to go into an infinite loop after creating the window when compiled with gcc-4.x -O2, but that problem has now been fixed. The program will also cause a segfault when it cannot get the memory it wants during a rebuild, so you must have a good 230 MB free. (These 'Debug:' messages do not look familiar??)
-- Nic 07:13, 2 August 2008 (UTC)

BBOX rebuild problem with latest SVN version

I tried the following command, as described on the main page, to build a .pak file for the Americas on a 32-bit machine:

bzcat ../../planet-latest.osm.bz2 | nice ./gosmore rebuild -83 -30 83 179

But unfortunately Gosmore instantly fails with: Unable to open master.pak for bbox rebuild. Gosmore starts building the .pak file immediately when I remove the BBOX parameters though. --Lambertus 21:31, 27 June 2008 (UTC)
- Did you first build the whole planet? (3 hours) Did you then remember to mv gosmore.pak master.pak? I would love to burn this file and the executables (1.3 GB) to a directory on a DVD. I hope it runs responsively on the majority of PCs (Windows, Linux). In other words, a 'live' CD that can be handed out at conferences. -- Nic 11:15, 28 June 2008 (UTC)
- I have changed the way the pak file is generated: first split the planet file into two areas (Eurasia and the Americas) using Osmosis, then feed each area to a separate instance of Gosmore. The Eurasia area results in a pak file successfully, while the Americas fail due to too much data (32-bit limit). Splitting the Americas into North and South won't help because the main body is the USA TIGER data, which I don't like to split because that would break routing. So for 32-bit systems it seems impossible to do worldwide routing. Maybe an optional compiler switch to use a 64-bit datatype at the expense of a larger datafile size? --Lambertus 09:35, 7 August 2008 (UTC)

Floating point exception

I've updated the web-based routing service today with a new version of Gosmore (svn up) and I must say that it returns many more requests successfully than before (it used to go into a loop often). Great stuff! Unfortunately I did run into a problem with one request, though, where Gosmore exits with a 'floating point exception'. Can you have a look at this please? Please tell me what you want me to do to help...
The command line is: QUERY_STRING='flat=51.7967&flon=4.6715&tlat=52.37736&tlon=4.8838&fast=1&v=motorcar' nice ./gosmore using this i686 pak file. --Lambertus 15:51, 6 August 2008 (UTC)

Roundabout support broken

It appears that roundabout support is broken in svn version 2008-08-06. It will not route over roundabouts and makes giant detours to avoid them. --Lambertus 08:17, 7 August 2008 (UTC)
- This problem was probably caused by using an old elemstyles.xml file with a new Gosmore version. Using a clean build solved it. --Lambertus 19:52, 8 August 2008 (UTC)

Translation

If you need a "Fastest Route" translation, why don't you need a "Shortest Route" and "Pedestrian Route" translation? --Lulu-Ann 16:14, 14 August 2008 (UTC)
- If FastestRoute is off, the shortest route will be calculated. To get pedestrian mode, set Vehicle to 'foot'. Vehicle names are always in English. The user interface isn't very clean, but at least it works. -- Nic 20:46, 22 August 2008 (UTC)

Roads invisible

I downloaded the 2008-08-13 gosm_arm.exe. When I run it, it doesn't show roads or road names, but POIs appear OK. If I just replace the exe with an older version, roads appear again. --Zorko 13:40, 15 August 2008 (UTC)
- Executables and data files should be created / downloaded at the same time. The project is evolving rapidly a.t.m. -- Nic 20:48, 22 August 2008 (UTC)

Fatal Application Error

gosm_arm.exe crashes when using the pak file for Argentina, both from 2008-08-23, downloaded from your site --Zorko 13:56, 25 August 2008 (UTC)
- Those are running fine on my GPS right now. Perhaps it will work if you delete the gosmore.opt file and try again. -- Nic 17:58, 27 August 2008 (UTC)

mmap problem

If I start gosmore on my T-Mobile MDA/HTC Himalaya (PXA264/400 MHz, 128 MB RAM) it says: "mmap problem. Pak file too big?" but the pak file is only 200k. Is it a bug, a misconfiguration, or is my device not supported?
Thanks for help, Yvesf 19:58, 8 September 2008 (UTC)
- Using the official germany.zip gosmore.pak file, gosmore starts and then exits without anything.
- Same here, on a normal Windows CE PDA with the Germany pak file. I tried a smaller city pak file and I could start it. --Xsteadfastx 11:21, 10 September 2008 (UTC)

It's possible that some vendors removed support for the CreateFileMapping call and gosmore will not run on those. After many tests I found that 400 MB is the limit for most devices. So Germany is now split into 2 overlapping regions. -- Nic 11:53, 10 September 2008 (UTC)

Bicycle routing oddities

I noticed that routing for bicycle will happily route up the wrong side of a dual carriageway! I suggest that oneway should be observed for bicycles! Oneway streets that can be cycled up the other way should be tagged as such. Daveemtb 12:28, 9 September 2008 (UTC)
- Which tag? I can't find it -- Nic 11:53, 10 September 2008 (UTC)

It seems that oneway=yes is ignored for bicycle routes. Gosmore even routes through a roundabout in the wrong direction if routing for bikes! I agree that bike routing should obey oneway restrictions, or at least impose a severe penalty on violating them (you must dismount and are effectively a pedestrian). FedericoCozzi 12:58, 11 January 2009 (UTC)

General comments

The functionality seems impressive, but IMHO the user interface needs a lot of improvement; it's exceedingly cumbersome at the moment on Windows Mobile. I would be happy to make interface suggestions if it helps. One other area that could perhaps be improved is a neater way to record road names. One other big question: what license is this software made available under? I didn't see that on the wiki page. Anyway, I'm very grateful for a WM app that can display OSM data and use the GPS on my TyTN II though! Daveemtb 13:14, 9 September 2008 (UTC)
- It's PD. I'm glad you find it useful and I'm pleased how fast development has gone the last few months.
But I'm thinking of taking a break until 2009. -- Nic 11:53, 10 September 2008 (UTC)

Cycling over footways allowed?

Gosmore routes over footways when a cycle route is requested. Is this wanted behavior, and should bicycle=no be added to footways where cyclists are not allowed? Personally I would prefer not to route cyclists over footways unless explicitly allowed with bicycle=yes. Example using Gosmore from Sept. 3rd. --Lambertus 16:58, 11 September 2008 (UTC)
- For the shortest route, which your link requests, a cyclist can dismount and walk along the footway but has to avoid any steps. If bicycle=yes is present, real cycling is possible. Alv 05:37, 12 September 2008 (UTC)

Routing through Parking Areas?

At present Gosmore does not calculate routes through parking areas (or other closed ways such as parks). Would it be possible to allow calculation of routes when ways are attached to the perimeter of a parking (or similar) area? As the parking area may have an irregular shape, I guess the best way to represent this on a map would be to highlight the perimeter and ignore the additional distance that this part of the route would introduce. Example of a 'faulty' route --Mungewell 18:06, 15 September 2008 (UTC)
- One thought I had whilst out mapping yesterday is that an area could be marked as routeable by adding the foot=yes or bike=yes tags; Gosmore could pick up on these and route across those areas only. --Mungewell 17:44, 25 September 2008 (UTC)

Changelog, Version Number or Release Date?

I can't find information about the progress in development of Gosmore. Is there an easy-to-read changelog? And where can I see what version I am using on my machine? --Stephan75 18:31, 7 October 2008 (UTC)
- There is a changelog on the web at [1], but the precompiled Windows / WinCE binaries are currently a few weeks behind pending proper testing. The Debian version is a lot more outdated.
-- Nic 22:18, 9 October 2008 (UTC)

Gosmore gives up prematurely on long routes
Long routes are often cut off by Gosmore (JUMP), while breaking the route up into smaller portions shows that the underlying data is good enough. In effect it looks like Gosmore just gives up prematurely. Examples: 1, 2 --Lambertus 08:47, 17 October 2008 (UTC)
- This patch to line 437 of libgosm.cpp should alleviate the problem. The drawback is worse performance for shorter routes.

- dhashSize = Sqr ((tlon - flon) >> 16) + Sqr ((tlat - flat) >> 16) + 20;
+ dhashSize = Sqr ((tlon - flon) >> 15) + Sqr ((tlat - flat) >> 15) + 20;

- This patch causes Gosmore to find the routes okay, but at a huge CPU time impact. So much so that the server on occasion is drowned in Gosmore processes all fighting for precious I/O bandwidth (maybe caused by not being able to load the entire database in memory due to lack of available RAM). I have gone back to the latest SVN version, accepting any problems with very long routes that may occur. --Lambertus 16:42, 23 November 2008 (UTC)
- Doesn't work for me: gosmore result: 3, other routing engine: 4 - Same coordinates --M0nty 00:47, 22 November 2008 (UTC)

QUERY_STRING
How can I get the distance/length of a route via the QUERY_STRING variable? I get only the route. (Linux). --M0nty 21:47, 10 November 2008 (UTC)
- You will have to calculate the route length using the node coordinates yourself. --Lambertus 09:07, 11 November 2008 (UTC)
- And the drive time?
--M0nty 21:54, 14 November 2008 (UTC)
- Is it possible to change libgosm.cpp so it gives me the speed value back? --M0nty 20:04, 17 November 2008 (UTC)
- I made a sample script in bash & python to do it:

#!/bin/bash
QUERY_STRING="flat=$1&flon=$2&tlat=$3&tlon=$4&fast=0&v=$5"
export QUERY_STRING
export LC_NUMERIC=en_US
gosmore | awk -F "," '{printf("%s %s\n",$1,$2)}' > tmp.dat
sed -i -e 1,2d -e "s/^M//" -e '/^ *$/d' tmp.dat
./measure.py

/---------------------- measure.py --------------------------/
#!/usr/bin/env python
import sys
import math
import csv

def measure(coord1, coord2):
    lat1 = float(coord1[0])
    lon1 = float(coord1[1])
    lat2 = float(coord2[0])
    lon2 = float(coord2[1])
    R = 6378.137
    dLat = (lat2 - lat1) * math.pi / 180
    dLon = (lon2 - lon1) * math.pi / 180
    a = math.sin(dLat/2) * math.sin(dLat/2) + math.cos(lat1 * math.pi / 180) * math.cos(lat2 * math.pi / 180) * math.sin(dLon/2) * math.sin(dLon/2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1-a))
    d = R * c
    return d * 1000

total = 0
with open("tmp.dat") as f:
    c = csv.reader(f, delimiter=' ', skipinitialspace=True)
    coord1 = c.next()
    for coord2 in c:
        total += measure(coord1, coord2)
        coord1 = coord2
print total
/---------------------- measure.py --------------------------/

Pak-File generation on Windows platform
The recommended command "gosmore -rebuild" does not work under Windows even though the program itself tells you to do so. We have checked the source code: the required function is commented out. A solution for this issue is to run it on the Linux platform just for this purpose.
- Note: if you are generating a pak file on Linux which you wish to transfer to a Windows machine, you will currently need to run revision 10694 in order to correctly display road names. Dmgroom 21:07, 4 January 2009 (UTC)

Layer tag
I think there is a bug in gosmore when handling "landuse" objects (and possibly other area type objects) which have a "layer" tag set.
Roads and streets in residential areas which are "landuse=residential" but without "layer=x" are shown fine in gosmore. Residential areas with e.g. "layer=-5" seem to be rendered on top of the streets, so that the streets cannot be seen. --R kleineisel 08:06, 20 January 2009 (UTC)

Options-File
The options file is saved at \My Documents\gosmore.opt. The logged GPX files are also saved there. But there's a big problem with this folder: it is deleted at each restart on some navi units, including mine. Is it possible to put this ini file in the same folder as the .exe file? btw: great software.
- Yes, just move it to the same directory as the exe, then the software will no longer use "My Documents" -- Nic 17:54, 27 January 2009 (UTC)
- Thanks, it works. :-) But sometimes the logged gpx files are 0-byte files, or there isn't a gpx file at all... But "GPS-Tacho" has the same problem. - Dennis
- btw: Anybody have an idea what to do against the loss of data? - Dennis

US Maps?
Has anyone prepared maps for US states? It would be great to have a link if someone has done so. --Liber 02:29, 26 January 2009 (UTC)
For what purpose? Running that map on a PDA won't work, as it needs a 64-bit machine to handle the database size. --Lambertus 09:10, 26 January 2009 (UTC)

Track recording
Besides navigation, I use gosmore mainly to make tracklogs, as it is the only software that works great on my device, an "ndrive 280" (PNA on Win CE). It would be perfect if we could change the time between points, so it works for car, bike or walking. Thanks, nice work. --Sergionaranja 11:23, 1 February 2009 (UTC)

.osm file not saved if no GPS fix
The Win CE version doesn't save an .osm file if no GPS fix was found (don't know if it's a bug or a feature...). --JohannesF 08:20, 16 March 2009 (UTC)

Key bindings
Gosmore doesn't seem to have many key bindings for keyboard control, which is a pain with a track stick mouse. I've written a quick patch to add some basic key controls.
Can I send it to you guys for consideration? It may need some fixing as I'm new to GTK and have no idea about the Win CE side of things.

Gosmore and gpsd
I tried Gosmore with gpsd (with a USB GPS receiver) on my eeepc running Ubuntu. Gosmore seems to work with inexact lat/lon coords; for example, Gosmore sets the coords to "?lat=48,20000&lon=16,31667", but gpsd sends the following: latitude="48.211736" longitude="16.327152". Other applications (viking, tangoGPS, ...) get the right data and work with the right data. Is there a bug in a math operation in Gosmore? I hope there is help, because I want to use it... A second question I have: is it possible to put a marker (a point, a cross, ...) at the current position on the map? Thanks! --Bananenfisch 18:49, 15 April 2009 (UTC)
- In order for Gosmore not to link against libgpsd, it interprets raw NMEA data from gpsd. I tested it on a few receivers, but it's possible that your setup generates slightly different NMEA data. Please execute 'netcat localhost 2947' or 'telnet localhost 2947' and post 10 lines here. Or email it to nroets@gmail.com
- Only WinCE supports placing a marker on the map. -- Nic 08:28, 24 April 2009 (UTC)
- Thanks for your answer. Well, I use gpsfake for the test, here is the data:
- gpsfake-input:
- gpsfake: line 1: $GPGGA,011325,4812.7042,N,01619.6291,E,1,04,5.6,1173.6,M,34.5,M,,*7C
- gpsfake: line 2: $GPRMC,011325,A,4812.7042,N,01619.6291,E,74.5,188.1,010409,5,E,A*33
- gpsfake: line 3: $GPGGA,011327,4812.6627,N,01619.6233,E,1,04,5.6,1104.5,M,34.5,M,,*71
- gpsfake: line 4: $GPRMC,011327,A,4812.6627,N,01619.6233,E,75.0,185.4,010409,5,E,A*31
- telnet to gpsd:
- Trying 127.0.0.1...
- Connected to localhost.
- Escape character is '^]'.
- R
- GPSD,R=1
- $GPGGA,011325,4812.7042,N,01619.6291,E,1,04,5.6,1173.6,M,34.5,M,,*7C
- $GPRMC,011325,A,4812.7042,N,01619.6291,E,74.5,188.1,010409,5,E,A*33
- $GPGGA,011327,4812.6627,N,01619.6233,E,1,04,5.6,1104.5,M,34.5,M,,*71
- $GPRMC,011327,A,4812.6627,N,01619.6233,E,75.0,185.4,010409,5,E,A*31
- When I click on "Follow GPSr" in gosmore, gosmore says: "?lat=48,20000&lon=16,31667&zoom=17"
- The trackfile from gosmore:
- <trk>
- <trkseg>
- <trkpt lat="48,200000000" lon="16,316666667">
- <ele>1173,000</ele>
- </trkpt>
- <trkpt lat="48,200000000" lon="16,316666667">
- <ele>1104,000</ele>
- </trkpt>
- </trkseg>
- </trk>
- But the trackfile from viking has the correct data:
- type="track" name="REALTIME"
- type="trackpoint" latitude="48.211736" longitude="16.327152" altitude="1173.000000" unixtime="1238548405" extended="yes" speed="38.326000" course="188.099999" fix="3"
- type="trackpoint" latitude="48.211044" longitude="16.327055" altitude="1104.000000" unixtime="1238548407" extended="yes" speed="38.582999" course="185.400000" fix="3"
- This is very strange for me :-( - Thanks --Bananenfisch 00:04, 29 April 2009 (UTC)
- On my computer gosmore produced the correct results (below). One possibility is that atof() is in the math library and it was not linked in on your machine. Try adding '-lm' to the EXTRA variable in the Makefile.
- The other possibility is that your locale does not use '.' as the decimal sign. You should try running the application with 'LANG=C ./gosmore'.
- <trkpt lat="48.211736667" lon="16.327151667">
- <ele>1173.600</ele>
- </trkpt>
- <trkpt lat="48.211045000" lon="16.327055000">
-- Nic 22:41, 31 May 2009 (UTC)

Where to ask this
Is there a gosmore forum, mailing list or some other place where the development is going on? I'm constructing a web service to collect traffic messages and would like to get input from as many other navigation applications as possible, to make the service useful to as many of them as possible.
--MarcusWolschon 08:29, 20 April 2009 (UTC)
- I've been in contact with David Dean (who has done some Gosmore contributions) on the osm-dev IRC channel. Other than that, all contact has been with Nic Roets through email. --Lambertus 12:00, 20 April 2009 (UTC)

WinCE Gosmore installation
The procedure says: unzip the files onto an SD card, insert the card, then run gosm_arm.exe... But how do I run gosm_arm.exe (or any .exe, in fact) on a GPS device (a Medion or a Connex, for example)? The standard interface of those devices doesn't allow me to run any software on the SD card. (With a Connex, it seems that if I put any_prog.exe as \MobileNavigator\MobileNavigator.exe, it will run if I select 'Guide' in the main menu; I'll try 'gosmore' soon.) -- Xof 17:16, 1 August 2009 (UTC)
- Depends on the device. Many will run "\MobileNavigator\MobileNavigator.exe" if you tell it that your software is external. Most will allow you to change the registry if you connect it via USB to a computer with ActiveSync & CeRegEdit.exe -- Nic 17:53, 18 August 2009 (UTC)
- Under WinCE there is a file explorer (German "Datei-Explorer"). Open this application, select the SD card and the Gosmore exe, and double click on the exe. Now the application will start. Hope this helps to solve your problem. --Wst 18:25, 21 August 2009 (UTC)

Adding city name behind road name in search list?
The search box allows you to look for street names. If you look for a common street name which exists in several cities, the result list shows each entry but does not specify which city the entry belongs to. According to the program description, identical entries are ordered by distance from the current location, so the nearest is listed first, but that does not always help to select the right one. I would like to suggest adding the city name in capital letters after the road name.
For example: Mozartstrasse HAMBURG
On small devices there is limited space available on the screen; in this case just the first 3-5 letters of the city name (in capital letters) could be displayed. For example: Mozartstrasse HAM
Unfortunately I am not a programmer. Is this a lot of work to implement? Could somebody add this feature?
- In SVN there is now a new version where the search results are much more verbose (although the city names are not displayed as you are requesting), as well as many improvements (better rendering & 3D). Further improvements are being worked on, and the Windows ports will only be updated at the end of the process. Please be patient. -- Nic 18:33, 1 October 2009 (UTC)

Compiler (g++) warnings
I compiled gosmore today with the -pedantic flag on my Linux system (Gentoo, 64 bit, running gcc-4.3.4) and got some errors. I will try to submit patches for those problems, but it seemed best to have a list here. I consider clean compilation a really good thing, since it helps you spot errors when they are introduced! When compiling with -pedantic enabled I get the following errors/warnings:

libgosm.h:22: error: ISO C++ does not support 'long long'

C++ does not define the long long type. A better (not totally clean) solution would be to use the int64_t type.
- It's been a long time since I read the relevant docs, so it's good you tell me. ...

libgosm.cpp: At global scope:
libgosm.cpp:1045: warning: unused parameter 'w'

Trivial fix ... rullzer 11:53, 24 October 2009 (CET)
http://wiki.openstreetmap.org/wiki/Gosmore/Talk2008
Classic FDM and FDMEE Migration
SSN01, May 13, 2014 7:07 AM
All, we are planning to migrate Classic FDM to FDMEE, so I have a few questions. Can you give some suggestions/feedback?
1) Will the performance be improved if we move from Classic FDM to FDMEE with the same set of data & configuration?
2) We are using some scripting & APIs in Classic FDM; will all that functionality be supported in FDMEE?
3) Is any other risk/impact involved if we migrate to FDMEE?
Thanks, SS

1. Re: Classic FDM and FDMEE Migration
Francisco Amores, May 13, 2014 9:37 AM (in response to SSN01). 1 person found this helpful.
Hi, here you have my inputs.

1) Will the performance be improved if we move from Classic FDM to FDMEE with the same set of data & configuration?
Technically you will: just with FDMEE on middleware, using ODI, you will get better performance. But you need to take into consideration that if you have a poor design in FDM, and you don't review it, then you will have a poor design in FDMEE. Migration from FDM to FDMEE is a very good opportunity to review your Classic FDM design, as FDMEE adds new functionality that may simplify your current FDM app, like multidimensional mapping, SQL scripts, format masks for mappings, import formats for multi-period loads, data load rules, batch executions, etc.

2) We are using some scripting & APIs in Classic FDM; will all that functionality be supported in FDMEE?
FDMEE uses Jython for import scripts and mapping scripts. For custom and event scripts you can choose between Jython and VB. So if you have:
- import scripts -> rewrite to Jython is needed
- mapping scripts -> rewrite to Jython is needed, or you may replace them with multidimensional mappings or SQL script mappings
- scripts in logic accounts -> rewrite to Jython is needed
- scripts in validation rules -> rewrite to Jython is needed
- custom or event scripts -> Jython or VB
If you want to keep your old VB scripts, you will have to adjust them, as Classic FDM was using old VBScript (6.0) and FDMEE uses VBScript .NET. You have both APIs, for Jython and VB, in the admin guide. In addition you can see different examples. I would recommend having everything in Jython.

3) Is any other risk/impact involved if we migrate to FDMEE?
The most important point is that you make use of new functionality to improve your FDM application. Customizations pose the most important risk, as they will need to be re-implemented. For example, if you have customization in your database or, as I said, scripting... To me, the biggest risk is to repeat errors of the past due to not knowing the product and its functionality. I hope that helps. Regards

2. Re: Classic FDM and FDMEE Migration
SSN01, May 13, 2014 11:27 AM (in response to Francisco Amores)
Thanks for your suggestion. As FDMEE supports custom VB scripts, can we use the existing custom scripts (Classic FDM) in FDMEE? Do they require any modification? Thanks, SS

3. Re: Classic FDM and FDMEE Migration
Francisco Amores, May 13, 2014 12:35 PM (in response to SSN01)
They do require some modifications.
1. You need to declare input parameters, as the script will be executed with the cscript utility (you have sample code in the admin guide)
2. Some of the API functions/subs have been removed
The best thing would be to check which FDM API functions you are using and try to find them in the API section of the admin guide. Standard VB syntax will be the same. Note that if you migrate to the Linux platform, VBScripts will not work anymore.
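To give a feel for the kind of rewrite involved when moving mapping logic from VBScript to Jython, here is a small, purely illustrative sketch in Jython-compatible Python. The function name, signature and rules are hypothetical and are not the FDMEE script API (the real scripts work with the fdmContext/fdmAPI objects documented in the admin guide):

```python
# Hypothetical example only: a prefix-based account mapping rule of the kind
# a Classic FDM VBScript mapping might implement, rewritten as plain
# Jython-compatible Python. Names and rules are illustrative, not FDMEE API.

def map_account(source_account):
    """Return a target account for a source GL account (first match wins)."""
    rules = [
        ("1", "Assets"),       # accounts starting with 1
        ("2", "Liabilities"),  # accounts starting with 2
        ("4", "Revenue"),
        ("5", "Expenses"),
    ]
    for prefix, target in rules:
        if source_account.startswith(prefix):
            return target
    return "Unmapped"
```

In FDMEE, logic this simple could often be replaced entirely by multidimensional mappings or format masks, avoiding scripting altogether.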
https://community.oracle.com/thread/3559026
Counting Sort
Reading time: 20 minutes | Coding time: 7 minutes

Counting sort is an algorithm for sorting integers in linear time. It can perform better than other efficient algorithms like quicksort if the range of the input data is very small compared to the number of input items. It is a stable, non-comparison-based and non-recursive sort. It takes in a range of integers to be sorted. It uses the range of the integers (for example, the range of integers between 0-100) and counts the number of times each unique number appears in the unsorted input. It works by counting the number of objects having distinct key values (a kind of hashing), then doing some arithmetic to calculate the position of each object in the output sequence. In counting sort, the frequencies of the distinct elements of the array to be sorted are counted and stored in an auxiliary array, by mapping each value to an index of the auxiliary array.

Is counting sort a comparison-based sorting algorithm? No: it never compares two input elements with each other.

Algorithm
Let's assume that an array A of size N needs to be sorted.
- Step 1: Initialize the auxiliary array Aux[] to 0. (Note: the size of this array should be ≥ max(A[]) + 1, so that every value has an index.)
- Step 2: Traverse array A and store the count of occurrences of each element at the appropriate index of the Aux array; that is, execute Aux[A[i]]++ for each i, where i ranges over [0, N−1].
- Step 3: Initialize the empty array sortedA[]
- Step 4: Traverse array Aux and copy i into sortedA Aux[i] times, for each 0 ≤ i ≤ max(A[]).

Example
The operation of COUNTING-SORT on an input array A[1...8], where each element of A is a non-negative integer no larger than k = 5: take a new array B of the same length as the original; for each input element, use its value as an index into the count array, place the element at the position given by that count, and then increment the count at that index.

Complexity
The array A is traversed in O(N) time and the resulting sorted array is also computed in O(N) time.
Aux[] is traversed in O(K) time. Therefore, the overall time complexity of the counting sort algorithm is O(N+K).

Worst case time complexity: Θ(N+K)
Average case time complexity: Θ(N+K)
Best case time complexity: O(N+K)
Space complexity: O(N+K)

where N is the number of elements to be sorted and K is the range of the input (the number of buckets).

Important Points
- Counting sort works best if the range of the input integers to be sorted is less than the number of items to be sorted
- It is a stable, non-comparison-based and non-recursive sort.
- It assumes that the range of the input is known
- It is often used as a subroutine of another sorting algorithm, like radix sort
- Counting sort uses partial hashing to count the occurrence of each data object in O(1) time
- Counting sort can be extended to work for negative inputs as well
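The four steps described in the Algorithm section can be written out compactly; here is an illustrative sketch in Python:

```python
# Illustrative sketch of Steps 1-4 above: counting sort for
# non-negative integers, running in O(N + K) time.

def counting_sort(a):
    """Sort a list of non-negative integers."""
    if not a:
        return []
    aux = [0] * (max(a) + 1)     # Step 1: auxiliary count array, all zeros
    for x in a:                  # Step 2: count occurrences of each value
        aux[x] += 1
    sorted_a = []                # Step 3: empty output array
    for value, count in enumerate(aux):
        sorted_a.extend([value] * count)  # Step 4: copy value, count times
    return sorted_a
```

For sorting records by an integer key, where stability matters, the count array is first turned into prefix sums so that each element can be placed directly at its final index, as in the worked example with array B.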
https://iq.opengenus.org/counting-sort/
Red Balloon - How Does It Work?

Introduction
The carbon monoxide sensor detects high levels of CO gas concentration in the air. When the concentration reaches a high level (which we pre-set) the LED changes color from green to red.

Step 1: Getting the Components
Click on the magic link to circuito.io and the components will already be pre-selected for you: carbon monoxide sensor, RGB LED (common anode), Arduino Uno and a 4xAA battery case. You can use a different controller and power supply, but make sure you change the code accordingly. Apart from these components, you'll also need peripherals such as jumper wires, a few different resistors, a breadboard and a few other things, and they will all be listed for you in the BoM section of the reply.

Step 2: Assembling the Circuit
Once you have all the components, you can start putting the circuit together according to the step-by-step wiring guide in the circuito.io reply. In addition to the step-by-step guide, also make sure to:
1. Solder the MQ7 sensor to the breakout board. In the step-by-step guide the MQ7 sensor is currently not sitting directly on the breadboard. We recommend that at first you connect it directly to the board (as you can see in the image above). After you test with the code (in Step 3 - Coding) that everything is up and running, you can connect the MQ7 sensor to the ribbon as described below.
2. Solder a 4- or 6-wire ribbon to the MQ7 sensor. We used a 15-foot ribbon (approx. 5 meters).
3. You can use crimps at the tips of the ribbon to connect them to the breadboard. Generally we like to use these instead of soldering. You can buy a kit such as this one; it's super handy.

Step 3: Coding
- Download the code and extract it to your computer.
- Next, open it with the Arduino IDE, and make sure that the port and board (Arduino Uno) are selected properly, then upload the code to the Arduino.
- If the code is working, you need to replace the code you got from circuito.io with the code below, since this is the specific code for the project.
If you've never used Arduino before, follow the steps below:
- If the code is working, you need to replace the code you got from circuito.io with the code below, since this is the specific code for the project. If you've never used Arduino before, follow the steps below: Download this software, which can be used with any Arduino board. Refer to the Getting Started page for Installation instructions. #include "Global.h" void setup() { // Setup Serial which is useful for debugging // Use the Serial Monitor to view printed messages Serial.begin(9600); Serial.println("start"); } void loop() { int val = potentiometer.read(); val = map(val, 40, 120, 0, 255); Serial.print("pot:"); Serial.print(potentiometer.read()); Serial.print(" val:"); Serial.println(val); // Change the rgb colors with an interval of 500ms rgbLed.setRGB(255 - val, val, 0); delay(100); } Step 4: Buy a Balloon and You’re Ready to Go! Last but not least, find a place that sells helium balloons near your house. It doesn't have to be red :) Connect the MQ7 sensor to the balloon (zip-ties are the easiest way to do this), and start monitoring the air quality around you. We hope you’ll be pleasantly surprised. Step 5: Have Any Questions? Want to Start Your Own Project? We are more than happy to answer your questions. Just shoot us an email: hello@circuito.io or find us on facebook. If you already have an idea for a new project, you can start right away by submitting the basic components on circuito.io and we’ll do the rest. Don’t forget to share your creation on our facebook page. Enjoy Making!
http://www.instructables.com/id/Red-Balloon/
Author: Jo Walsh, University of Openness, Limehouse, London 2003-10-24

Experiences developing an RDF application server in perl have left more questions than answers, particularly in the area of HTTP-based query API, dereferencing techniques, and the serialisation of meta-statements (context) in an RDF/XML API. Here we outline the architecture and talk through the concerns raised, and where the implementation seems to match the state of the art or lag behind it. A specification is offered for a putative query language which takes some concerns into account; there is a short discussion about contexts and getting a meta model over a web API.

This web-based application server written in perl doesn't have a name. It began life as one half of mudlondon, an Instant Message bot written in jabber, that talked to its world state via a RESTful RDF/XML HTTP interface. After several code iterations, the backend now takes a generic approach to RDF model annotation and query. The current version was developed as an infomesh to provide data modelling and query facilities for a foafcorp-derived organisational network mapping project: themutemap. The key to mudlondon's structure was the separation of the model, and its RDF/XML interface, from the interface through which one interacts with it. In fact, the model decouples in at least two places: between the RDF model interface and the RDF/XML interface, and between the latter and the human-readable interface. The IM bot provides a 'conversational stateful interface' to the model. Originally, the HTTP backend was limited to a specific data model, with knowledge about ontologies hardcoded. The RDF graphs are stored in an RDBMS, currently supporting postgres or mysql. A Squish query subsystem translates queries into SQL. Each separate model has two tables, one of nodes, one of statements, including a timestamp and a 'silent quadruple' which indicates the context, or provenance, of a statement.
In the web interface the model is addressed with a uri substring: e.g. refers to the DMZ model. Now I hope to describe the thinking behind the interface, and the areas in which I think it adds to the state of the art or lags behind it. The storage and query API reflects a desire to make the RESTful interface as 'user-friendly' as possible to other software writers who may not care for the details. (A use case is to provide a single point of identity for e-commerce systems, subscription management, website registration: simple GET and POST calls for don't-need-to-know-don't-care, non-RDF-literate application developers.) Thus the interface provides many human-meaningful method names like 'addPerson' to create a new node with rdf:type foaf:Person and add predicate-object pairs to it; each method maps to an rdf:type currently hardcoded in the HTTP server. There is a generic addThing method to which one can post an rdf:type in qname or full URI form. A spider running over the top can POST uris directly to the interface using the learnModel method. This is also used in an initial 'bootstrap' process where each model is seeded with various useful or popular ontologies, including FOAF and dublin core. A very minimal set of OWL constraints - sameAs and InverseFunctionalProperty - is under development, especially for the use of annotated OWL schemas to semiautomate user interface drawing actions. As each new subject node is added to the data model via an add* method, it is given a URI which is used both within the triplestore and as an external identifier. The URI is returned to the client with a 201 Created status response. A GET request to that URI returns an RDF/XML model, along with a small portion of the graph that surrounds it - connections to it. A POST request to the same URI with one or more predicate-object pairs will add them as statements about the subject URI.
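As an illustration of the annotation interface just described, a client might build its requests as below. This is a sketch only: the host is a placeholder, and the parameter names are assumptions, since the exact wire syntax is not given here.

```python
# Illustrative client-side sketch of the REST annotation interface described
# in the text. The base URL and parameter names are invented placeholders.
from urllib.parse import urlencode

BASE = "http://example.org"  # placeholder host

def add_thing_request(model, rdf_type, pairs):
    """Build (method, url, body) for a generic addThing call."""
    url = "%s/%s/addThing" % (BASE, model)
    body = urlencode([("type", rdf_type)] + list(pairs))
    return ("POST", url, body)

def get_node_request(node_uri):
    """A plain GET of a node URI returns its RDF/XML neighbourhood."""
    return ("GET", node_uri, None)
```

The point of the design is that a non-RDF-literate client only ever sees ordinary GET and POST calls with form-encoded parameters; the triples are assembled server-side.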
Speculatively, a DELETE request will cause all statements with that URI as a subject or object to be retracted. I'm not sure about the ethics or desirability of allowing free retraction of statements, and have considered moving them to a separate model and trying to keep versions instead of throwing them away. The idea of versioned storage appeals a great deal. The query interface uses mysql or postgres text search to allow simple searching over the triplestore. The search method looks for a match in a node, then for statements that have that node as the object. Then all the subject URIs are picked out and all statements with that subject are returned, in RDF/XML, as part of the search result. You may wind up with a graph only a small amount of which is what you expected.

SELECT DISTINCT n2.data
FROM $Name{triple} t, $Name{node} n, $Name{node} n2
WHERE n.$Name{data} LIKE '%$text%'
  AND t.$Name{obj} = n.$Name{id}
  AND t.$Name{subj} = n2.$Name{id}

A sample search which returns a simple FOAF-based model of all Liberal Democrat members of UK Parliament. It would also be useful to be able to getClasses and getProperties that the model has 'learned' about. As a final remark here, I find it curious that retrieval over a web interface has been thought through in applications like joseki, but addition and emendation of statements, almost 'content management' functions, are as interesting in an application framework - though perhaps much simpler, they shouldn't be ill-considered near concepts like incremental smushing. When this codebase finally moved into being RDF-generic, the question of expressing and sending arbitrary queries loomed into its path, and is still looming. Perhaps the questions can be made clearest by breaking the problem into three parts - how best to send the query, how to run it, and how to return the results. In the same way as the joseki API, it is possible to GET an RDF/XML version of a model with a string containing SquishQL.
Unlike in joseki, there's no provision to POST a SquishQL query as part of a chunk of RDF/XML: I find this solution potentially aesthetically unsatisfactory - given this is the semantic web, I want the meaning of my query to be more generally machine accessible than a literal string in a statement, even if I also get statements that tell me how or where to parse RDQL. As it stands, a Squish query run against the SQL-based RDF model returns tabular data, not a graph. The rows can be replaced into the graph using variable substitution, but to an extent this seems like jumping through hoops. A little work has been done on an interface where you could post an RDF/XML document model with 'holes' in it using a Squish-like '?thing' syntax, but that just involved breaking the graph model down into a Squish query - even more hoops. This kind of query language doesn't seem strong enough for the model. At the least, queries need to have a temporal aspect; support for more logical constructs like 'OR' would be useful. (@@ I think damian steer has done some work extending Squish in this direction.) And the ideal of something that, when given a graph, returns a graph directly, still persists. An XSLT-like, path-based approach has been mooted: being able to 'shake' the graph down into a tree, holding it at a certain point. This kind of query syntax would work well with path-style URI addressing, and seems appropriate to the RESTful metaphor. It is an approach which I have not considered in depth and would like to hear more about. Ideally, I would like to be able to send full prolog questions to the interface, using something like Jan Wielemaker's rdf module for SWI-prolog to assert and retract statements in a model; to be able to send queries like the following:

codepiction(X,Y,Z) :- person(X,P1), person(Y,P2), depiction(P1,Z), depiction(P2,Z).
person(X,P) :- rdf(P,foaf:nick,X).
depiction(X,D) :- rdf(X,foaf:depiction,D).
path(X,X,X:nil).
path(X,Z,X:Y:P) :- codepiction(X,Y,_), path(Y,Z,Y:P).

path('zool','dmiles',P)?

I'd like to persuade prolog to talk to the triplestore. It might be able to talk directly to redland's dbm files, in an assert(Subj,Pred:Obj) / assert(Subj:Pred,Obj) style, but it would be a shame to miss out on what the API provides, and it would perhaps be a slight pain keeping prolog in sync with the DBM files and getting at it through a webserver. There are also considerations for distributed query, or a set of answers which become more or less confident over time. One wants to be able to get incomplete query results, and set a threshold for completeness or for confidence. Would it be possible to filter the model on the basis of a trust network view of it - as jen golbeck has experimented with TrustBot - so that the threshold for you seeing a statement is a function of your connectedness to the person who made it? Generally I am driven to think about rdf serialisation formats other than XML. An interface which returned n-triples would seem a nice easy answer. n3 is interesting, but it's hard to know strictly what subset to support, and what's useful for simple applications. XML is rather double-edged: it is stably familiar to people, and many may find that reassuring while others retreat in horror. Also worth considering here is the YAML data serialisation language; there is support for perl, ruby and python, and there is some curiosity in the perl community about such an idea. Untitled Language 1 is a posited RDF query language. It omits the SELECT statement that Squish and RDQL feature; instead of returning variables, it returns the whole graph model or "bucket o' triples". It aims to have a cleaner syntax than n3, but still be able to express logical constructs like 'OR'. It has the operators = != > < ~, and can, e.g., select time slices from a temporally annotated model. It preserves the 'using' syntax from Squish for namespaces.
- BNF for Untitled Language 1
- grammar for Parse::RecDescent parser for Untitled Language 1

As mentioned, the triplestore has support for contexts, or provenance - a fourth 'shadow' column in the triple table which contains a reference to a uri or bnode which is part of the model, usually a foaf:Person or a document. The direct interface to the database supports a whose_triple method, which takes a statement and reveals the graph surrounding the provenance of the statement. In the web interface this hasn't been taken properly into consideration. I am experimenting with a getProvenance method in the interface. How would you talk to it? You could POST it an rdf/xml model containing one statement and get back attribution for it in the same format, or send a getProvenance(subject => $subject) etc, which is the current implementation. It would be nice to be able to represent and share confidence through different triple stores. Also, are we expressing fully enough the logical relations about authorship, ownership, suggestion of content, and is that really within our power anyway? Essentially what we are returning looks like a metamodel, not part of the original model, which makes sense as a way to describe it. Context isn't part of the RDF model, whether it's implemented using indirection or as a shadow quadruple. At least the former enables us to express it easily in rdf/xml, but that feels like the wrong way about. The syntax for addressing the 'infomesh' transports fairly directly. http://[ host ]/[ model ]/ As redland's API does not (to my current knowledge) support a query language, translation of web-based methods into queries is limited in scope. findStatement and addStatement could be simply implemented. Is this the right level of detail for a web RDF application service?
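The shadow-column provenance idea can be sketched as a toy quad store. This is a hypothetical Java stand-in, not the actual triplestore code; only the whose_triple name is taken from the text above, everything else is an assumption:

```java
import java.util.ArrayList;
import java.util.List;

// Toy in-memory quad store: each triple carries a fourth 'shadow'
// context column referencing its provenance (a person or document URI).
public class QuadStore {
    static class Quad {
        final String s, p, o, context;
        Quad(String s, String p, String o, String context) {
            this.s = s; this.p = p; this.o = o; this.context = context;
        }
    }

    private final List<Quad> quads = new ArrayList<>();

    public void add(String s, String p, String o, String context) {
        quads.add(new Quad(s, p, o, context));
    }

    // Analogue of the whose_triple method described above: given a
    // statement, return every context it was asserted under.
    public List<String> whoseTriple(String s, String p, String o) {
        List<String> contexts = new ArrayList<>();
        for (Quad q : quads) {
            if (q.s.equals(s) && q.p.equals(p) && q.o.equals(o)) {
                contexts.add(q.context);
            }
        }
        return contexts;
    }
}
```

A getProvenance-style web method would then be a thin wrapper over such a lookup, returning the contexts as attribution statements.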
http://www.w3.org/2001/sw/Europe/events/20031113-storage/positions/walsh.html
Bug #2548 Linux, focus bugs and conflict between the key events of AWT and SWT Description There is a conflict between the key events of AWT and SWT using RCP 4.7.3 or RCP 2019-03 (4.11) and Java 1.7, 1.8, 1.10. It seems the bug is present since Java 1.7. It leads to some bugs related to focus management and Part activation. Also see: for workaround tests. Help asking: - Once an AWT component has gained the focus, the AWT Window never loses the focus and seems to capture all key events => then every SWT component that should trigger event from keys doesn't work (Text field, Table arrow keys, etc.) - Specificities selection bar chart => Banality spinner doesn't react to keys if the chart AWT component has gained the focus once - Progression => Query field doesn't react to keys if the chart AWT component has gained the focus once - CA, selection of lines in columns and rows tables doesn't work anymore - Browser Part activation doesn't work - TBD I made this minimal sample example: import java.awt.BorderLayout; import java.awt.Frame; import java.awt.Panel; import java.awt.TextField; import org.eclipse.swt.SWT; import org.eclipse.swt.awt.SWT_AWT; import org.eclipse.swt.layout.FillLayout; import org.eclipse.swt.widgets.Composite; import org.eclipse.swt.widgets.Display; import org.eclipse.swt.widgets.Shell; import org.eclipse.swt.widgets.Text; public class SWTAWTFocusBugsSnippet { public static void main(String[] args) { final Display display = new Display(); final Shell shell = new Shell(display); shell.setLayout(new FillLayout()); Composite composite = new Composite(shell, SWT.EMBEDDED); Frame frame = SWT_AWT.new_Frame(composite); Panel panel = new Panel(new BorderLayout()); frame.add(panel); panel.add(new TextField()); Text text = new Text(shell, SWT.BORDER); shell.setSize(200,70); shell.open(); while(!shell.isDisposed()) { if (!display.readAndDispatch()) { display.sleep(); } } display.dispose(); } } In this sample, once the AWT TextField has gained the focus, it 
becomes impossible to type in the SWT Text widget. It works well on Windows. Need to check on OS X.

Solution 0 / workaround
- set the AWT frame as non-focusable on Linux (and Mac?)
- it will restore the SWT key event management
- it will disable all key events in charts
- it will disable tooltips of chart entities

Solution 1
- stop using the SWT_AWT bridge
- use the JavaFX bridge
- use a full SWT charts engine, but no solution is as mature as JFreeChart, especially for interactive charts
- try to use the JavaFX bridge that embeds the AWT components
- TBD

Solution 2
- push the Oracle/SWT issues so they fix these problems
- check/find information about this issue in Java 11, 12 and the most recent Eclipse/SWT versions
- I didn't manage to test Java 11 in Eclipse 4.7.3

History
#1 Updated by Sebastien Jacquot almost 2 years ago
#2 Updated by Sebastien Jacquot almost 2 years ago
#3 Updated by Sebastien Jacquot almost 2 years ago
#4 Updated by Sebastien Jacquot almost 2 years ago
#5 Updated by Sebastien Jacquot almost 2 years ago
#6 Updated by Sebastien Jacquot almost 2 years ago
- % Done changed from 0 to 50
A Linux workaround that seems acceptable has been implemented in /org.txm.chartsengine.rcp/src/org/txm/chartsengine/rcp/swt/SwingChartComposite.java
- an AWT mouse listener on the AWT/Swing chart component calls Frame.setVisible(false) and Frame.setVisible(true) when the mouse exits the component. It frees the key event listener and the Frame focus, but the SWT application becomes "not active/gray on Ubuntu/moved to back"
- so SwingChartComposite.this.chartEditor.getShell().forceActive() is called just after the Frame.setVisible(true) to reactivate the SWT part
- it leads to a little flickering of the whole SWT Window that may not be visible on actual machines
- since the chart component is created only when a chart is created, a similar listener does the same process on the embedded frame root panel to manage the result editors that can be empty/have no chart, e.g.
Progression
#7 Updated by Sebastien Jacquot 12 months ago
- Target version changed from TXM 0.8.2 to TXM X.X
https://forge.cbp.ens-lyon.fr/redmine/issues/2548
LuviTools
Some useful Unity plugins, created by Thanut Panichyotai (@LuviKunG)

Version History

January 3rd 2020
Add Animation Event Viewer Editor.
- Multi-select Animation Clips, right-click, and select the 'LuviKunG > Animation Event Viewer' menu to open the editor window.
From now on, I'll try to rearrange all tools and separate them by Assembly Definition. That will make them easier to add to or remove from a Unity project, with faster compile times.

December 20th 2019
- Add StringInput Attribute
- Defines a string to act as a selectable in the inspector.

October 8th 2019
- Add new Better Button
- Just a Unity UI Button with onUp & onDown Unity Events.
- Add URL
- A tool that can parse a query into a Dictionary and add, remove or modify parameters.
- Add Loop
- Like a List, but able to get the current, next or previous object index.
- Add Parallax
- A tool that helps a transform do parallax.
- Update StringSceneAttribute
- Add default constructor.
- Update GUIDebug
- Add word wrap to label.
- Remove old Loop
- Remove Post Process Build WebGL because it's already included in Build Pipeline for WebGL
- Relocate Limit

September 26th 2019
- Add Legacy GUI Debug.
- It's a smaller version of LuviConsole.
- I used it while debugging WebGL on mobile.
- Add Post Process Build WebGL that removes a warning on mobile.
- Still need more improvement! (like UI window in Unity) So wait for me!
- Remove these plugins from this repository, because they have already been created as UPM repositories. (Click on the name to go to its UPM repository)

September 17th 2019
- Add new LuviSocketIO (Client side only)
- Use it with Node.js SocketIO.
- Still in development and improvement.
August 17th 2019
- Update Camera Aspect Ratio
- Add Width mode to lock the camera aspect by width (instead of the default height).
- Update Inspector GUI.
- Fix NullReferenceException when the target camera is null while selecting the component's game object.

August 7th 2019
- Mipmap Bias
- Remove menu from 'Window > Mip Map Bias'.
- The user can single-select, multi-select or folder-select, then use the 'Assets > LuviKunG > Open Mipmap Bias Window' menu to open the editor window with the selected content.
- Remove float input to set bias because it's useless.
- Now using an enum popup to select the value of the mipmap bias instead.
- Deprecated Camera Aspect
- Update Camera Aspect Ratio
- Include an option to select the aspect ratio mode between 'Expand' or 'Shrink', similar to the UnityUI Canvas.
- No longer executes in edit mode.
- By selecting the component's game object, it will update the aspect ratio automatically.
- Deprecated Combine Assets
- Add new Assets Management
- List GUI with icon type.
- Able to combine assets.
- Able to rename sub assets.
- Able to show or hide sub assets.
- Positioning
- Remove two options in PivotPosition because they are the same as the old option with inverted size.

August 6th 2019
- New Scripting Define Symbols Editor
- Simply add/remove/reorder scripting define symbol elements.
- Separate options for every build target group.
- Build Pipeline for Android
- Add an editor option to increase the bundle version number every time the user performs a build.

August 1st 2019
- Big day update!
- Android Management
- New Keyboard Input option to choose whether to hide input.
- Attribute
- Fix EnumFlag not being able to select the choice with the value ~0.
- Add LayerAttribute to the int type for layer selection in the Unity Inspector.
- New Build Pipeline for Android
- Set up your build settings first (including the keystore)
- Choose the folder to build the *.apk
- Build with one click!
- Add new Capture Screenshot (for editor)
- Included with supersampling.
- Requires the IntPopup attribute.
- Extension
- IListExtension no longer requires the LuviKunG.List namespace.
- FPS Meter
- Add FPSMeterLegacyGUI to easily and quickly check FPS without using uGUI.
- Pool
- Add Pool class. It's a dynamic pool that can instantiate a prefab within a target transform, includes an instantiate event, and requires the IPool interface to check whether a member is active.
- Thai Font Adjuster Pack
- Fix a null string breaking the adjuster class.
- Add TextThaiGlyph, which acts like the Text of UnityEngine.UI but includes the Thai character adjuster string.
- Add ThaiCharacterReplacer & ThaiCharacterReplacerTMP to help put Thai characters into Text or TextMeshProUGUI components.
- UnityUI
- Add ButtonToggle to act like a button + toggle that can switch between two groups of GameObjects showing the active/inactive state.
- User Interface Management
- Change the UserInterfaceBehaviour class to handle its own GameObject reference.
- Fix UserInterfaceSound having a wrong namespace and reference.
- Utilities
- Fix Loop handling a wrong initial index.
- Add FloatRange and IntRange to define ranges of numbers.
- LuviConsole
- Fix changes not being saved via the Unity Inspector.
- Simplified many scripts.
- Remove unused namespaces.

July 10th 2019
- Add Extended Unity UI 2018.4 for Unity UI.
- Add new extension StringBuilderRichTextExtension.
- Helps to add color, bold and italic while using System.Text.StringBuilder.
- Add new User Interface Management components.
- Really helpful for managing the Android back button
- Uses focusing and ordering.
- Update LuviConsole to version 2.4.1
- New Command Group has been added. Use it to group your commands.
- New Execute Command Immediately has been added. Will execute the command instantly when pressing the command button.
- Remove internal Rich Text display for Log. But…
- Requires the StringBuilderRichTextExtension extension to display rich text in the Log.
- New Command Log to display your executed commands in the Log.
- Update IListExtension
- Add new ListIteration<T> for Each<T> to execute on each member.

June 28th 2019
- Deprecated Enchant List but…
- Add new List in Utilities.
- Loop loops over the members of a list using Next or Prev.
- Limit is a list that, when adding a new member, removes the first-in member if it has reached its limit.
- Add RandomString for random strings.
- Add Setter for making extensions that receive a member and set it back without caching.
- Add StringScene to turn a string field into a scene selection in the Inspector.
- Update StringPath, StringPopup, IntPopup to show an error if the attribute is used on the wrong type.

June 26th 2019
- Add ThaiCharacterReplacerTMP in Thai Font Adjuster Pack for Text Mesh Pro for typing Thai characters.

June 25th 2019
- Add DateTimeExtension
- Print ISO 8601 format with DateTime.Now.ISO8601();

June 20th 2019
- Update LuviConsole to version 2.3.7
- WebGL support.
- Update LuviConsole to version 2.4.0
- WebGL support.
- New drag scroll view on the log. (all platforms)
- New LuviCommand syntax.
- Now you can use a string in your command by using quotes "Your string here" to get the full string without it being separated by spaces.
- Fix a bug where executing with double quotes caused an error.
- Real-time update of window size and orientation.
- Requires starting with using LuviKunG;
- Add LuviConsoleException to throw errors during command execution.
- Update Monokai2019.vstheme
- Includes various Monokai color windows.

June 17th 2019
- Add Monokai2019.vstheme for Visual Studio Community 2019

June 10th 2019
- Add IListExtension, which includes two useful methods.
- Shuffle() to shuffle all members of a list.
- Combination(int sample) to get all possible combinations of a sample in a list.
- Deprecated 2 classes, because Unity 2018 or later supports C# 7.0 and already includes these extension methods.
EnumExtension of bool HasFlag(enum flags) and bool TryParse<TEnum>(string s)
StringBuilderExtension of void Clear()
- Change RichTextHelper into RichTextExtension, which uses namespace LuviKunG.RichText;

June 5th 2019
- Update Gacha class.
- Add Clear() to clear all gacha elements.
- Fix an error from using BitStrap
- Changed to for loops instead of foreach loops.

June 3rd 2019
- Update LuviConsole to version 2.3.6
- Upgraded compatibility with Unity versions 5, 2018 and 2019.
- Rearrange the inspector.
- Add a new Unity instantiate menu at GameObject > LuviKunG > LuviConsole.

May 27th 2019
- Add MonoBehaviourUI class. This helps to get the RectTransform from components in an easier way.
- Move Yield Instruction to a deprecated state.

April 17th 2019
- Add StringPath attribute.
- Changes a string property into a path selection window.

March 8th 2019
- Add LayerCoroutine class.

February 28th 2019
- Add a Visual Studio Community theme. Requires Color Theme Editor for Visual Studio 2017 and Color Themes for Visual Studio
- Monokai (Dark)

February 21st 2019
- Add Camera Sorting component.
- Use it to set a custom transparent sorting axis per camera.
- Update Camera Aspect component.
- Update full README.md on previous update.

February 9th 2019
- Add and update many new components.
- Deprecated components.
- Apply Selected Prefabs
- No longer supported because it's obsolete in Unity 2019 and greater.
- CameraPPU
- No longer supported because there is a new component instead.
- Game Configuration
- No longer supported because there is a new component instead.
- and others… No longer supported because they suck.
- Add Yield Instruction component.
WaitForCondition: use in a coroutine with yield return new WaitForCondition(isReady);
WaitForTimeSpan: use in a coroutine with yield return new WaitForTimeSpan(timespanDuration)
- Add Android Management component.
- Use it to quickly adjust primary Android settings such as target framerate, multitouch or screen sleep timeout.
- Add Attribute.
EnumFlags displays a list of flags in the Unity Inspector by declaring the attribute [EnumFlags] on an enum parameter.
IntPopup displays a selectable int value in the Unity Inspector by declaring the attribute [IntPopup(ArrayOfName,ArrayOfInt)]
NotNull displays the field labeled in red when the parameter is null in the Unity Inspector.
ReadOnly displays the property in the Unity Inspector as read-only. (cannot edit)
StringPopup is the same as IntPopup but for strings.
- Add Benchmark component.
- Displays a score based on the device's framerate. (UnityUI)
- Add FPS Meter component.
- Displays the framerate in real time. (UnityUI)
- Add Input Conductor component.
- This component is required to hold whether any input value is being pressed (from keyboard or controller), because some plugins like the Unity Oculus Rift controller are buggy when the controller is disconnected while an input is pressed, and they do not update the correct value.
- Add Combine Assets Window. (Unity Editor Window)
- Use it to combine any assets in the Project.
- Add Extensions.
- StringBuilderExtension
Clear(); clears all chars of the string value in a StringBuilder.
var sb = new StringBuilder();
...
sb.Clear();
- EnumExtension
HasFlag(Enum flags); for checking flags of an enum flag type.
[System.Flags]
public enum SomeEnumFlags { Hi = 1, Hello = 2, Bye = 4, SeeYou = 8 }
private SomeEnumFlags emoteFlags = SomeEnumFlags.Hi;
...
if (emoteFlags.HasFlag(SomeEnumFlags.Hi | SomeEnumFlags.Hello)) { // Yes it has! }
- RichTextHelper makes it easy to add rich text formatting in code.
RichTextHelper.Color(yourString, yourColor); to set color.
RichTextHelper.Bold(yourString); to set bold.
RichTextHelper.Italic(yourString); to set italic.
RichTextHelper.Size(yourString, size); to set size. (int)
- Add LuviKunG's LocaleCore Plugins.
- This is my original component. Use it to switch language translation text/assets in real time.
- I'll write instructions later (because it's too large).
- Add Mip Map Bias Window. (Unity Editor Window)
- Use it to set the mipmap bias in the editor.
- Add Positioning struct.
- It calculates positions using a custom size.
PivotPosition sets a list of positions.
CirclePosition sets circle positions with a radius.
GridPosition sets positions as a grid.
- Add Thai Font Adjuster Pack.
- This requires this Unity asset to make Thai fonts display correctly via GPOS and GSUB rules.
- Uses Google Fonts + FontForge to batch-process and apply fonts into GPOS and GSUB format.
- I'll write instructions later (because it's too large).

March 14th 2018
- Improve CacheBehaviour for Unity v.2017.3.

May 8th 2017
- Add LocalizationTools/LocalizationExportCSV.

March 26th 2017
- Add Name (Minimum String Compare Class).

March 19th 2017
- Add Text (Localization Class).

December 21st 2016
- Add LuviVungleAds.

October 4th 2016
- Move obsolete scripts into the 'Obsolete Scripts' folder.
- Add CacheBehaviour.
- Add Loop List.
- Add Limit List.
- Add Sector List.
- Add Gacha.
- Add Modified Canvas Group Inspector.
- Add Pool Object.

January 29th 2016
- Add LuviUpdate.
- Modified LuviTools to use a namespace.
- Modified LuviBehaviour to use a namespace.
- Modified CameraPPU to be able to run in the Inspector.
- Remove ILuviUpdate in Lesson.

Older Version
- Add Inspector Label.
- Add Inspector Divider.
- Add CameraPPU.
- Add Modified Mesh Renderer Inspector.
- Add Do not destroy onload.
- Add Game Configuration.
- Add Game UI Manager.
- Add LuviBehavior.
- Add LuviFacebookAPI.
- Add LuviJSON.
- Add LuviParse.
- Add LuviPushwoosh.
- Add Prefab Scene Manager.
- Add Singleton.
- Add Time Stamp.
- Add Version Control + MiniJSONDecode.
- Add LuviConsole.
- Add Apply Selected Prefabs.
- Add LuviTools.
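The Loop described in the changelog (a list with current/next/previous member access that wraps around) can be sketched as a hypothetical analogue. The actual plugin is C# for Unity and its source is not shown here, so the names and wrap-around behavior below are assumptions based only on the changelog text, written in Java for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Circular list: next()/prev() wrap around the ends,
// current() reports the member at the current index.
public class Loop<T> {
    private final List<T> items = new ArrayList<>();
    private int index = 0;

    public void add(T item) { items.add(item); }

    public T current() { return items.get(index); }

    public T next() {
        index = (index + 1) % items.size();
        return items.get(index);
    }

    public T prev() {
        index = (index - 1 + items.size()) % items.size();
        return items.get(index);
    }
}
```

The modulo arithmetic is what distinguishes this from a plain List: stepping past either end comes back around instead of throwing an out-of-bounds error.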
https://unitylist.com/p/56y/Luvi-Tools
jGuru Forums

Posted By: Anonymous
Posted On: Monday, March 3, 2003 12:11 PM

I have a subclass of DispatchAction to dispatch various actions to corresponding methods in the same action class from multiple submit buttons on a JSP page. I have created a subclass ( public class OperatorDispatchAction extends DispatchAction ) of DispatchAction and my struts-config.xml is configured properly with the parameter="method" option for this action. On the JSP page, I have the proper action="xxxdispatch.do" for the form's action attribute. My buttons look like this: I also have the proper methods in my Action class (OperatorDispatchAction), "save" and "update", and the default method "execute". Now, when I run my JSP page and click on either the update or save button, it always calls the execute method and returns out from it, as I have a return mapping.findForward("success") statement at the end, without ever calling my save or update method, which I expect when I click on the corresponding button. I suspect that the buttons' attributes need to have extra information, or do I have to modify the default execute method so that it can delegate control to the save or update method? Please help.

Re: I have a subclass of DispatchAction to dispatch various actions to corresponding methods
Posted By: AlessandroA_Garbagnati
Posted On: Monday, March 3, 2003 12:38 PM
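A common cause of this symptom, offered here as an assumption rather than as the original reply: a DispatchAction subclass must not override execute(). DispatchAction's own execute() is what reads the request parameter named by parameter="method" and reflectively dispatches to the matching method; overriding it short-circuits that dispatch. The submit buttons also need to send that parameter, something like:

```jsp
<%-- Hypothetical sketch: each button submits a "method" parameter whose
     value matches the name of the target method in OperatorDispatchAction --%>
<html:form action="/xxxdispatch">
    <html:submit
    <html:submit
</html:form>
```

Removing the overridden execute() from the subclass and letting DispatchAction handle dispatch should then route clicks to save or update based on the submitted value.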
http://www.jguru.com/forums/view.jsp?EID=1062550
IBM® WebSphere® Portal action events are generated during the first phase, or action phase, of portlet processing. After action events are processed, the second phase of portlet processing, the render phase, begins, during which each portlet is asked to render its view. Only a portlet that receives an action request will go through the action processing phase. Form submission, for example, is one kind of action event usually triggered by interaction with the user. Portlets can also invoke actions in other portlets on the page; these other portlets will go through the action phase as well. Portlets that do not receive action requests will not go through the action phase. All portlets, however, go through the render phase. There can be times when a portlet will need to update its view even when the user is interacting only with other portlets. A Struts portlet that requires updated or refreshed data each time the page renders can take advantage of Render Struts actions, which implement the new com.ibm.portal.struts.action.IStrutsPrepareRender interface in the Struts Portlet Framework in WebSphere Portal. Render Struts actions are invoked during the render phase to gather data so that it can refresh the view of a portlet. They can also enable render time updates of bean values. The execution of Render Struts actions is therefore shifted from the portlet's action phase to the render phase. This article explains how to write Render Struts actions and how portlet processing has been changed to accommodate these actions. The article includes examples written for both the IBM legacy container and the JSR 168 container.

About the examples used in this article
The examples, figures, and Struts Portlet Framework sample code in this article are based on the IBM legacy container of IBM WebSphere Portal V5.1. The concepts presented in this article, however, also apply to the JSR 168 container.
The examples included in the accompanying download file include support for both containers. (The Struts Portlet Framework included in these examples is a preview of the WebSphere Portal V5.1.0.1 release.) For more information on the examples used in this article, see About the download files. In general, Struts actions are processed during the action phase of WebSphere Portal by WpsRequestProcessor in the Struts Portlet Framework. Struts actions are processed until a non-Struts action is found. An IViewCommand object is then created to encapsulate the necessary information for displaying the page. In a typical case: - A Struts action populates a form bean and forwards to a JSP to display a page. - In the Struts Portlet Framework, a WpsStrutsViewJspCommand is created and saves the path to the JSP and the form bean (if request-scoped). - The IViewCommand is created and saved during the action phase of WebSphere Portal. - The saved IViewCommand can then be executed each time the portlet is asked to render itself. - When the WpsStrutsViewJspCommand executes, it displays the page based on the path to the JSP and the data saved in the form bean. This process may be suitable for some Struts portlet applications, but others may have a need to refresh the data in the form bean whenever the portal page refreshes. This might happen when an action event occurs for any portlet on the page, or when the user leaves the page and returns later. The Struts Portlet Framework addresses this need with Render Struts actions, actions that can execute whenever the portlet is asked to render so that its data can be updated with each rendering. The Render Struts action is synonymous with the notion of a page action, which is simply responsible for preparing the page for display. A page action does not rely on data input from the user. It relies instead on the requirements for displaying the page and ensures that information needed to display the page is available. 
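The typical flow above can be modeled with a small stand-in sketch. This is hypothetical Java: the types here only mimic the described behavior of an IViewCommand being saved during the action phase and re-executed on every render; they are not the WebSphere Portal API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the described flow: the action phase produces a view
// command, which is saved and then re-executed on every render.
public class PortletFlow {
    interface ViewCommand { String execute(); }

    private ViewCommand savedCommand;
    private final List<String> renderedPages = new ArrayList<>();

    // Action phase: run the "Struts action" and save the resulting
    // command, capturing the JSP path and the form bean data.
    public void actionPhase(String jspPath, String formBeanData) {
        savedCommand = () -> jspPath + " [" + formBeanData + "]";
    }

    // Render phase: re-execute the saved command; the form bean data
    // is whatever the action phase captured, even if it is stale now.
    public String renderPhase() {
        String page = savedCommand.execute();
        renderedPages.add(page);
        return page;
    }
}
```

Rendering twice without a new action phase returns the same captured data both times, which is exactly the staleness problem Render Struts actions are meant to solve.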
Page actions need to be separated from actions that control business transactions. Provided that the action is not performing a business transaction itself, using page actions is an acceptable strategy. Struts actions executed during the action phase have the flexibility to perform business logic. They can be used to populate form beans, change state, send messages to other portlets, participate with property broker, and so on. Render Struts actions executed during the render phase can only update objects that will be used to create the next page of the user interface. They can read back end data and update form beans. Struts actions can be daisy-chained to Render Struts actions so that they work together. The action phase Struts actions would be processed during the action phase, then the Render Struts actions would be processed during the render phase. The Struts Portlet Framework request processor handles executing the actions during the appropriate phase. In the IBM legacy container, the response object is not available during the WebSphere Portal action phase. Because the execution of Render Struts actions is shifted to the render phase, the response object is available to the portlet. Actions needing access to the response object in the IBM legacy container may also be candidates for Render Struts actions. The Struts Portlet Framework LegacyTransformation portlet is an example of this use case.

When to use a Render Struts action
Since a Render Struts action executes during the render phase of WebSphere Portal, there are limitations regarding what it can do. A Render Struts action can read back end data, update the ActionForm bean to be displayed by a JSP, and statically interact with data access objects. It cannot change portlet state or write to a back end system, and it cannot interact with other portlets. Render Struts actions can access the response object.
Render Struts actions can provide good solutions for the following portlet challenges:
- The data in your portlet does not update when you are interacting with other portlets on the page. For example, you have a Struts portlet which displays the market value of selected stocks. The only time the stock values will update is if you cause the action phase of the portlet to be invoked through the pressing of a button. In this situation, it's preferable if the information refreshes whenever the page renders.
- The portlet is not displaying the correct information when the browser's refresh button is pressed.
- You need access to the response object, but get an error when you try to write to it in your action.

Example 1: A simple clock
Our first example is a simple clock portlet, shown below in Figure 1.

Figure 1. A simple Struts clock portlet.

The portlet consists of a welcome file, index.jsp, which contains a forward: <logic:forward

In the Struts configuration file, the forward name is found:

Listing 1.

The action, GetTimeAction, determines the current time and updates the value in the ClockBean form bean. It then forwards to time.jsp, which displays the time, as well as a button to update the time. Clicking the Update Time button results in a call to the execute method of GetTimeAction, and time.jsp displays the current time. The source for GetTimeAction is below. The time variable of ClockBean is updated with the current time.

Listing 2.

Time does not update on page refresh
As seen in the above code, the clock's time will be updated in the form bean every time the execute method of GetTimeAction is called. When the portlet initially displays, the execute method is called when the welcome file forwards to this action. When the Update Time button is pressed, the GetTimeAction updates the time during the action phase. The portlet displays the time when it renders.
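As a stand-in for the GetTimeAction source, here is a sketch of the described behavior in plain Java. The Struts base classes are replaced with hypothetical stand-in types; the real GetTimeAction would extend the Struts Action class and return an ActionForward:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Stand-in form bean: holds the time string displayed by time.jsp.
class ClockBean {
    private String time;
    public String getTime() { return time; }
    public void setTime(String time) { this.time = time; }
}

// Stand-in for GetTimeAction: its execute() updates the bean with the
// current time; in real Struts it would then forward to time.jsp.
public class GetTimeAction {
    public String execute(ClockBean form) {
        SimpleDateFormat fmt = new SimpleDateFormat("HH:mm:ss");
        form.setTime(fmt.format(new Date()));
        return "success"; // name of the forward that maps to time.jsp
    }
}
```

The essential point is that the bean's time only changes when execute() runs, which is exactly why the displayed time goes stale when only the render phase occurs.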
If there are multiple portlets on the page, then only the portlet that the user is interacting with will go through the action phase; all other portlets will only go through the render phase. The render phase may not always be preceded by the action phase; in this case the clock would not update the time. While the user is interacting with other portlets on the page, action events are not occurring for the clock portlet. The execute method of GetTimeAction is never called, and the time does not update. GetTimeAction simply performs the task of obtaining the current time and updating the form bean. It is a candidate for a Render Struts action because it updates information required for rendering the view. It does not change portlet state. In order to specify a Struts action as a Render Struts action, the action would implement the com.ibm.portal.struts.action.IStrutsPrepareRender interface. This change will enable the portlet to properly refresh while in the render phase.

The following sections will be helpful to those familiar with the model-view-controller (MVC) design pattern and, specifically, the controller components of Struts.

org.apache.struts.action.RequestProcessor
For Struts to be used within the portal environment, certain aspects of the framework are overridden. The Struts Portlet Framework provides its own WpsRequestProcessor, which extends the org.apache.struts.action.RequestProcessor. The process method of the RequestProcessor is below. The methods in bold are overridden in the Struts Portlet Framework WpsRequestProcessor. Notice each of the opportunities where this function can exit or return. The processMapping() and processActionPerform() methods of the WpsRequestProcessor are affected by Render Struts actions.

Listing 3.
To support Render Struts actions, modifications that were made to the Struts Portlet Framework include:
- IStrutsPrepareRender -- a new interface added to public API, since V5.1
- WpsStrutsViewActionCommand -- a new IViewCommand class added to public API, since V5.1
- WpsStrutsConstants.STRUTS_VIEW_ACTION -- a new attribute in V5.1
- WpsRequestProcessor.processMapping -- implementation modified in V5.1
- WpsRequestProcessor.processActionPerform -- implementation modified in V5.1
- WpsRequestProcessor.doInclude -- implementation modified in V5.1

IStrutsPrepareRender
com.ibm.wps.struts.action.IStrutsPrepareRender is an interface that is detected by the WpsRequestProcessor. An action that implements IStrutsPrepareRender will be executed during the render phase of WebSphere Portal. An action implementing IStrutsPrepareRender may forward to a JSP to display the portlet, or forward to another action. Keep in mind that if a Render Struts action forwards to another action(s), then the subsequent action(s) will also be executed during the render phase, so they should be Render Struts actions as well. If a Render Struts action forwards to a non-Render Struts action, a warning will be logged. Some operations, such as property broker, are not allowed during the render phase and will result in exceptions being thrown if there is an attempt to perform them during this phase.

WpsStrutsViewActionCommand
The WpsStrutsViewActionCommand class is a new IViewCommand class which represents Render Struts actions. This class saves the required information so that the command can be rendered at a later time, and extends WpsStrutsViewCommand. When the WpsRequestProcessor detects a Render Struts action, the ViewCommandFactory will create a WpsStrutsViewActionCommand command object. (Details on how a Render Struts action is detected will be discussed later.)
When the WpsStrutsViewActionCommand is called upon to execute during the render phase, it calls WpsRequestProcessor.processNewActionUri so that the Render Struts action can be processed and its execute method called.

WpsStrutsConstants.STRUTS_VIEW_ACTION

This constant was added to aid in the detection of a Render Struts action. The process method of org.apache.struts.action.RequestProcessor is called twice when a Render Struts action is detected. The first time through the process method, the request attribute is set, and processMapping is interrupted and returns null, which causes the process method to return. The second time through the process method, the attribute is already set, processMapping continues by returning the mapping, and the process method of the request processor is allowed to continue.

WpsRequestProcessor.processMapping

Represented in pseudocode, this method does the following:

Listing 4.

WpsRequestProcessor.processActionPerform

processActionPerform is responsible for calling the execute method of the action. First it needs to determine what type of action it is, so that it can pass the appropriate parameters:
- If the action extends com.ibm.wps.struts.action.StrutsAction and implements IStrutsPrepareRender: return (((StrutsAction) action).execute(mapping, form, portletRequest, portletResponse)); The PortletResponse is passed.
- If the action extends StrutsAction and is not a Render Struts action: return (((StrutsAction) action).execute(mapping, form, portletRequest)); The PortletResponse is not passed, because the response object is unavailable during the action phase of the portal.
- If the action extends org.apache.struts.action.Action: If the action is executed during the action phase, then the response object is a pseudo-response object and has limited use. If the action is executed during the render phase, then the response object is available for the action.
return (action.execute(mapping, form, request, response));

WpsRequestProcessor.doInclude

This method was modified to add a test for a Render Struts action. If the URI passed to this method is not an action, and the request attribute WpsStrutsConstants.STRUTS_VIEW_ACTION is set, include the JSP. If the URI is an action, process the URI. If not an action, create a new IViewCommand for the URI.

The Struts Portlet Framework processes Render Struts actions and non-Render Struts actions differently. The next sections use the clock portlet introduced above as Example 1 to illustrate what happens in the Struts Portlet Framework when different scenarios are encountered for both types of actions. (The classes and methods in these sections are documented in the public API of the Struts Portlet Framework.) This exercise is presented as different scenarios to show the difference in request processing:

When GetTimeAction is not a Render Struts action:
- Scenario 1a: View the portlet on a page
- Scenario 1b: Refresh the page
- Scenario 1c: Click Update Time button

When GetTimeAction is a Render Struts action:
- Scenario 2a: View the portlet on a page
- Scenario 2b: Refresh the page
- Scenario 2c: Click Update Time button

As mentioned, the Struts Portlet Framework request processing logic takes a different path when Render Struts actions are detected. The tables below provide the specifics:
- Compare Scenarios 1a and 2a to see where request processing begins to take a different path.
- Compare Scenarios 1b and 2b to see how and why the time does not change in 1b, but does get updated in 2b.
- Compare Scenarios 1c and 2c to see the differing series of events which lead to the same outcome: the time is updated.
- Notice when the new IViewCommands (for example, WpsStrutsViewJspCommand, WpsStrutsViewActionCommand) are created.

Scenario 1a. View the portlet on a page, GetTimeAction is not a Render Struts action

Notes on Scenario 1a: GetTimeAction is executed.
A WpsStrutsViewJspCommand is created to represent time.jsp during the render phase, and is executed during the render phase.

Scenario 1b. Refresh the page, GetTimeAction is not a Render Struts action

Notes on Scenario 1b: The saved IViewCommand for view mode, representing /time.jsp, is executed during the render phase. The value of the clock's time in the form remains unchanged.

Scenario 1c. Click the Update Time button to generate an action event, GetTimeAction is not a Render Struts action

Notes on Scenario 1c: The action to update the clock's time is executed during the action phase. The action forwards to time.jsp, and an IViewCommand representing time.jsp is created during the action phase. The render phase follows and the IViewCommand is executed.

Scenario 2a. View the portlet on a page, GetTimeAction is a Render Struts action

Notes on Scenario 2a: Since it forwards to a Render Struts action, a WpsStrutsViewActionCommand is created during the render phase. The WpsStrutsViewActionCommand executes, and calls GetTimeAction to update the clock's time.

Scenario 2b. Refresh the page, GetTimeAction is a Render Struts action

Notes on Scenario 2b: The saved IViewCommand for view mode, WpsStrutsViewActionCommand, is executed during the render phase. GetTimeAction executes and updates the clock's time.

Scenario 2c. Click the Update Time button to generate an action event, GetTimeAction is a Render Struts action

Notes on Scenario 2c: The action to update the clock's time is a Render Struts action. A WpsStrutsViewActionCommand is created during the action phase. The render phase follows and the IViewCommand is executed. GetTimeAction executes and updates the clock's time.

Example 2: XSL transformation

The response object is not available during the action phase of WebSphere Portal.
The IStrutsPrepareRender interface enables implementing a Struts action that will be executed in the render phase of WebSphere Portal, and that therefore has a response object it can write to when it executes. The Struts Portlet Framework LegacyTransformation portlet demonstrates using an action that implements the IStrutsPrepareRender interface to perform an XSL transformation. The results of the transformation are written to the response object, and the portlet displays the results.

Figure 2. Struts XSL transformation portlet

The portlet's initial page, index.jsp, contains a link to the Struts Main Configuration Tour:

Listing 5.

When the user clicks the link, the request URI is used to find a match in the ActionMappings, shown below:

Listing 6.

The action specifies ApplyTransformAction as the object type. This class will process the request and return an ActionForward, indicating where control is to be passed. TransformMapping is the class name of the ActionMapping subclass to use for this action mapping object.

Listing 7.

TransformMapping has four private member variables. The values for these variables are supplied in the Struts configuration using the <set-property> tag. The ApplyTransformAction class is a Render Struts action so that it can write to the response object:

public class ApplyTransformAction extends Action implements IStrutsPrepareRender

The action class contains a method signature for the execute method in which the TransformMapping is passed:

Listing 8.

The action will apply the stylesheetUri to the xmlDocument and the result will be displayed in the portlet. The code below demonstrates how the result is written to the response. Since the portlet is using the response to display the results, the action returns null.

Listing 9.

The action also creates a link in the resulting markup to return to the portlet's initial page. The properties returnUri and returnUriLabel are used for this purpose.
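A hedged sketch of the pattern just described — cast the mapping to TransformMapping, write the transformation result to the response, and return null — might look like the following. The applyStylesheet() helper and the getter names on TransformMapping are hypothetical stand-ins, not the article's actual listings; only the overall shape follows the description above.

```java
// Hypothetical sketch of a Render Struts action that writes directly
// to the response instead of forwarding to a JSP.
public class ApplyTransformAction extends Action implements IStrutsPrepareRender {

    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        // The custom ActionMapping subclass carries the values supplied
        // via <set-property> in the Struts configuration.
        TransformMapping transformMapping = (TransformMapping) mapping;

        // applyStylesheet() is a hypothetical helper that applies the
        // stylesheetUri to the xmlDocument and returns the markup.
        String result = applyStylesheet(transformMapping.getXmlDocument(),
                transformMapping.getStylesheetUri());

        // Because this is a Render Struts action, it runs during the
        // render phase and the response object is available to write to.
        response.getWriter().println(result);

        // Returning null signals that the response has already been
        // generated; no forward is necessary.
        return null;
    }
}
```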
For more details on this example, the sample's code and a readme file are available within the WAR file in the download file included with this article.

The IStrutsPrepareRender interface is a powerful new feature in the IBM Struts Portlet Framework that enables Struts portlet developers to easily write Struts actions that will be executed during the render phase of IBM WebSphere Portal. This article explained how to create Render Struts actions and the changes made to the Struts Portlet Framework to accommodate these actions. The article also offered scenarios and code samples to illustrate these actions for both the IBM legacy container and the JSR 168 container.

There may be slight differences in the Struts Portlet Framework packages or class names, depending on which container the portlet is written for: the IBM legacy container or the JSR 168 container. Javadoc is available for all Struts Portlet Framework classes referenced in this article in the <WPS_HOME>/doc/Javadoc directory of WebSphere Portal V5.1, or in the downloadable ZIP file in the WebSphere Portal and Lotus Workplace Catalog (see Resources).

Portlets running on WebSphere Portal V4.2.x and later, using an older version of the Struts Portlet Framework, can be migrated to use a new version that includes support for Render Struts actions. Both WebSphere Portal V5.1 and the WebSphere Portal and Lotus® Workplace Catalog contain a version of the Struts Portlet Framework which supports Render Struts actions. The current catalog version, Struts Portlet Framework 5.0.3, contains support for the IBM legacy container only. The Struts Portlet Framework included with WebSphere Portal V5.1 contains support for both the IBM legacy container and the JSR 168 container. The examples included with this article in the download file include support for both containers. The Struts Portlet Framework included in these examples is a preview of the WebSphere Portal V5.1.0.1 release.
Each of the samples contains the requirements for building a Struts portlet application, including a collection of JAR files, TLD files, and the org.apache.commons.logging.LogFactory file. These files are stored in the WEB-INF/lib, WEB-INF/tld, and META-INF/services directories, respectively, of a Struts portlet application.

There are currently three sample portlets for the IBM legacy container which use Render Struts actions:
- Struts Portlet FrameworkLegacyClock.war
- Struts Portlet FrameworkLegacyStockQuote.war
- Struts Portlet FrameworkLegacyTransformation.war

The corresponding samples for the JSR 168 container are:
- Struts Portlet FrameworkStandardClock.war
- Struts Portlet FrameworkStandardStockQuote.war
- Struts Portlet FrameworkStandardTransformation.war

All source code is supplied within each code sample.

Information about download methods
- Download the Struts Portlet Framework from the WebSphere Portal and Lotus Workplace Catalog. (NavCode is 1WP10003N.) The downloadable ZIP file contains samples, javadoc, and additional documentation for the Struts Portlet Framework.
- Javadoc for Struts Portlet Framework classes is also available in the WebSphere Portal javadoc, in the <WPS_HOME>/doc/Javadoc/api_docs directory of WebSphere Portal Server 5.1.
- WebSphere Portal product documentation for general information on WebSphere Portal or the Struts Portlet Framework.
- Official Struts home page.
http://www.ibm.com/developerworks/websphere/techjournal/0504_pixley/0504_pixley.html
How can we check if a specific string occurs multiple times in another string in Java?

You can find whether a String contains a specified sequence of characters using any of the following methods:

The indexOf() method − The indexOf() method of the String class accepts a string value and finds the (starting) index of it in the current String and returns it. This method returns -1 if it doesn't find the given string in the current one.

The contains() method − The contains() method of the String class accepts a sequence of characters and verifies whether it exists in the current String. If found, it returns true; otherwise it returns false.

In addition to these, you can also use the split() method of the String class. This method accepts a String value representing a delimiter, splits the current String based on the given delimiter and returns a String array containing the tokens of the String. You can split the string into an array of words using this method and compare each word with the required word manually.

Example

The following Java example reads the contents of a file into a String, accepts a word from the user and prints the number of times the given word occurs in the String (file).
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class StringOccurrence {
   public static String fileToString(String filePath) {
      Scanner sc = null;
      String input = null;
      // Initialize the buffer up front so it is never null, even if
      // opening the file fails.
      StringBuffer sb = new StringBuffer();
      try {
         sc = new Scanner(new File(filePath));
         while (sc.hasNextLine()) {
            input = sc.nextLine();
            sb.append(" " + input);
         }
      } catch (Exception ex) {
         ex.printStackTrace();
      }
      System.out.println("Contents of the file: ");
      System.out.println(sb);
      return sb.toString();
   }

   public static void main(String args[]) throws FileNotFoundException {
      // Reading the word to be found from the user
      String filePath = "D://sampleData.txt";
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the word to be found");
      String word = sc.next();
      boolean flag = false;
      int count = 0;
      String text = fileToString(filePath);
      String textArray[] = text.split(" ");
      for (int i = 0; i < textArray.length; i++) {
         if (textArray[i].equals(word)) {
            flag = true;
            count = count + 1;
         }
      }
      if (flag) {
         System.out.println("Number of occurrences is: " + count);
      } else {
         System.out.println("File does not contain the specified word");
      }
   }
}

Output

Enter the word to be found
Readers
Contents of the file:
Number of occurrences is: 4
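Note that the split()-based approach above matches only exact space-separated tokens. As an alternative sketch, the indexOf() method mentioned earlier can count occurrences of any substring (including overlapping ones), without tokenizing the text first:

```java
public class SubstringCounter {
    // Counts how many times 'word' occurs in 'text' by repeatedly
    // calling indexOf() and resuming the search one character past
    // each match (which also counts overlapping occurrences).
    public static int countOccurrences(String text, String word) {
        int count = 0;
        int index = text.indexOf(word);
        while (index != -1) {
            count++;
            index = text.indexOf(word, index + 1);
        }
        return count;
    }

    public static void main(String[] args) {
        String text = "the cat sat on the mat near the door";
        System.out.println(countOccurrences(text, "the")); // prints 3
    }
}
```

Unlike the split() version, this counts substrings rather than whole words, so "the" would also be found inside "theory"; choose whichever behavior fits the task.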
https://www.tutorialspoint.com/how-can-we-check-if-specific-string-occurs-multiple-times-in-another-string-in-java
Multithreaded Telnet Server - Chess Game Example

This sample server demonstrates a server that can handle multiple TCP/IP connections. It also shows how to make a simple telnet server that could be used for things like administrating a server, making a chat room, or even creating a MUD.

Reader comments:
- fatal error C1083: Cannot open include file: 'iostream.h': No such file or directory -- on newer compilers, replace the old-style #include <iostream.h> with #include <iostream> and add using namespace std;.
- Replace #include <llist.h> with #include "llist.h" to avoid a build error.
- I think NULL != INVALID_SOCKET.
- This code is great; it took a few changes (including fixing precompiled-header compile error C1010) to build under VC6, but otherwise it works well.
https://www.codeguru.com/cpp/i-n/network/article.php/c2501/Multithreaded-Telnet-Server--Chess-Game-Example.htm
Using Attribute Routing in ASP.NET MVC

An Example Scenario

Let's assume that you are building a blog engine that stores and displays blog posts. Further, let's suppose that the blog post data is stored in a database named BlogDb in a table BlogPosts. The ADO.NET entity data model for the BlogPosts table is shown below:

The ADO.NET entity data model for the BlogPosts table

The BlogPosts table contains only four columns, namely PostID, Title, Content and PublishDate (of course, a real world blog engine would contain many other tables and columns). PostID is an identity column, Title and Content are nvarchar columns, and PublishDate is a datetime column. The ASP.NET MVC application that you wish to create is supposed to display the blog posts residing in this table. Under the default configuration you would have requested a blog post with a URL like this:

In the above URL, Blog is the name of the controller, Show is the name of an action method and 10 is the PostID of the blog post to display. The Show() action method is shown below:

public class BlogController : Controller
{
    public ActionResult Show(int id)
    {
        BlogDbEntities db = new BlogDbEntities();
        BlogPost post = db.BlogPosts.Find(id);
        return View(post);
    }
}

As you can see, the Blog controller contains a Show() action method that takes an id parameter. Inside, the Show() action method retrieves a BlogPost entity for the specified id and passes it to the Show view as its model. The Show view simply displays the post in the browser:

@model AttributeRoutingDemo.Models.BlogPost
<!DOCTYPE html>
<html>
<head>
    <title>ShowPost</title>
</head>
<body>
    <h1>@Model.Title</h1>
    <p>@Model.Content</p>
    <hr />
    <div>Published on : @Model.PublishDate</div>
</body>
</html>

Where is the default route in this example?
If you open the RouteConfig.cs file located in the App_Start folder you will find this route definition:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Here, the Default route definition uses the {controller} and {action} parameters to map the URL to a controller and its action respectively. The default values for controller and action are specified to be Home and Index respectively.

Defining a Simple Route

In the previous section you developed a simple ASP.NET MVC application that displays a blog post. The default route used in the application was defined at the RouteConfig.cs level. Now, let's use attribute routing to achieve the same effect. Open RouteConfig.cs and add the following call to the RegisterRoutes() method:

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    routes.MapMvcAttributeRoutes();
}

The above code calls the MapMvcAttributeRoutes() method of the RouteCollection so that attribute routing is enabled for the application. Next, open BlogController.cs and add the following code:

[Route("{action=Show}/{id?}")]
public class BlogController : Controller
{
    ...
}

The above code uses the [Route] attribute on top of the BlogController class to define the default route. The [Route] attribute specifies the template of the route. The template indicates that the action route parameter has a default value of Show. The id parameter is optional, as indicated by the ? character. After making these changes, if you run the application it should run as expected even though the routes.MapRoute() call has been removed from the RouteConfig.cs file.

Defining a Custom Route

Now, let's assume that you wish to generate URLs of the following form:

In the above URL, the year, month and day (of the publication of a post) and the PostID of a blog post are embedded in the URL itself.
This URL is to be mapped to the Blog controller and its Show() action method. To accomplish this task, add the [Route] attribute to the Show() action method as shown below:

[Route("{year}/{month}/{day}/{postid}")]
public ActionResult Show(int year, int month, int day, int postid)
{
    BlogDbEntities db = new BlogDbEntities();
    BlogPost post = db.BlogPosts.Find(postid);
    string view = "";
    if (post == null)
    {
        ViewBag.Message = "Invalid Blog Post ID!";
        view = "Error";
    }
    else
    {
        DateTime dt = new DateTime(year, month, day);
        if (dt != post.PublishDate.Value)
        {
            ViewBag.Message = "Invalid Blog Post Date!";
            view = "Error";
        }
        else
        {
            view = "Show";
        }
    }
    return View(view, post);
}

This time the [Route] attribute is placed on top of the Show() action method and defines four route parameters - year, month, day and postid. The Show() action method now takes four parameters corresponding to the route parameters. This way the values of year, month, day and postid can be received in the Show() method. Inside, the Show() method retrieves the BlogPost entity for the specified PostID and also constructs a DateTime instance based on the values of year, month and day. If there is no blog post associated with the postid supplied in the URL, an error message is stored in the ViewBag. On the same lines, if the publication date specified in the URL doesn't match the actual publication date, an error message is stored in the ViewBag. If there is any error, the Error view is displayed in the browser; otherwise the Show view is displayed.

The following figure shows a sample run of the application with the newly defined route:

A sample run of the application

Notice how the URL reflects the newly defined route. If you try to enter a nonexistent PostID (or PublishDate) you will get an error as shown:

Invalid blog post ID

Route Prefixes

In the previous example, the route had four route parameters - values that change based on the blog post being accessed.
You can also specify a static route prefix for a route. For example, you may want to have URLs in the following form:

In the above URL, myblog is a route prefix and remains static no matter which blog post is being displayed. You can add a route prefix to your routes in two ways - defining it in the route template OR defining it using the [RoutePrefix] attribute. Let's see the first technique mentioned above. Modify the route template as shown below:

[Route("myblog/{year}/{month}/{day}/{postid}")]
public ActionResult Show(int year, int month, int day, int postid)
{
    ...
}

As you can see, the route template now includes the route prefix - myblog - followed by the four route parameters as before. Henceforth, your URLs will be of the format shown below:

The route prefix - myblog

To set a route prefix for all the action methods you can use the [RoutePrefix] attribute on top of the controller as shown below:

[RoutePrefix("myblog")]
public class BlogController : Controller
{
    ...
}

Once you decorate the controller with the [RoutePrefix] attribute, all the routes defined on the action methods automatically assume this route prefix. Thus the Show() action method needs to define only a route like this:

[Route("{year}/{month}/{day}/{postid}")]
public ActionResult Show(int year, int month, int day, int postid)
{
    ...
}

Since the route prefix is defined using the [RoutePrefix] attribute, the route template doesn't include it. What if you wish to deviate from the route prefix defined using the [RoutePrefix] attribute? You can do so by ~ qualifying the route template. Here is how you can do that:

[Route("~/myposts/{year}/{month}/{day}/{postid}")]
public ActionResult Show(int year, int month, int day, int postid)
{
    ...
}

The above code shows the [Route] attribute defining a route template that begins with ~/myposts. Doing so will bypass the route prefix defined by the [RoutePrefix] attribute and the new route prefix (myposts in this case) will be used.
Route Constraints

The URLs can always be tampered with by the end user. What if a user enters string values in place of year, month and day? What if a user specifies some invalid value, say 100, for month? Obviously, somewhere in the code you will get an exception. However, the URL is successfully "accepted" by your code and only then is an exception thrown. Wouldn't it be nice to detect such violations before control reaches your code? Luckily, route constraints allow you to do just that. Using route constraints you can place criteria on the route parameter values. Let's see how route constraints can be placed for our example:

[Route("{year:int:minlength(4):maxlength(4)}/{month:int:min(1):max(12)}/{day:int:range(1,31)}/{postid}")]
public ActionResult ShowPostConstraint(int year, int month, int day, int postid)
{
    ...
}

The above code shows a route that makes use of route constraints. Route constraints are added to a route parameter by appending them with a : character. Thus the above code uses the following route constraints:
- :int
- :minlength(n)
- :maxlength(n)
- :min(n)
- :max(n)
- :range(n,m)

The :int route constraint indicates that the parameter under consideration must contain an integer value. The min(n) and max(n) route constraints place a minimum and a maximum, respectively, on the value of a parameter. The minlength(n) and maxlength(n) route constraints ensure that the string length of a parameter value is at least, or at most, the specified number of characters. Finally, the range(n,m) route constraint ensures that a value falls within the specified minimum and maximum. There are many other route constraints not covered in the above example and you can have a quick look at them here.
Invaid value error As you can see the web server is throwing 404 - Not Found error indicating that the control didn't reach the Show() action method. Summary ASP.NET MVC 5 introduced attribute routing that simplifies the routing mechanism in MVC applications. The [Route] attribute provided by the attribute routing can be used to define a route template. If used on action methods the [Route] attribute defines a route that lands a matching request to the action method under consideration. If used on controller it can use the {action} parameter to map a request with an action. You can also define route prefixes using the [RoutePrefix] attribute or embedding the route prefix in the route template. Finally, you can also apply constraints to the route parameters so that they contain values meeting certain criteria. There are no comments yet. Be the first to comment!
http://www.codeguru.com/csharp/.net/net_asp/mvc/using-attribute-routing-in-asp.net-mvc.htm
CC-MAIN-2014-35
refinedweb
1,691
63.09
One of the most versatile and useful additions to the C# language in version 6 is the null conditional operator. As I've been using C# 6 in my projects, I'm finding more and more scenarios in which this operator is the simplest and clearest way to express my intent. Ask yourself how much of your code must check a variable against the null value. Chances are, it's a lot of code. (If not, I'd worry about the quality of your codebase.) In every one of those null checks, the null conditional operator may help you to write cleaner, more concise code. We all want our code to be as clear and concise as possible, so let's explore this feature. Null Conditional Operator Syntax The null conditional operator (?.) is colloquially referred to as the "Elvis operator" because of its resemblance to a pair of dark eyes under a large quiff of hair. The null conditional is a form of a member access operator (the .). Here's a simplified explanation for the null conditional operator: The expression A?.B evaluates to B if the left operand (A) is non-null; otherwise, it evaluates to null. Many more details fully define the behavior: - The type of the expression A?.B is the type of B, in cases where B is a reference type. If B is a value type, the expression A?.B is the nullable type that wraps the underlying value type represented by B. - The specification for the feature mandates that A be evaluated no more than once. - The null conditional operator short-circuits, which means that you can chain multiple ?. operators, knowing that the first null encountered prevents the remaining (rightmost) components of the expression from being evaluated. Let's look at some examples to explain those behaviors. Consider this simplified Person class: public class Person { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } } Assume that p represents a person. 
Consider these two statements: var name = p?.FirstName; var age = p?.Age; The variable name is a string. The value of name depends on the value of p. If p is null, name is null. If p is not null, name is the value of p.FirstName. Note that p.FirstName may be null even when p is not. The variable age is an int? (which is another way of specifying a Nullable<int>). As with name, the value of age depends on the value of p. If p is null, age is an int? with no value. If p is non-null, age is the wrapped value of p.Age. That's the basics. The power of this feature come from all the scenarios where this feature enables cleaner code. Code Cleanup with the Null Conditional Operator Suppose people is a variable that represents an IList<Person>. Now, we have a couple of levels of member access to navigate, and one of those levels uses the indexer syntax ([ ]). We could write this statement: var thisName = people?[3]?.FirstName; The ?[] syntax has the same semantics as the ?. operator: It's how you access the indexer on an array, or a class that implements an indexer. The rules for its behavior are the same. If people is null, thisName is assigned the value null. If people[3] is null, thisName is assigned the value null. Otherwise, thisName is assigned the value of people[3].FirstName. However, if people is not null, but has fewer than four elements, accessing people[3] will still throw an OutOfRangeException. In the earlier example, I used the null conditional operator on both member accesses. That's a typical pattern because the null conditional operator short-circuits. The evaluation proceeds from left to right, and it stops when the expression evaluates to null. Let's look at a second example. 
Consider this enhancement (shown in bold) to the Person class so that it contains a reference to a person's spouse: public class Person { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public Person Spouse { get; set; } } You would retrieve the spouse's name as follows: var spouseName = p?.Spouse?.FirstName; Semantically, this is roughly equivalent to the following: var spouseName = (p == null) ? null : (p.Spouse == null) ? null : p.Spouse.FirstName; or, in a more verbose form: var spouseName = default(string); if (p != null) { if (p.Spouse != null) { spouseName = p.Spouse.FirstName; } } This example shows how much cleaner code becomes by using the null conditional operator. The more lengthy form is quite a bit more verbose. While this example used the ?. operator on each member access, that's not required. You can freely mix the null conditional operator with normal member access. If the above assignment were used in a routine where p had already validated to be non-null, you could assign the spouse's name as follows: var spouseName = p.Spouse?.FirstName; Or, if a particular scenario will be called only using people that are married, you can assume the Spouse property will never be null: var spouseName = p?.Spouse.FirstName; When you mix the null conditional operator with the traditional member access operator, the resulting expression will return null if the left operand of ?. evaluates to null, and throw a NullReferenceException if the left operand of ?. evaluates to null. Remember that the short-circuiting still applies, so p?.Spouse.FirstName returns null when p is null, whereas p.Spouse?.FirstName throws a NullReferenceException when p is null. Other Scenarios There are a couple more interesting scenarios that ?. enables. I've often used it for raising events. A typical scenario is when a type supports INotifyPropertyChanged. 
Let's expand the Person class to support this interface, and raise the PropertyChanged event whenever one of the properties changes. Here is how I would implement the FirstName property:

public string FirstName
{
    get { return firstName; }
    set
    {
        if (value != firstName)
        {
            firstName = value;
            PropertyChanged?.Invoke(this,
                new PropertyChangedEventArgs(nameof(FirstName)));
        }
    }
}
private string firstName;

Examine the setter's last statement carefully. I'm also using the new nameof operator. (I'll cover that in more detail in a later article.) This line uses the null conditional operator to raise the PropertyChanged event only if code has registered a handler on that event. It would be nice if I could put the ? directly before the invocation, but that would lead to syntactic ambiguities, so the C# 6 team disallowed this syntax. That's why I'm explicitly using the Invoke method on the System.Delegate class to invoke the event handler. Astute readers may be wondering if this usage is thread-safe. In earlier versions of C#, we would write this construct as follows:

var handler = PropertyChanged;
if (handler != null)
{
    handler(this, new PropertyChangedEventArgs("FirstName"));
}

We would capture the current value of the event handler, and then test that value and invoke the handler if it was not null. The null conditional operator does that same work for us. It evaluates the left operand of the ?. operator only once, storing the result in a temporary variable. In this construct, that's important for thread safety. It's also important in many other scenarios, as I describe shortly. Let's return to this example, with a small change:

var spouseName = GetPerson()?.Spouse?.FirstName;

Notice that the variable p has been replaced by a method call. That method call may have side effects or performance implications. For example, suppose GetPerson() makes a database call to find the current user. Earlier, I translated that expression to a longer version using if statements.
The actual translation is more like the following code:

var spouseName = default(string);
var p = GetPerson();
if (p != null)
{
    var pSpouse = p.Spouse;
    if (pSpouse != null)
    {
        spouseName = pSpouse.FirstName;
    }
}

Notice that GetPerson() is called only once. Also, if GetPerson() returns a non-null object, GetPerson().Spouse is evaluated only once (through the temporary variable pSpouse). The result of this work is that you can use the null conditional operator in scenarios that reference return values from property accessors, indexers, or method access without worrying about possible side effects. The event-handling scenario is certainly the most common delegate usage for ?., but it isn't the only one. We can create filters that handle logging based on a delegate type:

public class Logger
{
    private Func<Severity, bool> Publish;

    public void GenerateLog(Severity severity, string message)
    {
        if (Publish?.Invoke(severity) ?? true)
        {
            SaveMessage(severity, message);
        }
    }
}

This portion of a Logger class uses the Publish delegate to determine whether a message should be written to the log. It uses the ?. operator to safely check an optional delegate that filters messages. It also leverages the existing ?? operator so that if the Publish delegate is null, all messages are published. It's syntactic sugar of the sweetest kind. Finally, there is one other scenario in which the null conditional operator comes in quite handy: variables that may implement an interface. This usage is particularly useful with IDisposable. When I create libraries, I often create generic methods or classes that create and use objects. Those objects, depending on the type, may or may not implement IDisposable.
The following code shows a quick way to call Dispose() on an object only if it implements IDisposable:

var thing = new TFoo();
// later
(thing as IDisposable)?.Dispose();

In practice, I've only used this idiom when I create generic classes that create objects of the types specified by their type parameters.

Some Initial Guidance on Working with the Null Conditional Operator

I've been very aggressive in updating existing code bases with this feature because the new syntax is so much more concise and clear. I've replaced any number of null checks with the null conditional operator. If I combine it with the null coalescing operator (??), I can often replace several lines of code with a single expression. In the process, I've also found bugs that have lingered in a code base. As I described earlier in this article, the code generated by the ?. operator is carefully constructed to evaluate the left operand only once. I've found that handwritten algorithms may not be so carefully managed. Because the replacement can change code behavior, it does require adding tests to make sure that no other code relies on the existing hand-coded algorithm. Overall, though, I've aggressively reviewed classes and replaced code to use the idioms shown in this article. This usage has reduced code size, reduced bug counts, and made my code more readable.
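As a fuller illustration of that idiom, here is a hypothetical generic helper (the Temporary.Use name and shape are mine, not from the article) that creates an object, hands it to the caller, and disposes it afterward only if its type parameter happens to implement IDisposable:

```csharp
using System;

public static class Temporary
{
    // Creates a TFoo, passes it to the caller's delegate, and disposes it
    // afterward only when TFoo implements IDisposable.
    public static TResult Use<TFoo, TResult>(Func<TFoo, TResult> work)
        where TFoo : new()
    {
        var thing = new TFoo();
        try
        {
            return work(thing);
        }
        finally
        {
            // No-op when TFoo does not implement IDisposable.
            (thing as IDisposable)?.Dispose();
        }
    }
}
```

A call such as Temporary.Use<System.IO.MemoryStream, long>(s => s.Length) would dispose the stream on the way out, while the same call with a non-disposable type simply skips the cleanup.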
UPDATE! Thanks to a comment by Alan Suffolk I fixed my script a little bit, just moving the end = time.clock() and time_taken = end - start after the cur.execute(query), because that's exactly when the SAP HANA query ends…you can see the new processing time in the image below…

My good friend Pakdi Decnud gave me a great idea while we were having lunch the very same day of the SAP CodeJam Montreal event. Pakdi told me…"Why don't you make a comparison between SAP HANA and, let's say…MongoDB?"…I thought that it was of course a great idea…so yesterday I started exploring MongoDB…so you may ask yourselves…"Why then are you talking about MySQL and PostgreSQL?" Easy answer…and here are my thoughts…

- I really don't get MongoDB…the whole No-SQL thing is really alien to me…
- MongoDB is "document" based, meaning that you create collections of documents, not databases or tables…
- MongoDB doesn't support inner joins, and aggregates need a framework that's weirder than MongoDB itself…
- MongoDB is not meant for enterprise applications

That's why I decided to make a little shift and grab the two databases most used by start-ups and developers…MySQL and PostgreSQL. For this blog, I wanted to have a lot of information…so as always, I grabbed my beloved Python and created a little script to generate 1 million records for each of two tables. One script per table. The structure of the tables is as follows…

DOC_HEADER
DOC_DETAIL

And here are the scripts to generate the 1 million records in a nice .CSV file. With the two files ready, I uploaded them to MySQL, PostgreSQL and SAP HANA. To measure the speed, I created three Python scripts using…yes…again Bottle… The basic idea is to join the two tables and select the Document_Id, Year, Area and the sum of Amount.
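The generator and measurement scripts referenced above survive only as screenshots in the original post. Below is a minimal, self-contained sketch of the same flow. The table and column names (DOC_HEADER, DOC_DETAIL, DOCUMENT_ID, YEAR, AREA, AMOUNT) and the value ranges are inferred from the text, not copied from the screenshots; sqlite3 stands in for MySQL/PostgreSQL/SAP HANA, and time.perf_counter() replaces the now-deprecated time.clock():

```python
import csv
import random
import sqlite3
import time

def generate_files(num_records, header_file="doc_header.csv", detail_file="doc_detail.csv"):
    """Write one DOC_HEADER row and one DOC_DETAIL row per document id."""
    years = [2010, 2011, 2012]
    areas = ["North", "South", "East", "West"]
    with open(header_file, "w", newline="") as h, open(detail_file, "w", newline="") as d:
        hw, dw = csv.writer(h), csv.writer(d)
        for doc_id in range(1, num_records + 1):
            hw.writerow([doc_id, random.choice(years), random.choice(areas)])
            dw.writerow([doc_id, random.randint(1, 1000)])

QUERY = """
SELECT h.DOCUMENT_ID, h.YEAR, h.AREA, SUM(d.AMOUNT)
FROM DOC_HEADER AS h
INNER JOIN DOC_DETAIL AS d ON d.DOCUMENT_ID = h.DOCUMENT_ID
GROUP BY h.DOCUMENT_ID, h.YEAR, h.AREA
"""

def load_and_time(header_file="doc_header.csv", detail_file="doc_detail.csv"):
    """Load both CSV files into an in-memory database and time the aggregate."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE DOC_HEADER (DOCUMENT_ID INT, YEAR INT, AREA TEXT)")
    conn.execute("CREATE TABLE DOC_DETAIL (DOCUMENT_ID INT, AMOUNT INT)")
    with open(header_file, newline="") as h:
        conn.executemany("INSERT INTO DOC_HEADER VALUES (?, ?, ?)", csv.reader(h))
    with open(detail_file, newline="") as d:
        conn.executemany("INSERT INTO DOC_DETAIL VALUES (?, ?)", csv.reader(d))
    start = time.perf_counter()
    rows = conn.execute(QUERY).fetchall()
    taken = time.perf_counter() - start
    return rows, taken
```

Swapping the sqlite3 connection for a MySQL, psycopg2, or SAP HANA DB-API connection leaves the timing logic unchanged, which is the point of the comparison.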
Let’s start with the MySQL Script… I let the script run…and after more than one hour…I simply got bored and interrupt the process… So, I continue with PostgreSQL… This time…I was lucky… Out of 2 millions records, PostgreSQL managed to aggregate the amount field and generate 7669 records in 36 seconds…not bad at all… For SAP HANA, I decided to take fully advantage of the Calculation Views, so I create the following… I joined both tables, used a projection, applied the aggregation and specified the result…then I wrote this Python script… After the execution…I couldn’t be happier…here’s the result… SAP HANA managed the same 2 million records…generate the same 7669 aggregated records in only 18 seconds…that’s 50% faster than PostgreSQL and…well…let’s only say…way faster than MySQL… Now…tell me that SAP HANA is not the coolest and fastest Database around…I dare you 🙂 By doing that little fix on my Python Script for SAP HANA…the new processing time, without generating the Bottle table is… Great comparison Alvero. Wow, only twice as fast? I would have expected 1.8 seconds for HANA at least. Perhaps I have drunk too much KoolAid, but it is a tad more than twice the price. What hardware were mysql and PostgreSQL running on? What if you installed them on an SSD ? What about MongoDB or couchDB v HANA? It is sold as being able to handle unstructured data also. You don’t show the CREATE TABLE queries – did you index the primary keys? There is always much more that what meets the eye in these types of comparisons but this is excellent and much food for thought. Nigel: It’s only twice faster because it’s a plain selection…no optimization done 🙂 and I’m not an SAP HANA expert…so I’m sure a lot of people can do it better than me 😉 Both MySQL and PostreSQL are running on my laptop…8GB RAM, SSD, Windows 7, 2,8Ghz. 
SAP HANA is running on Amazon Web Services… In the beginning of the blog I explained why I didn't use MongoDB…I just can't understand it 🙁 Same goes for CouchDB and all the No-SQL gang… I didn't use any CREATE TABLE statements…as I used the graphical interfaces…no primary keys or indices for any of the 3 DB's…wanted to keep it as primitive as possible… I agree with you completely…that's why I used "friendly comparison" in the title…what I did can't be considered a professional benchmark…but it's still fun to see it and it was fun to do it 😉 Greetings, Blag.

I thought there was nothing that Blag didn't understand that Blag didn't want to understand! 🙂

There's always a first time for everything 😉 Greetings, Blag.

Blag, you should really do some research on NoSQL and MongoDB, it's a lot of fun. 🙂 Cheers, Pierre

Pierre: If I can find a "No-SQL for dummies" then I will think about it 😉 Greetings, Blag.

Ok…now this is funny…I just went to my Google Reader and found this… Free MongoDB for Python Developers course starting in January…time to sign up 😛 Greetings, Blag.

Thanks for the link, I think I'll register. Cheers, Pierre

Cool! It will be more fun if I'm not the only one there 🙂 They also have one for Java and DBA's. Greetings, Blag.

Hi Blag, if you still have some ABAP knowledge 😉 , you can try my APIs for Cheers, Uwe

Uwe: Learning ABAP is like riding a bicycle…you never forget it 😉 I have already joined both groups and will take a look at them as soon as I can… 🙂 Greetings, Blag.

In a second reading of your code, Blag, I would be interested in the time to do the select rather than the time to output the table, so if you could kindly put end = time.clock() and time_taken = end - start after ret = cur.fetchall() and cur.execute(query). That would be an interesting comparison.
Cheers, Nigel

Done 🙂 PostgreSQL –> 28.219, SAP HANA –> 17.787. The difference decreased, but…the Python connector for SAP HANA is still in beta…and while PostgreSQL only does a SELECT…SAP HANA is creating a TABLE TYPE, a STORED PROCEDURE, buffering the content of the CALCULATION VIEW in a temporary table and then doing a SELECT on that temp table…way more processes in less time 🙂 Greetings, Blag.

So the Python connector is at fault? Is it possible that 16 of those seconds are transporting the data from AWS to your computer? What if you had a Postgres server on AWS? Just trying to get a Pink Lady v Granny Smith comparison (two types of apples).

I wouldn't say "faulty"…but it hasn't been released for customers…I just use it because I like Python 🙂 Regarding PostgreSQL on AWS…well…I already have 3 servers…SAP HANA, R and SMP…adding one more will only increase my monthly bill 😉 I like to believe that I have planted the seed…I will leave it to others to run more comparisons and show the results 🙂 Greetings, Blag.

Of course I'm not suggesting you should, just thinking through what you have done in this very thought-provoking blog. Thanks, Nigel

Awesome – thanks for updating that, and I think you need to do more of these. Others need to have a go too. Cheers, Nigel

I dare you! After fixing the error in your Python scripts, I loaded the data and ran the same queries in Sybase IQ (v.15.4). This took 6.3 seconds without any tuning. With elementary tuning, I can run it in 5.6 seconds. Of this, 1.7 seconds is taken by sending the results to the client, so the actual query processing time is only 3.9 seconds.
With more tuning, this can certainly be improved. This is on a pretty old 2-core Linux machine. The thing is that everyone here always seems to think that the magic words "in-memory" are what it is all about. But it's not: IQ will keep this dataset in memory as well; the smart query processing algorithms are what counts here. Sybase IQ has been on the market since the mid-1990s and is a proven product.

Alan: Thanks for your comment…but…could you be so kind as to point out the errors in my Python scripts? Clearly, I hadn't spotted them 🙂 Did you use the same 1 million records for each table? The ones generated by my scripts? BTW…as I said already, I didn't want to use any tuning at all… Sure, my HANA numbers are slow…but as I always say…I'm not an SAP HANA expert, so I'm sure a lot of people can do it better than me 😉 I don't have IQ installed, so could you share an image with the processing time?

On second thought…I moved the end = time.clock() and time_taken = end - start just after the cur.execute(query), because the ret = cur.fetchall() is on the Python side; the SAP HANA selection has already finished by then…the new processing time is 1.180 seconds…will update the blog with the new image 😉 Maybe I was having network delays caused by my Internet provider… Greetings, Blag.

Yes, I loaded 1m rows as generated by your scripts into both tables. The error comes out because this function is declared with two parameters but called with only one: def Generate_File(pSchema, pNumber) … Generate_File(num_files). I'm not sure what you mean by making an IQ image available; I'm running on a local system here. You can download the free IQ Express edition if you want, it's free.

I see…I fixed the parameters, but I guess I copied the code from an older source… What I mean by the IQ image…is just a picture of the processing time 🙂 But anyway…I don't want to keep up any debate here…this will most likely be my last comment on this blog…please read this… Greetings, Blag.
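The measurement detail being negotiated in these comments (stopping the clock after cur.execute() rather than after cur.fetchall()) can be sketched as follows; sqlite3 stands in for the actual connectors, and time.perf_counter() replaces time.clock():

```python
import sqlite3
import time

def timed_query(conn, query):
    """Return (rows, engine_time, total_time).

    engine_time stops right after execute(); total_time also includes
    fetching the rows on the client side. Note that some connectors
    (sqlite3 included) stream results lazily, so execute() may not
    cover all of the engine's work in every database.
    """
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(query)                    # work attributed to the engine
    engine_time = time.perf_counter() - start
    rows = cur.fetchall()                 # client-side transfer of the result
    total_time = time.perf_counter() - start
    return rows, engine_time, total_time
```

With this split, the server-side time and the "shipping rows to the client" time reported in the comments can be quoted separately instead of being lumped together.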
Here is one that took 3.6 seconds. This can definitely be improved, but I'm not going to spend more time on this now.

For some reason the image didn't get published. It is in the comment below. Click on the image to get a sharp view.

Hi Alvaro, please could you share the latest comparison between MongoDB and SAP HANA? I have gone through: SAP Hana DB vs. MongoDB comparison | vsChart.com. Thanks much, Abhishek

Abhishek: I have never used MongoDB…so I have never done any comparison between it and SAP HANA 🙂 So…no comments on my side… Greetings, Blag. Development Culture.

Hi Abhishek, what about doing it on your own and publishing the results here? Would be great. 😀

That's the spirit, Uwe 😉 Greetings, Blag. Development Culture.
test_tmdb 0.0.2-alpha.2

Change Log

All notable changes to this project will be documented in this file. This project adheres to Semantic Versioning.

Unreleased

0.0.2-alpha.1 - 2015-12-18
Fixed
- CHANGELOG links.
Changed
- Bump version number. Again. Because I screwed up earlier.

0.0.1-alpha.3 - 2015-12-18
Changed
- Version format.

0.0.1-alpha2 - 2015-12-18
Fixed
- README formatting.

0.0.1-alpha1 - 2015-12-18
Added
- All API methods implemented but untested. Highly unstable.

Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

   dependencies:
     test_tmdb: ^0.0.2-alpha.2

2. Install it. You can install packages from the command line with pub:

   $ pub get

   Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

3. Import it. Now in your Dart code, you can use:

   import 'package:test_tmdb/core.dart';
   import 'package:test_tmdb/html.dart';
   import 'package:test_tmdb/io.dart';

We analyzed this package on Nov 8, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.6.0
- pana: 0.12.21

Platforms

Detected platforms: web. Platform components identified in package: html.

Health suggestions

Fix lib/src/groups/account.dart. (-1.99 points)
Analysis of lib/src/groups/account.dart reported 4 hints:
- line 165 col 57: Use = to separate a named parameter from its default value.
- line 178 col 54: Use = to separate a named parameter from its default value.
- line 191 col 59: Use = to separate a named parameter from its default value.
- line 204 col 56: Use = to separate a named parameter from its default value.

Fix lib/core.dart. (-1.49 points)
Analysis of lib/core.dart reported 3 hints:
- line 75 col 21: Use = to separate a named parameter from its default value.
- line 75 col 56: Use = to separate a named parameter from its default value.
- line 164 col 40: Use = to separate a named parameter from its default value.

Fix lib/src/groups/authentication.dart. (-1 points)
Analysis of lib/src/groups/authentication.dart reported 2 hints:
- line 19 col 3: Avoid return types on setters.
- line 20 col 3: Avoid return types on setters.

Fix additional 5 files with analysis or formatting issues. (-2 points)
Additional issues in the following files:
- lib/src/params.dart (2 hints)
- lib/html.dart (1 hint)
- lib/io.dart (1 hint)
- lib/src/groups/genres.dart (Run dartfmt to format lib/src/groups/genres.dart.)
- lib/src/groups/lists.dart (Run dartfmt to format lib/src/groups/lists.dart.)

Maintenance issues and suggestions

Support latest dependencies. (-10 points)
The version constraint in pubspec.yaml does not support the latest published versions for 1 dependency (http).

Package is pre-v0.1 release. (-10 points)
While nothing is inherently wrong with versions of 0.0.*, it might mean that the author is still experimenting with the general direction of the API.

Package is pre-release. (-5 points)
Pre-release versions should be used with caution; their API can change in breaking ways.

Maintain an example.
None of the files in the package's example/ directory matches known example patterns. Common filename patterns include main.dart, example.dart, and test_tmdb.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
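For reference, the two kinds of analyzer hints listed above correspond to small syntactic fixes. The function and setter below are illustrative, not taken from the package's source:

```dart
// "Use = to separate a named parameter from its default value":
// the old Dart 1 colon syntax is flagged; `=` is preferred.
void search({int page: 1}) {}        // flagged by the analyzer
void searchFixed({int page = 1}) {}  // fixed

// "Avoid return types on setters": a setter implicitly returns void,
// so declaring the return type is redundant.
class Session {
  String _id = '';
  void set legacyId(String value) { _id = value; } // flagged
  set id(String value) { _id = value; }            // fixed
}
```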
Code Inspection: Iteration variable can be declared with a more specific type

Consider the following class hierarchy:

public class Person
{
    public string Name { get; set; }
}

public class Child : Person
{
}

If we wanted to write a method that would print all of the children's names, we could define the following:

public void Print(IEnumerable<Child> children)
{
    foreach (Person p in children)
        Console.WriteLine(p.Name);
}

However, why should our iteration variable be Person? In fact, we could easily change it to Child and still get the same result. Please note that there is a case when a more general type cannot be replaced with a derived one without changing the way the code behaves. This case can occur if your iteration variable is declared to be of type dynamic.

Last modified: 18 April 2018
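Applying the inspection's suggestion (and reusing the Person and Child classes above), the same method with a more specific iteration variable looks like this, with behavior unchanged:

```csharp
using System;
using System.Collections.Generic;

public class Printer
{
    public void Print(IEnumerable<Child> children)
    {
        // The iteration variable is now the most specific type available.
        foreach (Child c in children)
            Console.WriteLine(c.Name);
    }
}
```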
On 27/08/19 15:27 +0200, Ulrich Windl wrote:
> Systemd thinks it's the boss, doing what it wants: Today I noticed that all
> resources are run inside control group "pacemaker.service" like this:
> ├─pacemaker.service
> │ ├─ 26582 isredir-ML1: listening on 172.20.17.238/12503 (2/1)
> │ ├─ 26601 /usr/bin/perl -w /usr/sbin/ldirectord /etc/ldirectord/mail.conf start
> │ ├─ 26628 ldirectord tcp:172.20.17.238:25
> │ ├─ 28963 isredir-DS1: handling 172.20.16.33/10475 -- 172.20.17.200/389
> │ ├─ 40548 /usr/sbin/pacemakerd -f
> │ ├─ 40550 /usr/lib/pacemaker/cib
> │ ├─ 40551 /usr/lib/pacemaker/stonithd
> │ ├─ 40552 /usr/lib/pacemaker/lrmd
> │ ├─ 40553 /usr/lib/pacemaker/attrd
> │ ├─ 40554 /usr/lib/pacemaker/pengine
> │ ├─ 40555 /usr/lib/pacemaker/crmd
> │ ├─ 53948 isredir-DS2: handling 172.20.16.33/10570 -- 172.20.17.201/389
> │ ├─ 92472 isredir-DS1: listening on 172.20.17.204/12511 (13049/3)
> ...
>
> (that "isredir" stuff is my own resource that forks processes and creates
> threads on demand, thus modifying process (and thread) titles to help
> understand what's going on...)
>
> My resources are started via an OCF RA (shell script), not a systemd unit.
>
> Wouldn't it make much more sense if each resource ran in its
> own control group?

While a listing like the above may be confusing, the main problem perhaps is that the resource restrictions you specify in the pacemaker service file will be accounted against the mix of stack-native and stack-managed resources (except those of the systemd class), hence making systemd's containment features and supervision rather unusable, since there's no tight (as opposed to open-ended) black box to reason about. There have been some thoughts in the past that pacemaker could become the delegated controller of its own delegated cgroup subtrees, however.
There is a nice document detailing various possibilities, but it also looks pretty overwhelming at first glance: Naively, the i-like-continents integration option there looks most appealing to me at this point. If anyone has insights into cgroups, how they pair with systemd and how they could pair with pacemaker, please do speak up; it could be a great help in sketching the design in this area.

> I mean: If systemd thinks everything MUST run in some control group,
> why not pick the "correct" one? Having the pacemaker infrastructure
> in the same control group as all the resources seems to be a bad
> idea IMHO.

No doubt it is suboptimal.

> The other "discussable feature" is "high PIDs" like "92472". While port
> numbers are still 16 bit (in IPv4 at least), I see little sense in having
> millions of processes or threads.

I've seen you questioning this on the systemd ML, but I wouldn't think of any kind of inconvenience in that regard, modulo pre-existing real bugs. It actually slightly helps to unbreak designs that lack firm guarantees because they are based on PID liveness (the risk of process ID recycling is still better than downright crazy "process grep'ing", which is totally unsuitable when chroots, PID namespaces or containers rooted on that very host get into the picture, but not much better otherwise[1]!).

[1]

--
Jan (Poki)
Introduction to Form Regions In Outlook 2007, developers have the ability to extend the Outlook UI by creating a special kind of Outlook extension called an Outlook form region. Form regions are used primarily to customize Inspector windows, which we introduced in Chapter 10, “Working with Outlook Events.” Inspector windows are the Outlook windows that appear when you double-click an Outlook item—a mail item in your inbox or a task in a task list, for example. With form regions you can do things like add pages to the Inspector window, replace all the existing pages in an Inspector window with your own page, or dock some custom UI onto an existing page. You can also use a certain type of Outlook form region (an Adjoining form region) to customize the reading pane in Outlook Explorer windows. Creating a New Form Region To begin our exploration of Outlook form regions, let’s create a simple one by using Visual Studio 2008. Start by creating a new Outlook add-in project by choosing File > New > Project. In the New Project dialog box that appears, create a new Outlook 2007 add-in, as shown in Figure 16-1. Figure 16-1 Creating a new Outlook 2007 add-in. Now, in your new add-in project, choose Project > Add New Item. Click the Office category to filter to show just the Office-specific items. In the list of Office items, click Outlook Form Region, as shown in Figure 16-2. Name the form region—just use the default name FormRegion1. Then click the Add button. Figure 16-2 Adding an Outlook form region to an Outlook 2007 add-in project. A wizard appears, as shown in Figure 16-3. The first step in the wizard is to decide whether you want to create an Outlook form region or import a form region that was previously designed in Outlook with Outlook’s built-in form designer. For this introduction, click Design a New Form Region. This option lets you use Windows Forms and keeps our editing experience within Visual Studio. 
Later in the chapter we show you how to use the Outlook built-in form designer, as well as discuss when you might want to use Outlook’s form designer instead of Windows Forms. Figure 16-3 Selecting the form technology to use to create the form region. After you decide whether to design a new form region with Windows Forms or to import an existing Outlook form region designed in Outlook, click the Next button to move to the second page of the wizard, shown in Figure 16-4, which allows you to pick the type of form region you want to create. Figure 16-4 Selecting the type of form region to create: Separate. To understand the types of form regions that are available in Figure 16-4, we must take a step back and discuss Inspector windows in some additional detail. Form regions are used primarily in Outlook Inspector windows. An Outlook Inspector window can have multiple pages associated with it, and Ribbon buttons are used to switch between the pages associated with a particular Inspector window. Consider the Inspector window that appears when you double-click an Outlook task, as shown in Figure 16-5. Figure 16-5 A task Inspector window with the Task page selected. Figure 16-5 has two Ribbon buttons in the Show group: Task and Details. In Figure 16-5 the Task button is selected and the Task page is displayed. The Task page is the default page for the Task Inspector window and is displayed first whenever a task is opened. If you click the Details button, the view changes to the Details page, as shown in Figure 16-6. Figure 16-6 A Task Inspector window with the Details page selected. With this background, you’re ready to go back to Figure 16-4 and make sense of the options. A Separate form region adds a new page (and a new Ribbon button to activate that page) to an Inspector window. So you could add a new page to the Task Inspector window to show something like subtasks that must be completed to finish the main task. 
In Figure 16-4 the wizard also displays a nice graphic to help you remember what a Separate form region is. In this case the graphic emphasizes that you get a new Ribbon button to display the new page, and you have complete control of the new page that is shown. Figure 16-7 shows what the wizard displays when you select Replacement instead of Separate as the type of form region. A Replacement form region allows you to replace the default page of the Inspector window. So in the task example, you could replace the Task page (the default page for a Task Inspector window), but the Details page would still be available. Figure 16-7 Selecting the type of form region to create: Replacement. Figure 16-8 shows what the wizard displays when you select Replace-All as the type of form region. A Replace-All form region allows you to replace all available pages and make your page available only in the Inspector window. So in the task example, you could replace both the Task page and the Details page; your page would be the only page displayed in the Inspector window. Figure 16-8 Selecting the type of form region to create: Replace-All. When you think about Replacement and Replace-All form region types, you realize that replacing the default pages for an Outlook item type is a pretty powerful capability—actually too powerful, in a way, because you could change the default page for an Outlook item type, such as a task, and implement a new default page that prevents the user from editing key data associated with that task. You may forget to provide a way to set the priority of a task in your Replacement or Replace-All form region, for example. Indeed, the creators of Outlook didn’t want to give you quite that much power, enough to possibly break key functionality of Outlook. To jump ahead a little, select Replacement or Replace-All as the form region type and then skip two steps ahead in the wizard by clicking the Next button twice. 
You see the wizard page shown in Figure 16-9, where you determine which Outlook message classes you want this form region to be associated with. When you select Replacement or Replace-All, notice that all the standard message classes (Appointment, Contact, Task, and so on) are grayed out in this dialog box. Outlook won’t let you replace the default page or replace all the pages for standard message classes because you may break key features of Outlook. To use Replacement and Replace-All form region types, you must define a custom message class. A custom message class can reuse all the existing functionality of a built-in message class such as Appointment, Contact, or Task and acts as a specialized version of those built-in Outlook item objects. We discuss working with custom message classes in more detail later in this chapter, in the section “Form Region Types and Custom Message Classes,” because you must understand that concept to use Replacement and Replace-All form region types. Figure 16-9 Replacement and Replace-All form regions can be associated only with custom message classes. Moving back to the page in the wizard where you pick the form region type, consider the final form region type: Adjoining, shown in Figure 16-10. An Adjoining form region is appended to the bottom of the default page for an Inspector. Multiple adjoining form regions can be associated with the same message class, so potentially you can have several Adjoining form regions displayed in one Inspector window’s default page. Adjoining form regions have headers that allow them to be collapsed and expanded to make more room in the default page when needed. Figure 16-10 Selecting the type of form region to create: Adjoining. Another interesting application of an Adjoining form region is in an Explorer window. Specifically, an Adjoining form region can be used in the reading pane that is displayed in an Explorer window. 
In much the same way that they are used in the default page of an Inspector window, multiple Adjoining form regions can be associated with an Outlook message class and can be displayed in the reading pane. Form regions displayed in the reading pane can also be collapsed to their headers. Replacement and Replace-All form regions can be used in the reading pane as well, although in this case they replace what is shown in the reading page and can be used only for custom message classes. Now that you’re familiar with all the form region types, select Adjoining as the form region type and click the Next button to move to the next page of the wizard, shown in Figure 16-11. In this dialog box, you set the name for the form region that will be displayed in the UI, so pick a friendly name. Title and Description are grayed out because you’re creating an Adjoining form region; those options are enabled only for Replacement and Replace-All form region types. Figure 16-11 Setting descriptive text and display preferences. This page of the wizard also has three check boxes that specify when the form region is displayed. The first check box sets whether the form region is displayed for an Inspector window that is in compose mode. An Inspector window is in compose mode when you create a new instance of the Outlook item associated with it—when you create a new task, for example. The second check box sets whether the form region is displayed for an Inspector window that is in read mode. An Inspector window is in read mode when you open an existing item—a mail message, for example. Finally, the third check box sets whether to display the form region in reading-pane view. For this example, keep all the boxes checked and click the Next button to pick which Outlook message classes to associate the form region with, as shown in Figure 16-12. For this example, select Task. Note that you can associate the same form region with multiple built-in Outlook message classes. 
You could have a form region that displays for both Tasks and Mail messages, for example. You can also associate a form region with custom message classes, which we discuss later in this chapter. As we describe earlier in this section, Replacement and Replace-All form region types can be associated only with custom message classes.

Figure 16-12 Picking which message classes will display a form region.

Associate the form region with the built-in Task type, and click the Finish button to exit the wizard. Visual Studio creates a new project item called FormRegion1.cs, as shown in Figure 16-13. It displays a visual designer in which you can drag and drop Windows Forms controls from the toolbox to construct the form region. This visual designer is much like the one you use to design user controls and task panes.

Figure 16-13 The newly created form region project item in visual design view.

Customizing a Form Region

Your goal is to add a form region in which subtasks can be associated with a task. First, drag and drop a list box control and buttons to create a new task and delete an existing task. Because the user can resize the form region, use the Anchor property of the controls to anchor the list box to the top, left, bottom, and right, and anchor the buttons to the bottom and left. Figure 16-14 shows the final form region.

Figure 16-14 A simple form region.

Before you go any further, run the add-in project and see what happens. Press F5 to build and run Outlook with the add-in project loaded. If you click a task in a task list and show reading view (by choosing View > Reading Pane > Bottom), you see that the Adjoining form region is displayed docked at the bottom of reading-pane view for a task, as shown in Figure 16-15. If you double-click a task, the Adjoining form region is docked at the bottom of the default page for the Inspector window, as shown in Figure 16-16.
After you’ve run your project, if you want to remove the form region and add-in from Outlook, choose Build > Clean.

Figure 16-15 An Adjoining form region in the reading pane.

Figure 16-16 An Adjoining form region in the default page of an Inspector window.

Let’s examine the Adjoining form region a little more. First, notice that the Name you specified in Figure 16-11 is displayed as the caption above the Adjoining form region. To the left of the form region caption is a −/+ button that expands and collapses the form region. In Figure 16-17 you see what an Adjoining form region looks like when it is collapsed. Remember that several Adjoining form regions could be displayed in one Inspector window or reading pane; the ability to expand and collapse them is important, because it allows the end user to manage screen real estate.

Figure 16-17 A collapsed Adjoining form region.

Also, notice that when you resize the reading pane or the Inspector window, the form region has a default height. When the user adjusts the size of the form region, Outlook remembers the height and uses that height the next time the reading view is displayed. If you size the window small enough that the default height of the form region can’t be displayed, a vertical scroll bar appears, as shown in Figure 16-18. This minimum height represents the height you set when you designed the form region. To have a smaller or larger minimum height, simply adjust the height of the visual design surface for the form region inside Visual Studio.

Figure 16-18 The effect of default height on the form region’s vertical scroll bar.

Now exit Outlook and go back to the add-in project to put some code behind the form region. Right-click FormRegion1.cs in the Solution Explorer, and choose View Code from the context menu. The default code for a form region is shown in Listing 16-1. There are three event handlers of interest in our class FormRegion1. The first is actually in a nested class called FormRegion1Factory.
This nested class provides a method called FormRegion1Factory_FormRegionInitializing where you can write code to decide whether to show the form region for a given Outlook item. The FormRegionInitializing event handler is passed a parameter e of type FormRegionInitializingEventArgs that can be used to get the Outlook item that the form region is about to be shown for (e.OutlookItem) and to cancel the showing of the form region if necessary by setting e.Cancel to true. Don’t hold a reference to the Outlook item (e.OutlookItem) that is about to be shown; it is provided for use only during the event handler.

The form region class itself (FormRegion1) has a FormRegionShowing event handler that is invoked before the form region is displayed (but too late to prevent the display of the form region altogether; that is what FormRegionInitializing is for). In the FormRegionShowing event handler, you can write code to initialize your form region. In this event handler, you can use the property this.OutlookItem to access the Outlook item associated with the form region. When the form region is closed, the FormRegionClosed event handler is invoked. This event handler is a good place to save any changes made to the Outlook item by your form region and to do any final cleanup.

Listing 16-1. The Default Code in a New Windows Forms-Based Form Region

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Office = Microsoft.Office.Core;
using Outlook = Microsoft.Office.Interop.Outlook;

namespace OutlookAddIn1
{
  partial class FormRegion1
  {
    #region Form Region Factory

    [Microsoft.Office.Tools.Outlook.FormRegionMessageClass(
      Microsoft.Office.Tools.Outlook.FormRegionMessageClassAttribute.Task)]
    [Microsoft.Office.Tools.Outlook.FormRegionName("OutlookAddIn1.FormRegion1")]
    public partial class FormRegion1Factory
    {
      // Occurs before the form region is initialized.
      // To prevent the form region from appearing, set e.Cancel to true.
      // Use e.OutlookItem to get a reference to the current Outlook item.
      private void FormRegion1Factory_FormRegionInitializing(object sender,
        Microsoft.Office.Tools.Outlook.FormRegionInitializingEventArgs e)
      {
      }
    }

    #endregion

    // Occurs before the form region is displayed.
    // Use this.OutlookItem to get a reference to the current Outlook item.
    // Use this.OutlookFormRegion to get a reference to the form region.
    private void FormRegion1_FormRegionShowing(object sender,
      System.EventArgs e)
    {
    }

    // Occurs when the form region is closed.
    // Use this.OutlookItem to get a reference to the current Outlook item.
    // Use this.OutlookFormRegion to get a reference to the form region.
    private void FormRegion1_FormRegionClosed(object sender,
      System.EventArgs e)
    {
    }
  }
}

Listing 16-2 shows a simple implementation for the subtasks form region.
You don’t need to write any code in FormRegionInitializing because you always want to display your form region. In FormRegionShowing, write some code to get a custom UserProperty object from the Outlook item with which the form region is associated. The custom UserProperty we will associate with the Outlook item will have the identifier "SubTasks". You’ll use this custom UserProperty to store the subtasks that are edited by the form region. If the UserProperty isn’t associated with the Outlook item yet, create the UserProperty for the Outlook item in FormRegionShowing. The "SubTasks" user property contains a string value that holds subtasks delimited by new lines. You parse any subtasks that are in the string and populate the list box for the form region with the subtasks.

In FormRegionClosed, you do the reverse: Grab all the entries out of the list box and concatenate them into a string in which subtasks are separated by new lines. If the subtasks have been changed, set the "SubTasks" UserProperty’s value to the new string and save the associated Outlook item.

Finally, a simple implementation for the Add button just adds the current time as a new subtask; a complete implementation would include a dialog box with an edit box in which the user could type a subtask description. The Delete button deletes the selected list item.

Listing 16-2.
Form Region Code for a Simple Subtasks Form Region Based on Windows Forms

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Office = Microsoft.Office.Core;
using Outlook = Microsoft.Office.Interop.Outlook;

namespace OutlookAddIn1
{
  partial class FormRegion1
  {
    Outlook.TaskItem task;
    Outlook.UserProperty subTasks;

    // Occurs before the form region is displayed.
    // Use this.OutlookItem to get a reference to the current Outlook item.
    // Use this.OutlookFormRegion to get a reference to the form region.
    private void FormRegion1_FormRegionShowing(object sender,
      System.EventArgs e)
    {
      task = this.OutlookItem as Outlook.TaskItem;
      if (task != null)
      {
        // Check for the custom property SubTasks
        subTasks = task.UserProperties.Find("SubTasks", true);
        if (subTasks == null)
        {
          subTasks = task.UserProperties.Add("SubTasks",
            Outlook.OlUserPropertyType.olText, false,
            Outlook.OlUserPropertyType.olText);
        }

        // Convert the stored string into list box entries
        string subTasksString = subTasks.Value.ToString();
        if (!String.IsNullOrEmpty(subTasksString))
        {
          string[] delimiters = new string[1];
          delimiters[0] = System.Environment.NewLine;
          string[] tasks = subTasksString.Split(delimiters,
            StringSplitOptions.RemoveEmptyEntries);
          for (int i = 0; i < tasks.Length; i++)
          {
            listBoxSubTasks.Items.Add(tasks[i]);
          }
        }
      }
    }

    // Occurs when the form region is closed.
    // Use this.OutlookItem to get a reference to the current Outlook item.
    // Use this.OutlookFormRegion to get a reference to the form region.
    private void FormRegion1_FormRegionClosed(object sender,
      System.EventArgs e)
    {
      if (subTasks == null || task == null)
        return;

      string oldTasks = subTasks.Value.ToString();
      StringBuilder builder = new StringBuilder();
      foreach (object o in listBoxSubTasks.Items)
      {
        string t = o as string;
        if (!String.IsNullOrEmpty(t))
        {
          builder.AppendLine(t);
        }
      }
      string newTasks = builder.ToString();
      if (!String.IsNullOrEmpty(newTasks) && !String.IsNullOrEmpty(oldTasks))
      {
        if (newTasks.CompareTo(oldTasks) == 0)
          return; // no changes
      }
      subTasks.Value = newTasks;
      task.Save();
    }

    private void buttonNew_Click(object sender, EventArgs e)
    {
      // Just add the current time as a subtask for simplicity
      listBoxSubTasks.Items.Add(
        System.DateTime.Now.ToShortTimeString());
    }

    private void buttonDelete_Click(object sender, EventArgs e)
    {
      if (listBoxSubTasks.SelectedItem != null)
      {
        listBoxSubTasks.Items.RemoveAt(
          listBoxSubTasks.SelectedIndex);
      }
    }
  }
}

When you run the form region, it displays as before, but now the Add and Delete buttons work, and you can add subtasks (set to the current time) to the current task.
Descriptor

Data instances in Orange can contain several types of variables: discrete, continuous, strings, and Python (and types derived from it). The latter represent arbitrary Python objects. The names, types, values (where applicable), functions for computing the variable value from values of other variables, and other properties of the variables are stored in descriptor classes derived from Descriptor.

Orange considers two variables (e.g. in two different data tables) the same if they have the same descriptor. It is allowed - but not recommended - to have different descriptors with the same name.

Descriptors can be constructed either by calling the corresponding constructors or by a factory function make(), which either retrieves an existing descriptor or constructs a new one.

class Orange.feature.Descriptor

    An abstract base class for variable descriptors.

    get_value_from
        A function (an instance of Classifier) that computes a value of the variable from values of one or more other variables. This is used, for instance, in discretization, which computes the value of a discretized variable from the original continuous variable.

    ordered
        A flag telling whether the values of a discrete variable are ordered. At the moment, no built-in method treats ordinal variables differently than nominal ones.

    random_generator
        A local random number generator used by the method randomvalue().

    default_meta_id
        A proposed (but not guaranteed) meta id to be used for that variable. For instance, when a tab-delimited file contains meta attributes and the existing variables are reused, they will have this id (instead of a new one assigned by Orange.feature.Descriptor.new_meta_id()).

    attributes
        A dictionary which allows the user to store additional information about the variable. All values should be strings. See the section about storing additional information.

Discrete variables

class Orange.feature.Discrete

    Bases: Descriptor

    Descriptor for discrete variables.
    values
        A list with symbolic names for the variable's values. Values are stored as indices referring to this list, and modifying it instantly changes the (symbolic) names of values as they are printed out or referred to by the user.

        Note: The size of the list is also used to indicate the number of possible values for this variable. Changing the size - especially shrinking the list - can crash Python. Also, do not add values to the list by calling its append or extend method: use the add_value method instead. It is also assumed that this attribute is always defined (but can be empty), so never set it to None.

    base_value
        Stores the base value for the variable as an index into values. This can be, for instance, a "normal" value, such as "no complications" as opposed to abnormal "low blood pressure". The base value is used by certain statistics, continuization and, potentially, learning algorithms. The default is -1, which means that there is no base value.

Continuous variables

class Orange.feature.Continuous

    Bases: Descriptor

    Descriptor for continuous variables.

    number_of_decimals
        The number of decimals used when the value is printed out, converted to a string or saved to a file.

    scientific_format
        If True, the value is printed in scientific format whenever it would have more than 5 digits. In this case, number_of_decimals is ignored.

    adjust_decimals
        Tells Orange to monitor the number of decimals when the value is converted from a string (when the values are read from a file or converted by, e.g. inst[0]="3.14"):

        - 0: the number of decimals is not adjusted automatically;
        - 1: the number of decimals is (and has already been) adjusted;
        - 2: automatic adjustment is enabled, but no values have been converted yet.

        By default, adjustment of the number of decimals goes as follows:

        - If the variable was constructed when data was read from a file, it will be printed with the same number of decimals as the largest number of decimals encountered in the file.
          If scientific notation occurs in the file, scientific_format will be set to True and scientific format will be used for values too large or too small.

        - If the variable is created in a script, it will have, by default, three decimal places. This can be changed either by setting the value from a string (e.g. inst[0]="3.14", but not inst[0]=3.14) or by manually setting the number_of_decimals.

    start_value, end_value, step_value
        The range used for randomvalue().

String variables

class Orange.feature.String

    Bases: Descriptor

    Descriptor for variables that contain strings. No method can use them for learning; some will raise errors or warnings, and others will silently ignore them. They can, however, be used as meta-attributes; if instances in a dataset have unique IDs, the most efficient way to store them is to read them as meta-attributes. In general, never use discrete attributes with many (say, more than 50) values. Such attributes are probably not of any use for learning and should be stored as string attributes.

    When converting strings into values and back, empty strings are treated differently than usual. For other types, an empty string denotes undefined values, while String will take empty strings as empty strings - except when loading or saving into a file. Empty strings in files are interpreted as undefined; to specify an empty string, enclose the string in double quotes; these are removed when the string is loaded.

Python objects as variables

class Orange.feature.Python

    Bases: Descriptor

    Base class for descriptors defined in Python. It is fully functional and can be used as a descriptor for attributes that contain arbitrary Python values. Since this is an advanced topic, Python variables are described on a separate page.

Storing additional attributes

All variables have a field attributes, a dictionary that can store additional string data.
import Orange

titanic = Orange.data.Table("titanic.tab")
var = titanic.domain[0]
print var
print "Attributes", var.attributes
var.attributes["a"] = "12"
print "Set a=12"
print "Attributes", var.attributes

These attributes can only be saved to a .tab file. They are listed in the third line in <name>=<value> format, after other attribute specifications (such as "meta" or "class"), and are separated by spaces.

Reuse of descriptors

There are situations when variable descriptors need to be reused. Typically, the user loads some training examples, trains a classifier, and then loads a separate test set. For the classifier to recognize the variables in the second data set, the descriptors, not just the names, need to be the same.

When constructing new descriptors for data read from a file or during unpickling, Orange checks whether an appropriate descriptor (with the same name and, in case of discrete variables, also values) already exists and reuses it. When new descriptors are constructed by explicitly calling the above constructors, this always creates new descriptors and thus new variables, although a variable with the same name may already exist.

The search for an existing variable is based on four attributes: the variable's name, type, ordered values, and unordered values. As for the latter two, the values can be explicitly ordered by the user, e.g. in the second line of the tab-delimited file. For instance, sizes can be ordered as small, medium, or big.

The search for existing variables can end with one of the following statuses.

Descriptor.MakeStatus.Incompatible (3)

    There are variables with matching name and type, but their values are incompatible with the prescribed ordered values. For example, if the existing variable already has values ["a", "b"] and the new one wants ["b", "a"], the old variable cannot be reused. The existing list can, however, be appended with the new values, so searching for ["a", "b", "c"] would succeed.
    Likewise a search for ["a"] would be successful, since the extra existing value does not matter. The formal rule is thus that the values are compatible iff existing_values[:len(ordered_values)] == ordered_values[:len(existing_values)].

Descriptor.MakeStatus.NoRecognizedValues (2)

    There is a matching variable, yet it has none of the values that the new variable will have (this is obviously possible only if the new variable has no prescribed ordered values). For instance, we search for a variable "sex" with values "male" and "female", while there is a variable of the same name with values "M" and "F" (or, well, "no" and "yes" :). Reuse of this variable is possible, though this should probably be a new variable since it obviously comes from a different data set. If we do decide to reuse the variable, the old variable will get some unneeded new values and the new one will inherit some from the old.

Descriptor.MakeStatus.MissingValues (1)

    There is a matching variable with some of the values that the new one requires, but some values are missing. This situation is neither uncommon nor suspicious: in case of separate training and testing data sets there may be values which occur in one set but not in the other.

Descriptor.MakeStatus.OK (0)

    There is a perfect match which contains all the prescribed values in the correct order. The existing variable may have some extra values, though.

Continuous variables can obviously have only two statuses, NotFound or OK.

When loading the data using Orange.data.Table, Orange takes the safest approach and, by default, reuses everything that is compatible up to and including NoRecognizedValues. Unintended reuse would be obvious from the variable having too many values, which the user can notice and fix. More on that in the page on Loading and saving data.

There are two functions for reusing the variables instead of creating new ones.
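Before turning to those functions, note that the value-compatibility rule stated above is compact enough to sketch in plain Python. The helper below is hypothetical (it is not part of the Orange API) and only illustrates the prefix comparison Orange performs:

```python
def values_compatible(existing_values, ordered_values):
    """Return True if an existing variable's value list can be reused.

    Two lists are compatible when one is a prefix of the other, i.e.
    existing_values[:len(ordered_values)] ==
    ordered_values[:len(existing_values)].
    """
    return (existing_values[:len(ordered_values)]
            == ordered_values[:len(existing_values)])

# ["a", "b"] can grow into ["a", "b", "c"] ...
print(values_compatible(["a", "b"], ["a", "b", "c"]))  # True
# ... but the order ["b", "a"] clashes with the existing ["a", "b"].
print(values_compatible(["a", "b"], ["b", "a"]))       # False
```

Slicing past the end of a list simply truncates, so both sides of the comparison reduce to the two lists' common prefix.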
Descriptor.make(name, type, ordered_values, unordered_values[, create_new_on])

    Find and return an existing variable or create a new one if none of the existing variables matches the given name, type and values.

    The optional create_new_on specifies the status at which a new variable is created. The status must be at most Incompatible since incompatible (or non-existing) variables cannot be reused. If it is set lower, for instance to MissingValues, a new variable is created even if there exists a variable which is only missing the same values. If set to OK, the function always creates a new variable.

    The function returns a tuple containing a variable descriptor and the status of the best matching variable. So, if create_new_on is set to MissingValues, and there exists a variable whose status is, say, NoRecognizedValues, a variable would be created, while the second element of the tuple would contain NoRecognizedValues. If, on the other hand, there exists a variable which is perfectly OK, its descriptor is returned and the returned status is OK. The function returns no indicator whether the returned variable is reused or not. This can, however, be read from the status code: if it is smaller than the specified create_new_on, the variable is reused, otherwise a new descriptor has been constructed.

    The exception to the rule is when create_new_on is OK. In this case, the function does not search through the existing variables and cannot know the status, so the returned status in this case is always OK.

Descriptor.retrieve(name, type, ordered_values, unordered_values[, create_new_on])

    Find and return an existing variable, or None if no match is found.

The following examples give the shown results if executed only once (in a Python session) and in this order.

make() can be used for the construction of new variables.
>>> v1, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, ["a", "b"])
>>> print s, v1.values
NotFound <a, b>

A new variable was created and the status is NotFound.

>>> v2, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, ["a"], ["c"])
>>> print s, v2 is v1, v1.values
MissingValues True <a, b, c>

The status is MissingValues, yet the variable is reused (v2 is v1). v1 gets a new value, "c", which was given as an unordered value. It does not matter that the new variable does not need the value b.

>>> v3, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, ["a", "b", "c", "d"])
>>> print s, v3 is v1, v1.values
MissingValues True <a, b, c, d>

This is like before, except that the new value, d, is not among the ordered values.

>>> v4, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, ["b"])
>>> print s, v4 is v1, v1.values, v4.values
Incompatible False <a, b, c, d> <b>

The new variable needs to have b as the first value, so it is incompatible with the existing variables. The status is Incompatible and a new variable is created; the two variables are not equal and have different lists of values.

>>> v5, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, None, ["c", "a"])
>>> print s, v5 is v1, v1.values, v5.values
OK True <a, b, c, d> <a, b, c, d>

The new variable has values c and a, but the order is not important, so the existing attribute is OK.

>>> v6, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, None, ["e"])
>>> print s, v6 is v1, v1.values, v6.values
NoRecognizedValues True <a, b, c, d, e> <a, b, c, d, e>

The new variable has different values than the existing variable (status is NoRecognizedValues), but the existing one is nonetheless reused. Note that we gave e in the list of unordered values. If it were among the ordered ones, the reuse would fail.
>>> v7, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, None, ["f"], Orange.feature.MakeStatus.NoRecognizedValues)
>>> print s, v7 is v1, v1.values, v7.values
Incompatible False <a, b, c, d, e> <f>

This is the same as before, except that we prohibited reuse when there are no recognized values. Hence a new variable is created, though the returned status is the same as before:

>>> v8, s = Orange.feature.Descriptor.make("a", Orange.feature.Type.Discrete, ["a", "b", "c", "d", "e"], None, Orange.feature.MakeStatus.OK)
>>> print s, v8 is v1, v1.values, v8.values
OK False <a, b, c, d, e> <a, b, c, d, e>

Finally, this is a perfect match, but any reuse is prohibited, so a new variable is created.

Variables computed from other variables

Values of variables are often computed from other variables, for instance in discretization. The mechanism described below usually functions behind the scenes, so understanding it is required only for implementing specific transformations.

Monk 1 is a well-known dataset with the target concept y := a==b or e==1. It can help the learning algorithm if the four-valued attribute e is replaced with a binary attribute having values "1" and "not 1". The new variable will be computed from the old one on the fly.

import Orange

def checkE(inst, return_what):
    if inst["e"] == "1":
        return e2("1")
    else:
        return e2("not 1")

monks = Orange.data.Table("monks-1")
e2 = Orange.feature.Discrete("e2", values=["not 1", "1"])
e2.get_value_from = checkE

The new variable is named e2; we define it with a descriptor of type Discrete, with the appropriate name and values "not 1" and "1" (we chose this order so that not 1's index is 0, which can be, if needed, interpreted as False). Finally, we tell e2 to use checkE to compute its value when needed, by assigning checkE to e2.get_value_from.

checkE is a function that is passed an instance and another argument we do not care about here. If the instance's e equals 1, the function returns value 1, otherwise it returns not 1.
Both are returned as values, not plain strings.

In most circumstances the value of e2 can be computed on the fly - we can pretend that the variable exists in the data, although it does not (but can be computed from it). For instance, we can compute the information gain of variable e2 or its distribution without actually constructing data containing the new variable.

print Orange.feature.scoring.InfoGain(e2, monks)
dist = Orange.statistics.distribution.Distribution(e2, monks)
print dist

There are methods which cannot compute values on the fly because it would be too complex or time consuming. In such cases, the data needs to be converted to a new Orange.data.Table:

new_domain = Orange.data.Domain([data.domain["a"], data.domain["b"], e2, data.domain.class_var])
new_data = Orange.data.Table(new_domain, data)

Automatic computation is useful when the data is split into training and testing examples. Training instances can be modified by adding, removing and transforming variables (in a typical setup, continuous variables are discretized prior to learning, therefore the original variables are replaced by new ones). Test instances, on the other hand, are left as they are. When they are classified, the classifier automatically converts the testing instances into the new domain, which includes recomputation of transformed variables.

# Split the data into training and testing sets
indices = Orange.data.sample.SubsetIndices2(monks, p0=0.7)
train_data = monks.select(indices, 0)
test_data = monks.select(indices, 1)

# Convert the training set to a new domain
new_domain = Orange.data.Domain([monks.domain["a"], monks.domain["b"], e2, monks.domain.class_var])
new_train = Orange.data.Table(new_domain, train_data)

# Construct a tree and classify unmodified instances
tree = Orange.classification.tree.TreeLearner(new_train)
for ex in test_data[:10]:
    print ex.getclass(), tree(ex)
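Stripped of the Orange machinery, get_value_from boils down to a callable attached to a variable that derives a value from an instance's other values. The stand-in below is a hypothetical sketch (the class and method names are ours, not Orange's) of that pattern:

```python
class Variable:
    """A bare-bones stand-in for a descriptor with get_value_from."""

    def __init__(self, name, get_value_from=None):
        self.name = name
        self.get_value_from = get_value_from

    def value(self, instance):
        # Return the stored value when present; otherwise compute it
        # on the fly from the other attributes of the instance.
        if self.name in instance:
            return instance[self.name]
        return self.get_value_from(instance)


e2 = Variable("e2",
              get_value_from=lambda inst: "1" if inst["e"] == "1" else "not 1")

print(e2.value({"e": "1"}))   # 1
print(e2.value({"e": "3"}))   # not 1
```

A classifier built on a domain containing such a variable can then transparently "see" e2 on instances that only store e, which is exactly what the domain conversion above relies on.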
Get as much done as possible, but don't panic if you don't finish.

At your terminal, start the Python interpreter by typing python3 as follows:

star[200] ~ # python3

Type in a few expressions and, if something goes wrong, try to get the gist of what caused the error.

Recall the structure of defining a function:

def <name>(<formal parameters>):
    return <expression>

At the Python prompt >>>, type the following:

>>> def cube(n):
...     return n * n * n
...
>>>

Be sure to indent the return statement correctly. Then, call the function cube with some numerical argument. For instance, what is 3 cubed?

Now we will use Emacs, our code editor. Create a new file sumofsquares.py and type in the program below. Then load it into the interpreter; recall that we can do this by saving the file, then running python3 -i sumofsquares.py.

Next, do the same with a file distance.py (load it using python3 -i distance.py), and test the function by calling distance at the Python prompt >>> with the appropriate arguments.

>>> distance(1, 1, 1, 2)
1.0
>>> distance(1, 3, 1, 1)
2.0
>>> distance(1, 2, 3, 4)
2.8284271247461903

Now, let us edit this program to get the distance between two 3-dimensional coordinates. Your distance function should now take six arguments and compute the following:

sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

Reload the program, and once again test that your function does the right thing!

>>> distance(1, 1, 1, 1, 2, 1)
1.0
>>> distance(2, 3, 5, 5, 8, 3)
6.164414002968976

For the next two exercises, you will be using Emacs. Create a file lab1.py and put the following two function definitions in this file. We will have you print this file at the end.

Before you get started, you should know a little bit about the if statement. The basic form looks like:

if test:
    do something here
elif other_test:
    do something else here
else:
    do something else (else) here

The elif is basically 'else if' in other languages, which means if the first 'if' fails, you can try another test. The tests need to be expressions that evaluate to boolean truth values.
So for example:

if 3 > 5:
    print("Nice try!")
elif 3 > 4:
    print("Try, try again!")
else:
    print("There we go!")

That bit of code would print "There we go!". You'll see a lot more of if in the next lab, so don't worry too much about it right now. Feel free to ask your TA if you need more help!

Define a procedure max(a, b) that takes in two numbers as arguments and returns the maximum of those two numbers, without using the built-in max function.

Define a procedure absolute(a) that takes in one number as an argument and returns the absolute value of that number, without using the built-in abs function.

Use the lpr command to print your lab1.py file. To do this, type the following command into your main xterm window (not Emacs or Python):

lpr lab1.py

This will print the lab1 file in the room opposite from 273 Soda. You can print other files by issuing a similar command, such as lpr my_file.txt. You have 200 free pages under your account. These pages will be handy in the coming weeks to turn in paper submissions for projects and homeworks.
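If you get stuck on the two lab1.py procedures, one possible shape for them, using the if/elif form described above, is sketched here (try writing your own version before peeking):

```python
def max(a, b):
    """Return the larger of two numbers without using the built-in max."""
    if a > b:
        return a
    else:
        return b

def absolute(a):
    """Return the absolute value of a number without using the built-in abs."""
    if a < 0:
        return -a
    else:
        return a

print(max(3, 7))      # 7
print(absolute(-4))   # 4
```

Note that defining your own max shadows Python's built-in function of the same name, which is exactly what the exercise asks for.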
Add Partner in activities automatically

Hello! When I create an activity in the calendar, I would like my user to be added automatically to partner_ids (the attendees). I tried to add it through the many2many field in the code, but it doesn't work. My code:

def _needaction_domain_get(self, cr, uid, context=None):
    return [('date', '<=', time.strftime(DEFAULT_SERVER_DATE_FORMAT + ' 23:59:59')),
            ('date_deadline', '>=', time.strftime(DEFAULT_SERVER_DATE_FORMAT + ' 23:59:59')),
            ('user_id', '=', uid),
            ('partner_ids', 'in', [(6, 0, [uid])])]

Thanks for your help!
All You Need to Know About JSON With Swift

JSON is easy for humans to read and write and easy for machines to parse and generate. It can work with Swift to reduce the amount of code tremendously.

JSON stands for JavaScript Object Notation. It's the de facto standard for server-client data communication. In this post, we will go over a few concepts about data and demo using JSON in an Xcode Playground. You are encouraged to try out and modify the examples.

A Word on Data and Information

Applications manipulate data and present it to the user as information. There's a subtle distinction involved:

- Data is a collection of symbols on which operations are performed, usually by computers. Data is quantitative; it takes up storage, such as 3 kilobytes. For example, a piece of data is "June 2, 2014", a date.
- Information is the specific meaning of data. For example, the date of "June 2, 2014" represents the Swift language release date.

As civilization's data storage needs and solutions progressed from clay tablets through paper to electronics over thousands of years, computer scientists improved on ways to store and handle massive amounts of data over the course of a few decades.

Common Formats

Data storage formats tend to be specialized for their own so-called problem domain. For each kind of data you have at least one way of efficiently representing it. The JPG format is well suited for images captured by cameras' sensors, whereas PNG is better suited for designed images. The same parallels could be drawn between the MP3 and OGG sound formats.

Databases are used for large quantities of data. They use:

- Tables, which store data.
- Schemas, which define the kind of information stored in a table.
- Queries, which are used to retrieve or manipulate data from tables.
- Views, which hold results of queries.
SQL is the most widely used database query language, and several providers offer implementations of it. According to ANSI (American National Standards Institute), it is the standard language for relational database management systems. We mention it here because, when working with data, we need to be aware of these notions and ask ourselves (and answer) some of the same questions as database developers.

Let's consider a blob of data in our human minds. We want to describe some facts about the Swift programming language:

- It was designed by Apple Inc.
- It first appeared on June 2, 2014.
- Its typing is static, strong, and inferred.
- It runs on Mac OS, Linux, FreeBSD.
- It uses the Apache License 2.0.
- Website: swift.org

We would like to be able to transmit or store this information in a way that we can write code that interacts with it.

Custom Format

You can invent and implement your own custom data format. Although the pride factor of inventing and implementing your own format would be very high, the usefulness of it would be less than limited: only the author's software would be able to read and write it. An example of a custom format representing the information about Swift:

I HAS A VAR ITZ Swift
I HAS A VAR ITZ Developer = Apple Inc.
BTW VAR Release::June 02, 2014
I HAS Typing -> static, strong, inferred
...
K THANKS BYE!

Apart from the strong influence from LOLCode, it looks like special handling would need to be implemented for this format. The values aren't even clearly separated in it. One would fare much better adhering to a standard format. Your job isn't to reinvent the wheel; it's to know what kinds of wheels exist and which works best for your current situation.

A Bit About Standards

Standards are conventions over how specific tasks should be done so multiple groups of people can create interoperable parts. Creating a component that can interface with other standard parts saves a lot of implementation time and opens doors for interoperability with other systems.
Take, for example, the USB standard. USB, short for Universal Serial Bus, is an industry standard initially developed in the mid-90s that defines the cables, connectors, and communications protocols used in a bus for connection, communication, and power supply between computers and electronic devices. Because USB is a technical standard, anybody can add a USB port to their product, anybody can manufacture ports or cables, and you can use any combination of them together. While some standards are about mechanical components, others are for aerospace engineering, computers, communication and data transfer protocols, food, chemicals, etc.

A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices. In contrast, a custom, convention, company product, or corporate standard that becomes generally accepted and dominant is often called a de facto standard (for example, power supplies and sockets).

Use of standards aids in the creation of products and services that are safe, reliable and of good quality. They help businesses increase productivity while minimizing errors and waste. By enabling products from different markets to be directly compared, they help companies enter new markets and assist in the development of global trade on a fair basis. The best-known standards organization is ISO, the International Organization for Standardization. ISO is an international standard-setting body composed of representatives from various national standards organizations. It is headquartered in Geneva, Switzerland, and as of 2015 works in 163 countries.

CSV

CSV is a text file of comma-separated values (or values separated by any other character; this character is known as the separator):

Name, Designer, Release Date, Typing, OS, License, Website
Swift, Apple Inc., June 2 2014, static; strong; inferred, Mac OS; Linux; FreeBSD, Apache License 2.0, swift.org

This looks a lot better: each value corresponds to a column.
The values are all separated by a comma (,) character. A ; is used to separate multiple values within the same column. We can't really say it's very legible, and the type of each value isn't encoded in it, but it can be worked with.

XML

XML used to be the de facto standard for server-client data communication. It's the X in AJAX. XML stands for eXtensible Markup Language. It was designed to store and transport data while being both human- and machine-readable. You may know of HTML, the format that represents web pages. The two look much the same; the main difference is that HTML has special tags for certain elements, while XML can use any possible tag. Here is how our Swift data would look in XML format:

<language>
  <name>Swift</name>
  <designer>Apple Inc</designer>
  <releaseDate dateformat="longStyle">June 2, 2014</releaseDate>
  <typing>
    <element>static</element>
    <element>strong</element>
    <element>inferred</element>
  </typing>
  <OS>
    <element>Mac OS</element>
    <element>Linux</element>
    <element>FreeBSD</element>
  </OS>
  <license>Apache License 2.0</license>
  <website>swift.org</website>
</language>

We had to use 397 characters to represent the data in XML format. We even have a date format encoded in the data, so one would have an easy time parsing it out. A thing to note about XML is that opening and closing tags shouldn't be interleaved. For example, <name>Swift<developer>Apple</name></developer> is malformed. Swift provides the NSXMLParser class that can be used to decode or encode XML. In case you find this very similar to a .plist file, you should know that Apple's property lists are another XML-based format.

JSON

JSON stands for JavaScript Object Notation. It's a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.
It is based on a subset of the JavaScript programming language:

{
  "name": "Swift",
  "developer": "Apple Inc.",
  "releaseDate": {
    "format": "longStyle",
    "date": "June 2, 2014"
  },
  "typing": ["static", "strong", "inferred"],
  "OS": ["Mac OS", "Linux", "FreeBSD"],
  "license": "Apache License 2.0",
  "website": "swift.org"
}

This is 264 characters long, quite a change from XML's 397. This is largely why JSON is nicknamed "low-fat XML": much of the redundancy is gone. In today's information-hungry world, the 133 extra characters would impact the bandwidth needs of the app's users. A JSON object can be thought of as a dictionary, in that it contains keys and values for those keys. Types that can be stored in JSON are:

- Strings ("John Appleseed").
- Numbers (3.14, but not NaN).
- Booleans (true and false; only the lowercase forms are valid JSON).
- null, the absence of value (represented by nil in Swift).
- Arrays (a list of basic types, arrays, or dictionaries, such as ["red", "blue", "yellow"]; represented as Array in Swift).
- Dictionaries (a key-value collection of basic types, arrays, or dictionaries).

An example JSON that can be used to represent a person could be:

{
  "name": "Andrei",
  "age": 20,
  "favourite_colors": ["red", "blue"],
  "memories": {
    "best": ["Riding the bicycle", "Building model boats"],
    "worst": null
  },
  "is_friendly": true
}

Another example with a list of todos:

[
  { "task": "Buy milk", "done": false },
  { "task": "Eat a cookie", "done": true }
]

To make a quick check that it's valid JSON, an online validator such as JSONLint can be used. Mismatched parentheses or missing quotes will yield malformed JSON errors. Arrays or dictionaries contained in other arrays or dictionaries are called nested objects. In Swift, JSON objects are represented by Dictionary or Array objects. To keep the Swift code clean, it's good to define constants for any string-typed keys.
It may be a bit of an overhead for small projects, but the amount of avoided spelling errors and the clarity of the code are well worth it.

Working With JSON in Swift

Use the Foundation framework's JSONSerialization to convert between Data and JSON. It is called serialization because it implies converting an object to a stream of bytes. We get this amazing chance because we chose a standard for transmitting data: using JSON, we can be fairly certain that any other programming language already has libraries that help parse it.

Creating JSON Objects

Usually, mobile developers don't need to instantiate JSON data because most of the time it's provided by server APIs. API stands for Application Programming Interface. In the context of JSON, it refers to a set of requests that can be made to servers and their returned values. In the context of the iOS SDK, APIs mean the same thing, but it all happens on the same device. Creating a JSON object is possible by transforming a native Swift dictionary:

import Foundation

let jsonDict = [
    "name": "Swift",
    "developer": "Apple Inc.",
    "releaseDate": [
        "format": "longStyle",
        "date": "June 2, 2014"
    ],
    "typing": ["static", "strong", "inferred"],
    "OS": ["Mac OS", "Linux", "FreeBSD"],
    "license": "Apache License 2.0",
    "website": "swift.org"
] as [String: Any]

if JSONSerialization.isValidJSONObject(jsonDict) {
    if let data = try? JSONSerialization.data(withJSONObject: jsonDict, options: []) {
        print("JSON data object is: \(data)")
    }
}

JSONSerialization.WritingOptions:

- prettyPrinted specifies that the JSON data should be generated with whitespace designed to make the output more readable. If this option is not set, the most compact possible JSON representation is generated.

One interesting observation is that we can't put nil in such a dictionary. As such, we are left with two options to represent the absence of value:

- Remove the key.
- Use NSNull() instead of nil, and have that translated to the JSON null as the serialization occurs.
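As a tiny illustration of the second option (using a made-up dictionary, not one from the article's examples): serializing something like ["name": "Swift", "designer": NSNull()] produces JSON where the absent value appears as null:

```json
{
  "name": "Swift",
  "designer": null
}
```

On the way back, JSONSerialization will hand that null to you as an NSNull instance again, so the key is present but carries no value.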
Reading JSON Objects

Network requests or bundled files return Swift Data objects by default. Accessing the contained JSON is done by converting the data object to a Swift dictionary object:

let json = try? JSONSerialization.jsonObject(with: data, options: [])
if let dictionary = json as? [String: Any],
   let name = dictionary["name"],
   let developer = dictionary["developer"],
   let typing = dictionary["typing"],
   let website = dictionary["website"] {
    // treat it as a string-keyed dictionary.
    print("language name: \(name)")
    print("language developer: \(developer)")
    print("language typing: \(typing)")
    print("language website: \(website)")
    ...
}

JSONSerialization can return either of two top-level objects, Array or Dictionary:

if let rootArray = json as? [Any] {
    // treat it as an array.
}
if let rootDictionary = json as? [String: Any] {
    // treat it as a string-keyed dictionary.
}

To try this out, combine the creating and reading code to make the round trip from dictionary to JSON data and then back to a dictionary. Or see the full code below:

Full Round-Trip Code

All contained objects must be String, Number, Array, Dictionary or NSNull, and numbers can't be NaN (not a number) or infinity, or JSONSerialization will reject them.

Handling Date Formats With JSON

Date formats are usually specified in the server API documentation. Different servers may use different date formats; for example, a U.S.-based server may use the month-day-year notation while European servers use day-month-year notation. Developers need to take into consideration that users can be active in different time zones, so moments in time must be represented in a way that is valid for all of them. The most commonly used formats on the internet are:

- Epoch time: the number of seconds passed since 1 January 1970.
- ISO 8601: a standard for date strings that contain the year, month, day, time and time zone.
The most important point to be made here is that the transmitted date should mean the same thing on the server and the client: 1.2.2017, meaning 2 January 2017, shouldn't end up being interpreted as 1 February 2017. One may use a Swift extension to instantiate dates from strings:

extension Date {
    // expects a dictionary containing:
    // "format": "longStyle",
    // "date": "June 2, 2014"
    static func date(dictionary: [String: Any]?) -> Date? {
        if let format = dictionary?["format"] as? String,
           let date = dictionary?["date"] as? String {
            if format == "longStyle" {
                let formatter = DateFormatter()
                formatter.dateStyle = .long
                return formatter.date(from: date)
            }
        }
        return nil
    }

    // expects a string like: 2016-12-05
    static func date(yyyyMMdd: String?) -> Date? {
        if let string = yyyyMMdd {
            let formatter = DateFormatter()
            formatter.dateFormat = "yyyy-MM-dd"
            return formatter.date(from: string)
        }
        return nil
    }

    // The date format specified in the ISO 8601 standard
    // is commonplace in internet communication.
    // expects a string such as: 2017-01-17T20:15:00+0100
    static func date(iso8601Date: String?) -> Date? {
        if let string = iso8601Date {
            let formatter = DateFormatter()
            formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ssZZZZZ"
            return formatter.date(from: string)
        }
        return nil
    }
}

We'll leave the epoch time implementation as an exercise for the reader.

Instantiating Custom Objects From JSON

While having JSON data in dictionaries is a good thing, we would love to be able to add methods and custom behavior to specific objects. The preferred way is to use Swift structs, mainly because they are passed by value and we can reason better about the information they represent over the lifetime of the app. A simple struct to represent the programming language data structure may be similar to:

struct ProgrammingLanguage {
    let name: String
    let developer: String?
    let releaseDate: Date
    let typing: [String]?
    let os: [String]?
    let license: String
    let websiteURL: URL?
    init?(json: [String: Any]) {
        // As a rule, the developer needs to plan for cases when the data may be polluted.
        // Almost anything can be received; keys and contents can change over updates.
        // Use optional binding for required values.
        guard let name = json["name"] as? String,
              let releaseDate = Date.date(dictionary: (json["releaseDate"] as? [String: Any])),
              let license = json["license"] as? String
        else {
            // If the JSON is missing any of the required fields, the resulting
            // object wouldn't be considered valid.
            return nil
        }
        self.name = name
        self.releaseDate = releaseDate
        self.license = license
        // Use optionals for non-critical values.
        self.developer = json["developer"] as? String
        self.typing = json["typing"] as? [String]
        self.os = json["OS"] as? [String]
        self.websiteURL = URL(string: json["website"] as? String ?? "")
    }
}

Consuming a JSON API

JSON is a standard format used for internet communication that is based on the witty structuring of strings to populate apps with data that is relevant to the user. JSON payloads are usually received as Swift Data in networking callbacks and get translated to Swift dictionaries or arrays. The developer then uses custom structs to add rich behavior to the objects. Servers that use JSON APIs can cover weather, fashion, social media, messaging, news, pretty much everything that can be transmitted over the internet. For your delight, let's consider using a real-life API in Swift:

List Google Books

Google provides a free JSON API for books. A request can be made to the Google Books volumes search endpoint. This endpoint needs a q parameter to perform a query, with a URL-encoded value. The URL-encoded value for "swift programming" is %22swift%20programming%22. This replaces spaces, quotes and other URL-reserved characters with percent encodings. The full request with the parameters can also be tried out in the browser: Swift programming Google Books search.
You should get a response such as: { "kind": "books#volumes", "totalItems": 46, "items": [ { "kind": "books#volume", "id": "Wt66BQAAQBAJ", "etag": "6wKECQYXBz8", "selfLink": "", "volumeInfo": { "title": "Beginning Swift Programming", "authors": [ "Wei-Meng Lee" ], "publisher": "John Wiley & Sons", "publishedDate": "2014-12-04", " ... Making a request with no parameters returns an API error: Google books search without a query. { "error": { "errors": [ { "domain": "global", "reason": "required", "message": "Required parameter: q", "locationType": "parameter", "location": "q" } ], "code": 400, "message": "Required parameter: q" } } It is the duty of the developer to handle cases when errors are returned from APIs and make sure that his app continues to function correctly. In the case of this particular error, the app user should be notified that he entered no search string and be allowed to make a new search. Feel free to check out the Google Books API. All APIs are different in the way that they return other data and require different parameters, but most of them have a specification document like this one. Instantiating Google Books JSON in Swift After reading the API documentation, we can inspect the returned JSON and select pertinent information to use in the code: { "kind": "books#volumes", "totalItems": 46, "items": [ { ... "volumeInfo": { "title": "Beginning Swift Programming", "authors": [ "Wei-Meng Lee" ], "publishedDate": "2014-12-04", "description": "Enter the Swift future of iOS and OS X programming Beginning Swift Programming ...",* ... "imageLinks": { "thumbnail": "" }, ... From this abundance, we can consider a model consisting of a few keys: struct GoogleBook { let title: String let authors: [String] let publishedDate: Date let description: String? let thumbnailURL: URL? init?(json: [String: Any]) { // the initializer should be similar in essence to the ProgrammingLanguage one. ... From the top JSON dictionary, the items key would be extracted. 
For every item in that array, such a GoogleBook should be instantiated (if all the required keys are present, of course). To make an asynchronous request in the Xcode playground we need to import PlaygroundSupport, enable the needsIndefiniteExecution property and signal the playground that the execution actually stopped by calling PlaygroundPage.current.finishExecution(). The code for actually running the request would look something like the following snippet. Except, of course, for the forced unwrapping, which is a recipe for a crash:

let searchTerm = "%22swift%20programming%22"
let googleBooksEndpoint = "https://www.googleapis.com/books/v1/volumes?q=\(searchTerm)"
let url = URL(string: googleBooksEndpoint)!
let request = URLRequest(url: url)
let config = URLSessionConfiguration.default
let session = URLSession(configuration: config)
let task = session.dataTask(with: request, completionHandler: { (data, response, error) in
    // JSON parsing code.
    PlaygroundPage.current.finishExecution()
})
task.resume()
PlaygroundPage.current.needsIndefiniteExecution = true

A data task is created with the URLSession object; it gets configured with the request and completion handler. On resume() it starts running and will call back on the completionHandler. Once the request runs, we will need to handle the response information and create all the GoogleBook objects from it:

if let error = error {
    // The error should be extracted from its JSON dictionary and presented to the user.
} else if let data = data {
    let json = try? JSONSerialization.jsonObject(with: data, options: [])
    if let rootDictionary = json as? [String: Any],
       let items = rootDictionary["items"] as? [[String: Any]] {
        var googleBooks = [GoogleBook]()
        for item in items {
            if let book = GoogleBook(json: item) {
                print("")
                print("parsed a book: \(book)")
                googleBooks.append(book)
            }
        }
        if googleBooks.count == 0 {
            // Should present a message to the user that no results were found.
        }
    }
}

The error cases are usually handled by displaying an AlertController.
It's important to keep in mind that changes can occur in the received objects at any time (including when the application goes live). Because of this, check and make sure that no forced unwrapping is used, and always be able to determine what happens if certain data is invalid. Feel free to inspect the full code:

Full GoogleBook JSON Code

SwiftyJSON

As one can notice in the above code, there is quite some boilerplate code that checks whether the actual values are present and are of the expected types. This can create a sizable overhead when dealing with multiple JSON object schemas. If there were a way to translate:

let json = try? JSONSerialization.jsonObject(with: data, options: [])
if let rootArray = json as? [[String: Any]],
   let userDict = rootArray[0] as? [String: Any],
   let userName = userDict["name"] as? String {
    // Now you got your value
}

into:

let json = JSON(data: dataFromNetworking)
if let userName = json[0]["user"]["name"].string {
    // Now you got your value
}

it would reduce the amount of code tremendously. This fantastic library's name is SwiftyJSON. Don't forget to give it a star on GitHub!

Published at DZone with permission of Andrei Nagy, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
There are at least 2 use cases of import:

(1) providing reusable functionality
- providing ant build file fragments by framework distributors
- using parts of a buildsystem in a cross-project manner
(2) linking subprojects in a large project

Both are partially contrary in terms of basedir handling. For (1), imported functionality would be used in the context of the main project; the basedir should always be the basedir of the main project. For (2), imported functionality would be used in the context of the import (the subproject); the basedir should be set to the import context.

Proposed behavior: If basedir is set in the project element, it means basedir is set in context of the file containing the project element (as before). This is used for subproject handling. If basedir is not set, it will be inherited from the importing project. That is for module usage. This may be downward compatible (basedir not set in the main project is always '.').

Sometimes it is necessary to address resources in the context of an import. For that reason the basedir (context) of each imported project should be accessible from everywhere. A property can be instantiated with a name like imports.<import-name>.basedir while importing a project. imports.basedir should be responsible for the main (root) project's basedir so the root context is accessible from everywhere.

Implementation proposal for basedir handling: Each target is executed in a context holding the basedir. If basedir is set in an imported project, a new context is instantiated and used. Otherwise the existing context is used. So the basedir is held by a context, not the project.

Proposed behavior for target and property name prefixing: Each target and property name and id (forgot something?) should be seen as in a namespace. The namespace is formed by the containing project and named by the containing project name. Inside a project the namespace (prefix) is implicit and can be omitted. When referencing targets and properties outside the containing project, the namespace is used.
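The naming proposed above could look roughly like this build-file sketch (hypothetical: the elements are ordinary Ant syntax, but the "module." target prefix and the imports.module.basedir property describe only this proposal, not implemented behavior):

```xml
<!-- main/build.xml: the importing (root) project -->
<project name="main" default="dist">
  <!-- module/build.xml sets no basedir, so it would inherit main's basedir -->
  <import file="module/build.xml"/>

  <!-- referencing an imported target through its namespace prefix -->
  <target name="dist" depends="module.compile"/>

  <target name="show-context">
    <!-- context (basedir) of the import, exposed as a property -->
    <echo message="module basedir: ${imports.module.basedir}"/>
  </target>
</project>
```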
The resulting name or id is constructed from namespace, delimiter, and local name. The root resources can be referenced using an empty namespace (local name with preceding delimiter). Concrete examples at (except that the root of imported files is named 'module'):

Namespace collisions between independently developed imports can be prevented by using naming conventions like those for Java packages (prepend the URL of the provider of the import).

Proposed implementation: It is done for targets in ANTModule () in a special ProjectHelperImpl while parsing the project file. ANT integration hints are given (). Additionally, the namespace resolving functionality provided by the ProjectHelper must be used for all property names, ids, target names and dependencies while initialising these objects. In my opinion, the importing and namespace-related functionality should be placed in the ProjectHelper, not in a task. That way the existing order problem for top-level elements is solved automatically.

-------------
Claas Thiele
I am wondering how much GPU computing would help me speed up my simulations. The critical part of my code is matrix multiplication. Basically the code looks like the following Python code, with matrices of order 1000 and long for loops:

import numpy as np

Msize = 1000
simulationLength = 50

a = np.random.rand(Msize, Msize)
b = np.random.rand(Msize, Msize)

for j in range(simulationLength):
    result = np.dot(a, b)

If you use numpy, you are probably using one of the BLAS libraries as the computational backend, such as ATLAS, OpenBLAS, MKL, etc. When you are using the fastest one, MKL, you can find a recent performance benchmark here, between a recent Nvidia GPU K40m and a 12-core Intel Xeon E5-2697 v2 @ 2.70GHz, where the K40m is 6x faster than the 12-thread E5-2697. Considering that MKL scales well on multi-core CPUs, the K40m is ~72x faster than the 1-thread E5-2697. Please also note that 1000-dim is almost the lower bound needed to fully utilise both the GPU and the CPU; smaller matrix sizes usually lead to more performance degradation on the GPU.

If you are using a slower BLAS backend for numpy, say the GNU-licensed ATLAS, you can find a comparison between MKL and ATLAS here, where MKL is 2~4x faster than ATLAS. For Nvidia GPUs, the only widely used backend is CUDA's cuBLAS, so the performance won't change a lot like ATLAS vs. MKL.

As @janbrohl says, data transfer between host RAM and GPU device memory is an important factor that affects the overall performance. Here's a benchmark of the data transfer speed: CUDA - how much slower is transferring over PCI-E? Given the matrix size, you can actually calculate out the absolute times for computation and data transfer, respectively. These could help you evaluate the performance better. To maximise the performance on the GPU, you probably need to redesign your program to minimise the data transfer, by moving all the computational operations to the GPU, rather than matrix multiplication only.
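Before committing to a GPU port, it is worth measuring the CPU baseline for exactly these sizes. A rough timing sketch (plain NumPy, with the question's values):

```python
import time
import numpy as np

Msize, simulationLength = 1000, 50
a = np.random.rand(Msize, Msize)
b = np.random.rand(Msize, Msize)

start = time.perf_counter()
for _ in range(simulationLength):
    result = a @ b  # same as np.dot(a, b) for 2-D arrays
elapsed = time.perf_counter() - start

# a matrix multiplication of order n costs about 2 * n^3 flops
gflops = 2 * Msize**3 * simulationLength / elapsed / 1e9
print(f"{elapsed:.2f} s total, {gflops:.1f} GFLOP/s achieved")
```

Dividing a candidate GPU's sustained GFLOP/s by the measured figure, and adding the PCI-E transfer time for the roughly 8 MB per 1000x1000 float64 matrix, gives a first-order speedup estimate.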
September 30, 2018

Single Round Match 738 Editorials

The round was held on 30th Sept, 2018. Thanks to wild_hamster for the interesting problem set and editorials. Thanks to Petr for recording his screen during the round. Click here to see Petr's screencast of the round.

Div2Easy: DriveTheCarEasy

Increasing the speed by speed[i] before second moments[i] means the car covers speed[i] extra meters during each of the remaining S - moments[i] + 1 seconds. So for each pair (speed[i], moments[i]) we need to add (S - moments[i] + 1) * speed[i] to the answer.

Code:

public class DriveTheCarEasy {
    public long calculateDistance(int S, int N, int[] speed_changes, int[] moments) {
        long ans = 0;
        for (int i = 0; i < N; i++)
            ans += (long)(S - moments[i] + 1) * speed_changes[i];
        return ans;
    }
}

Time complexity: O(N).

Div2Medium: EnergySource

There can be at most 10^5 possible divisions under the given constraints. We can generate all of them recursively: for each division, we take every element not equal to 1, try to split it into g > 1 new elements, and recurse into the newly generated division if we haven't visited it yet. Doing this straightforwardly will not fit in the time limit, so there are several possible ways to speed it up:

1. Precalculate all values from 1 to 90 inclusive.
2. For each distinct element of a division, remember only its number of occurrences instead of all the elements.
3. Precalculate the divisors of each number from 1 to 90, and for each element of the division try only splits whose number of parts divides the element.
import java.util.*;

public class EnergySource {
    long ans1 = 0;
    long ans2 = 0;
    int itr = 0;
    ArrayList<Integer>[] divisors = new ArrayList[105];
    ArrayList<Integer> g = new ArrayList<>(); // g.get(i) = occurrences of the i-th divisor value
    HashSet<Long> f = new HashSet<>();        // encodings of already visited divisions
    int[] get_idx = new int[105];
    int[] cur_divisors = new int[105];

    // encode the current division (its occurrence counts) as a single long
    long get() {
        int sz = g.size();
        long ans = 0, mul = 1;
        for (int i = sz - 1; i >= 0; i--) {
            ans += mul * (g.get(i) + 1);
            mul *= (cur_divisors[sz - i - 1] + 2);
        }
        return ans;
    }

    void go() {
        itr++;
        if (f.contains(get())) {
            return;
        }
        f.add(get());
        ans1++;
        long cur = 1;
        for (int i = 1; i < g.size(); i++) {
            for (int j = 0; j < g.get(i); j++) cur *= cur_divisors[i];
        }
        ans2 += cur;
        for (int i = 1; i < g.size(); i++) {
            if (g.get(i) > 0) {
                for (int j = 1; j < divisors[cur_divisors[i]].size(); j++) {
                    int cur_div = divisors[cur_divisors[i]].get(j);
                    int diff = cur_divisors[i] / cur_div;
                    // split one element of value cur_divisors[i] into cur_div parts of value diff
                    g.set(get_idx[diff], g.get(get_idx[diff]) + cur_div);
                    g.set(i, g.get(i) - 1);
                    go();
                    g.set(i, g.get(i) + 1);
                    g.set(get_idx[diff], g.get(get_idx[diff]) - cur_div);
                }
            }
        }
    }

    void solve(int n) {
        for (int i = 1; i < divisors[n].size(); i++) {
            g.add(0);
        }
        g.add(1);
        for (int i = 0; i < divisors[n].size(); i++) {
            get_idx[divisors[n].get(i)] = i;
            cur_divisors[i] = divisors[n].get(i);
        }
        go();
    }

    public long[] countDifferentSources(int power) {
        for (int n = 1; n <= 100; n++) {
            divisors[n] = new ArrayList<>();
            for (int i = 1; i <= n; i++) if (n % i == 0) {
                divisors[n].add(i);
            }
        }
        solve(power);
        long[] ans = {ans1, ans2};
        System.out.println(f.size());
        return ans;
    }
}

Div2Hard: MovingByPoints

For any pair of points (x1, y1), (x2, y2), the minimal number of added points needed to get from one to the other without using any other given point is |x2 - x1| + |y2 - y1| - 1. Let's treat every point as a node of a graph, with an edge between every pair of points whose weight equals that distance. Then we can use Dijkstra's algorithm to find the shortest path between points 1 and N.
public class MovingByPoints {
    int d[][] = new int[1005][];
    int w[] = new int[1005];
    int used[] = new int[1005];

    int Abs(int x) { return (x > 0 ? x : -x); }

    public int countMinimumPoints(int N, int[] X, int[] Y) {
        for (int i = 0; i < N; i++) d[i] = new int[1005];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                d[i][j] = Math.max(0, Abs(X[j] - X[i]) + Abs(Y[j] - Y[i]) - 1);
        for (int i = 0; i < N; i++) w[i] = 1000000000;
        w[0] = 0;
        for (int i = 0; i < N; i++) {
            int min1 = 1000000000;
            int idx = -1;
            for (int j = 0; j < N; j++) {
                if (used[j] == 0 && w[j] < min1) {
                    min1 = w[j];
                    idx = j;
                }
            }
            used[idx] = 1;
            for (int j = 0; j < N; j++) {
                w[j] = Math.min(w[j], w[idx] + d[idx][j]);
            }
        }
        return w[N - 1];
    }

    public String checkData(int N, int[] X, int[] Y) {
        if (N < 1 || N > 500) {
            return "N must be between 1 and 500, inclusive.";
        }
        if (X.length != N || Y.length != N) {
            return "X and Y must have exactly N elements.";
        }
        for (int i = 0; i < N; i++) {
            if (X[i] < 1 || X[i] > 1000000 || Y[i] < 1 || Y[i] > 1000000) {
                return "Each element of X and Y must be between 1 and 10^6, inclusive.";
            }
        }
        return "";
    }
}

Time complexity: O(N^2).

Div1Easy: FindThePerfectTriangle

We can notice that if one of the sides of the triangle has irrational length, then the perimeter will be irrational too. So we need to find all vectors (vx, vy) with integer vx, vy such that vx^2 + vy^2 = k^2 for some positive integer k <= P. We will call such vectors good vectors. There are not many of them (not more than 10^4). We can construct two sides of a triangle from good vectors (vx1, vy1) and (vx2, vy2), using the points (0, 0), (vx1, vy1), (vx1+vx2, vy1+vy2), where (vx1+vx2, vy1+vy2) must also be a good vector. In this way we can construct all possible triangles with integer coordinates, integer area and integer perimeter. After that, we check whether (P, S) belongs to one of those triangles.
public class FindThePerfectTriangle {
    int[] a = new int[1005000];
    int[][] good = new int[1005][];
    int mx[] = new int[5005];
    int my[] = new int[5005];
    int segment_sz = 0;

    int Abs(int x) { return (x > 0 ? x : -x); }

    int S(int x1, int y1, int x2, int y2, int x3, int y3) {
        int ans = (x1 + x2) * (y1 - y2) + (x2 + x3) * (y2 - y3) + (x3 + x1) * (y3 - y1);
        return Abs(ans);
    }

    public int[] constructTriangle(int area, int perimeter) {
        for (int i = 0; i <= 1000000; i++) a[i] = 0;
        for (int i = 1; i <= 1000; i++) {
            a[i * i] = i;
        }
        for (int i = 0; i <= 1000; i++) {
            good[i] = new int[1005];
            for (int j = 0; j <= 1000; j++) {
                good[i][j] = 0;
            }
        }
        for (int i = 0; i <= 1000; i++) {
            for (int j = 0; j <= 1000; j++) {
                int x = i * i + j * j;
                if (x <= 1000000 && a[x] > 0) {
                    mx[segment_sz] = i;
                    my[segment_sz] = j;
                    good[i][j] = good[j][i] = a[x];
                    segment_sz++;
                }
            }
        }
        for (int i = 0; i < segment_sz; i++) {
            for (int j = 0; j < segment_sz; j++) {
                int x1 = mx[i] + mx[j];
                int y1 = my[i] + my[j];
                int ar = S(0, 0, mx[i], my[i], x1, y1);
                if (ar > 0 && ar % 2 == 0 && x1 <= 1000 && y1 <= 1000 && good[x1][y1] > 0) {
                    int P = good[mx[i]][my[i]] + good[mx[j]][my[j]] + good[x1][y1];
                    if (P == perimeter && ar / 2 == area) {
                        int[] ans = {1, 1, mx[i] + 1, my[i] + 1, x1 + 1, y1 + 1};
                        return ans;
                    }
                }
                x1 = mx[i] - mx[j];
                y1 = my[i] + my[j];
                ar = S(0, 0, mx[i], my[i], x1, y1);
                if (ar > 0 && ar % 2 == 0 && Abs(x1) <= 1000 && y1 <= 1000 && good[Abs(x1)][y1] > 0) {
                    int P = good[mx[i]][my[i]] + good[mx[j]][my[j]] + good[Abs(x1)][y1];
                    if (P == perimeter && ar / 2 == area) {
                        int x = Math.max(1, -x1 + 1);
                        int[] ans = {x, x, mx[i] + x, my[i] + x, x1 + x, y1 + x};
                        return ans;
                    }
                }
            }
        }
        int[] ans = {};
        return ans;
    }

    int find_int_dist(int x1, int y1, int x2, int y2) {
        int dist = (x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1);
        for (int i = 1; i <= 5000; i++) if (i * i == dist) return i;
        return -1;
    }

    boolean in_range(int x) { return (x >= 0 && x <= 3000); }
}

Div1Medium: LightBulbGame

As this is a combinatorial game and the state space is too large to solve it using brute force, we have to look for a
more clever solution. One of the traditional tools that often works is the Sprague-Grundy theory, which can be applied efficiently whenever the game can be split into a collection of independent smaller games. The main challenge in this problem was to realize that this is indeed the case here. At first glance this is not obvious, because it seems that the lightbulbs cannot be independent if you can sometimes turn two of them off in one move.

Imagine that instead of lightbulbs we play the game with pebbles: add a pebble instead of turning the lightbulb on, and remove a pebble instead of turning it off. Then, "just turning lightbulb L off" is the same as "just removing the pebble from L". I now claim that "turning off L and toggling L'" is equivalent to "moving a pebble from L to L'". Why is that the case? If I turned L' on, the equivalence is obvious. If I turned L' off, I now have two different situations: in the lightbulb game the lightbulb is off, while in the pebble game I now have two pebbles at L'. Why are these two situations equivalent? Because the two pebbles at the same location can safely be ignored, as if they weren't there. More precisely, imagine the two pebbles are not there and find out which player has the winning strategy. That player still has a winning strategy if we add those two pebbles back: follow the original strategy with the original pebbles, and whenever the opponent moves one of the two extra pebbles, mirror that move on the other extra pebble.

Thus, we can use simple dynamic programming to compute the Sprague-Grundy value for each lit lightbulb, xor them to get the Sprague-Grundy value of the entire board, and then we can examine all possible first moves and evaluate each of them independently.
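As a sanity check on the mex computation used in the solution below, the per-cell Grundy values here turn out to follow a simple closed form: the game at cell (r,c) is two-heap Nim on the distances to the bottom and right edges, plus one extra move (removing the pebble) that always reaches value 0 and therefore shifts every Grundy value up by one. This closed form is my own observation, not stated in the editorial; a minimal sketch that verifies it against the DP:

```java
import java.util.HashSet;

// Recompute the per-cell Sprague-Grundy values with the same mex recurrence
// as the solution, and compare them against the conjectured closed form
// SG(r,c) = ((R-1-r) ^ (C-1-c)) + 1.
public class SGCheck {
    public static int[][] grundy(int R, int C) {
        int[][] sg = new int[R][C];
        for (int r = R - 1; r >= 0; --r)
            for (int c = C - 1; c >= 0; --c) {
                HashSet<Integer> reach = new HashSet<>();
                reach.add(0); // just turning the bulb off reaches the terminal position
                for (int nr = r + 1; nr < R; ++nr) reach.add(sg[nr][c]);
                for (int nc = c + 1; nc < C; ++nc) reach.add(sg[r][nc]);
                while (reach.contains(sg[r][c])) ++sg[r][c]; // mex of reachable values
            }
        return sg;
    }

    public static void main(String[] args) {
        int R = 9, C = 7;
        int[][] sg = grundy(R, C);
        for (int r = 0; r < R; ++r)
            for (int c = 0; c < C; ++c)
                if (sg[r][c] != (((R - 1 - r) ^ (C - 1 - c)) + 1))
                    throw new AssertionError("mismatch at " + r + "," + c);
        System.out.println("closed form matches");
    }
}
```

The shift by one follows from the fact that the reachable set from (a,b) in two-heap Nim contains every value below a xor b and never a xor b itself; adding the always-available move to 0 pushes the mex up by exactly one.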
import java.util.*;

public class LightbulbGame {
    public int countWinningMoves(String[] board) {
        int R = board.length, C = board[0].length();
        int[][] SG = new int[R][C];
        for (int r = R-1; r >= 0; --r) for (int c = C-1; c >= 0; --c) {
            HashSet<Integer> reachable = new HashSet<>();
            reachable.add(0);
            for (int nr = r+1; nr < R; ++nr) reachable.add(SG[nr][c]);
            for (int nc = c+1; nc < C; ++nc) reachable.add(SG[r][nc]);
            SG[r][c] = 0;
            while (reachable.contains(SG[r][c])) ++SG[r][c];
        }
        int sg = 0;
        for (int r = 0; r < R; ++r) for (int c = 0; c < C; ++c)
            if (board[r].charAt(c) == '1') sg ^= SG[r][c];
        int answer = 0;
        for (int r = 0; r < R; ++r) for (int c = 0; c < C; ++c)
            if (board[r].charAt(c) == '1') {
                int needed = sg ^ SG[r][c];
                if (needed == 0) ++answer;
                for (int nr = r+1; nr < R; ++nr) if (SG[nr][c] == needed) ++answer;
                for (int nc = c+1; nc < C; ++nc) if (SG[r][nc] == needed) ++answer;
            }
        return answer;
    }
}

Div1Hard: DriveTheCarHard

We can notice that increasing the speed before the i-th second (0-indexed) by K meters/second increases the total distance passed in the end by (totalTime-i)*K. So we have an obvious dynamic programming solution with O(distance^2*log(distance)) time complexity: let dp[time][distance] be the optimal answer for time seconds and distance meters; then for each non-negative K we can update dp[time+1][distance+(time+1)*K] with min(dp[time+1][distance+(time+1)*K], dp[time][distance] + K^2). But it is not fast enough under the given constraints, so we apply the following optimizations.

Let's define an array a where a[i] means increasing the speed by a[i] before the second with number i. We can notice that when (time+1)*time/2 >= distance, we can construct an array of speed increases for each second consisting only of zeros and ones that leads to moving exactly distance meters in time seconds. After constructing it, we can improve the amount of fuel greedily by trying to add 2 at the start of the array. It can be proven that it is never optimal to add 3 in that case.
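For reference, the straightforward DP mentioned above (before the optimizations) can be sketched directly. It indexes dp by the number of remaining seconds, matching the indexing of solve_fast in the full solution, but without the pruning around the real-valued optimum; it is only usable for small limits, and the names are mine, not from the problem:

```java
// Naive version of the O(distance^2 * log(distance)) recurrence.
// dp[i][j] = minimal fuel to cover j meters using the i last seconds.
public class SlowDp {
    public static int minFuel(int time, int distance) {
        final int INF = 1_000_000_000;
        int[][] dp = new int[time + 1][distance + 1];
        for (int j = 1; j <= distance; j++) dp[0][j] = INF;
        for (int i = 1; i <= time; i++)
            for (int j = 0; j <= distance; j++) {
                dp[i][j] = INF;
                // a speedup of k taken when i seconds remain contributes
                // k meters to each of those i seconds and costs k^2 fuel
                for (int k = 0; k * i <= j; k++) {
                    int prev = dp[i - 1][j - k * i];
                    if (prev < INF) dp[i][j] = Math.min(dp[i][j], prev + k * k);
                }
            }
        return dp[time][distance];
    }
}
```

For example, minFuel(2, 3) is 2: a speedup of 1 in each of the two seconds covers 2+1 = 3 meters for 1+1 units of fuel.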
We can also observe that for time >= 261 and distance <= 30000 we can construct an array of speed increases consisting only of zeros and ones, without even adding a 2 to the array, and calculate the answer as the number of ones in the array. This is because:

1. You can always take a smaller value in the last iteration of adding ones to the array, to produce exactly the distance you need. That solution is clearly the optimal solution among those that only use speedups by 1.

2. For these constraints you never need to increase your speed by more than 1. This is because for time >= 261, even if you use the above greedy for distance = 30000, the three largest unused effects of a speedup by 1 will still be at least as large as the effect of changing the speedup at the beginning from 1 to 2.

Now we need to solve the problem for (time+1)*time/2 < distance. Let's first try to solve this problem with real-valued speedups. If we need to move distance meters in time seconds, the total distance travelled will be equal to time*x1 + (time-1)*x2 + (time-2)*x3 + ... + 1*x_time; for the time*x1 meters, x1^2 fuel will be spent, for the (time-1)*x2 meters, x2^2 fuel, and so on. To minimize the amount of fuel used we need to maximize the number of meters travelled per unit of fuel. So we need to maximize the values time/x1, (time-1)/x2, (time-2)/x3, ..., 1/x_time, and they all must be equal. This gives x1 = time*x, x2 = (time-1)*x, ..., x_time = 1*x, and x1*time + x2*(time-1) + ... + x_time*1 = distance, leading to x*(1^2+2^2+...+time^2) = distance, so the minimal amount of fuel is equal to x^2*(1^2+2^2+...+time^2) = distance^2/(1^2+...+time^2).

Now we can calculate these values for all distance <= 30000 and time <= 261, compare them with the dp[time][distance] computed in O(distance^2*log(distance)), and find out that the maximum difference is 30. We also get that for dp[time][distance] the optimal K, taken as a real value in dp[time-1][distance-K*time] + K*K, is equal to x1 = time*x = 6*distance/((time+1)*(2*time+1)).
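Written out in display form, the real-valued relaxation described above is (with T = time, D = distance, and x_k the speedup before second k):

```latex
% Restating the editorial's derivation of the continuous optimum.
\begin{align*}
\text{distance constraint: } & \sum_{k=1}^{T} (T+1-k)\,x_k = D,
\qquad \text{fuel: } \sum_{k=1}^{T} x_k^2 .\\
\text{Equal ratios force } & x_k = (T+1-k)\,x, \text{ hence }
x \sum_{i=1}^{T} i^2 = D, \quad
x = \frac{D}{\sum_{i=1}^{T} i^2} = \frac{6D}{T(T+1)(2T+1)} .\\
\text{minimal fuel} &= x^2 \sum_{i=1}^{T} i^2
= \frac{D^2}{\sum_{i=1}^{T} i^2}
= \frac{6\,D^2}{T(T+1)(2T+1)}, \qquad
x_1 = T\,x = \frac{6D}{(T+1)(2T+1)} .
\end{align*}
```

The last two expressions are exactly what the full solution computes in calc() and uses as optK.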
The values of dp[time-1][distance-K*time] + K*K, as a function of K, form monotonic sequences on the segments [K_opt, +infinity) and (-infinity, K_opt], so we can iterate K down and up over the integers while dp[time-1][distance-K*time] + K*K is not bigger than optimal_double_solution(time, distance) + 30, and this leads to fewer than 10^8 operations for distance <= 3*10^4.

public class DriveTheCarHard {
    int[][] dp = new int[305][];

    double calc(int i, int j) {
        return (1. * j * j / (1. * i * (i + 1) * (2 * i + 1) / 6));
    }

    int solve_fast(int x, int y) {
        for (int i = 1; i <= 30000; i++) dp[1][i] = i * i;
        for (int i = 2; i <= 250; i++)
            for (int j = 1; j <= 30000; j++) dp[i][j] = 1000000000;
        int cnt = 0;
        for (int i = 2; i <= 250; i++) {
            for (int j = 0; j <= 30000; j++) {
                double pre = calc(i, j);
                int optK = (int) (6. * j / (1. * (i + 1) * (2 * i + 1)));
                for (int k = optK; j - k * i >= 0; k++) {
                    if (dp[i - 1][j - k * i] + k * k < pre + 30) {
                        dp[i][j] = Math.min(dp[i][j], dp[i - 1][j - k * i] + k * k);
                    } else {
                        break;
                    }
                }
                for (int k = optK; k >= 0; k--) {
                    if (dp[i - 1][j - k * i] + k * k < pre + 30) {
                        dp[i][j] = Math.min(dp[i][j], dp[i - 1][j - k * i] + k * k);
                        cnt++;
                    } else {
                        break;
                    }
                }
            }
        }
        return dp[x][y];
    }

    int solve_over_sqrt(int T, int D) {
        int cur_cnt = T;
        int[] ones = new int[1050];
        int cur_sz = 0;
        while (D > 0) {
            int val = Math.min(D, cur_cnt);
            D -= val;
            ones[cur_sz++] = val;
            cur_cnt--;
        }
        int ans = cur_sz;
        int pnt = 0;
        int left = 0;
        while (true) {
            if (cur_sz < 3) break;
            int sz = cur_sz;
            if (ones[pnt] + left >= ones[sz - 1] + ones[sz - 2] + ones[sz - 3]) {
                left += ones[pnt];
            } else {
                break;
            }
            ans += 3;
            while (ones[cur_sz - 1] <= left) {
                left -= ones[cur_sz - 1];
                cur_sz--;
                ans--;
            }
            pnt++;
        }
        return ans;
    }

    public int findMinimumFuel(int total_time, int distance) {
        for (int i = 0; i < 300; i++) dp[i] = new int[30500];
        if (total_time <= 250) {
            return solve_fast(total_time, distance);
        }
        return solve_over_sqrt(total_time, distance);
    }
}

wild_hamster
https://www.topcoder.com/single-round-match-738-editorials/
Windows 8 Serial Port Support

Will Windows 8 support the Serial Port, managed code and Metro apps like it has in the past? Meaning, will we still be able to do something like the below to communicate with a USB/virtual COM port that uses serial data (e.g. COM1, 9600, etc.)?

SerialPort spTest = new SerialPort("COM1");

Hope so... this is a biggie for us. Thanks

Wait, what? The .NET Framework v4's SerialPort class is unsupported on Win8? I don't think you meant that. And if your platform has a serial port and it has Win8 installed, I'm pretty sure the serial port will work just as it always has worked.
Mark Roddy, Windows Driver and OS consultant

This is only based on what I saw at the BUILD conference and blogs over the last week. Here is what I know: desktop apps will still support the full .NET Framework, so File.IO and SerialPort will still work. The new Metro-style apps run in a "sandbox" and will not have certain access to things (at least without the new contracts), e.g. File.IO, sockets, etc. I'm wanting to know if the SerialPort class (or something similar) will be available in the new Metro (WinRT) apps? If it is not supported, will there be something else that allows us to read/write to serial devices? It appears there is a new driver model to make things easier to connect to devices, but vendors will have to develop new interfaces for this. I hope that Microsoft (and the community) can deliver something consistent for serial communications.

you should not be communicating with a usb device as a virtual com port. use more native USB abstractions (which also are not projected into a metro app either unless it is WPD or storage most likely)
d
-- This posting is provided "AS IS" with no warranties, and confers no rights.

"you should not be communicating with a usb device as a virtual com port. use more native USB abstractions" — you seem to think that one is able to control both sides (app developer on the host side and a fixed situation of a virtual COM port on the device side). There are plenty of existing devices that can only communicate via the USB virtual COM port.

when you have a choice, using serial to talk to a usb device is usually one of the worst paths to go down. unnecessary abstraction. if you have no choice, you have no choice. if you want to talk to a serial port in a metro app, that is not going to work.
d
-- This posting is provided "AS IS" with no warranties, and confers no rights.

"There are plenty of existing devices that can only communicate via the USB Virtual COM Port." — So is this really a big problem? Just ignore that Metro stuff and use the same applications, almost in the same way as they work today on Win7. This covers all machines compatible with Win7. For ARM tablets... no idea.
-- pa

Since we don't have a serial solution on Win8 or WP7/8, my company is forced to stay on Win7 and PocketPC. There is nothing wrong with using WPF, and it is great to use for these types of apps. I do believe, if MS is saying WinRT is the future, they should have a solution for this platform. I'm just trying to find out what support the new WinRT will have for serial communications. There are (and will be) many devices that use serial communications for years to come, and this should be a MUST-have in WinRT. I really like the ease of use of the SerialPort Class and would like for it to be ported and included out of the box in WinRT. If not the SerialPort Class, it would be nice to have something! Hope MS is listening on this feature.

Who said it had to be a physical serial port? And who said it had to be on ARM?
I can think of lots of applications that can be used without serial interfacing when it is unavailable, and can be augmented with extra functionality when the serial device is available. There are many devices that incorporate the Bluetooth Serial Port Profile. Let's just start with any regular GPS module: (nearly) all GPS modules communicate through serial NMEA output. I know you could use the sensor API for generic GPS positioning data, but when using industry-grade GPS receivers that's of no decent use: you'd want the raw data. And what about hardware to control? Barcode scanners, robotics, EEPROM flashing, even my iRobot robotic home vacuum cleaner supports a serial interface. Is there really no way to bypass this limitation? It would be a huge no-no for our company if serial support is abandoned. I'm not afraid of putting in some extra mileage if the overly simplified SerialPort class is deprecated, and I won't mind introducing low-level handling myself. But serial access is really a necessity!

We have several needs to communicate with serial devices. This includes WPF/WinForms apps using the SerialPort Class to communicate via the PC's serial ports (either the physical 9-pin or a USB virtual COM dongle). There are several different protocols we use to communicate serially with our products or other devices. An example of this would be Modbus @ 9600 baud. I agree with @Joep, in that it is not a MUST to have the same SerialPort Class for WinRT and Metro-style apps. However, it would be very nice, as this is an easy-to-use API. If the SerialPort Class cannot be provided, then at least have a solution using the new HID driver interface, etc. I'm assuming companies like FTDI will provide this. I just hope it will be around the time Win8 and WP8 RTM. Also, another thing to point out: Microsoft removed the SerialPort Class in Visual Studio 2003. There was such a demand for this, so they added it back in VS 2005.

I hope MS will listen to our needs and add it for their new platform as well. Thanks

Again, I realize that the desktop mode in Win8 will continue to support the SerialPort Class in .NET. My questions/concerns are related specifically to WinRT... meaning, what will be the solution for connecting and communicating with serial devices (including supported devices and the APIs to use)? I came across this Build 2011 session (see link below), but have not watched it yet. If you notice, the description mentions UARTs. I would like Microsoft to provide more documentation and examples on how to communicate using this new HID class, Sensor Fusion (or whatever it is). Again, I know Win8 has not even RTMed yet, but it would be nice to know how this will work, who/what types of devices will be supported, and possibly where to get started. Thanks again.

@Doron: No disrespect, Doron, but you don't seem to know the embedded community and what/how the USB port is used for. Again, I state it's a shame there is no CDC class support. To enlighten yourself please check out an ARM website for USB. The *** CDC class is the de facto standard *** for comms using a virtual serial port. No actual serial port is required. So any solution not involving a complete firmware re-write for Metro Apps and paying thousands for a Windows driver???

Yes, you still don't have a clue. Nothing to do with your ARM equipment, which would use a high-end A8/A9 etc. This is the embedded market, of which ARM is just one manufacturer. Here's another microcontroller link using CDC. CDC is the best method for USB comms.

Ok, so CDC is the best method to expose virtual serial ports. Now could you explain how virtual serial ports can be high priority for Microsoft's ARM tablet, competitor of iPad and Android tablets? Do many iPad users badly lack virtual serial ports? Note that these tablets already have BT, various sensors, touch - all that without needing a virtual com port.
-- pa

Actually, in my company's industry the answer is yes... iPad and Android do badly lack this. Even with BT, they do not have a good solution for the Serial Port Profile (SPP) for BT. I agree, in most cases with tablets, they won't be used for serial communications today. In my industry (where we communicate with many various serial embedded devices), we will need this for both tablets and PCs. If tablets and smartphones are the future... then yes... they will be required to have serial communications (USB and/or BT).

The main reason I created this thread a year ago is because Microsoft is stating it is moving toward this new WinRT platform. I understand there will still be the desktop mode (at least for Win 8) and we will still be able to use the regular .NET SerialPort Class within that mode. I'm simply asking what the option is that Microsoft is providing on the new WinRT platform, since it is the "future". When I mentioned "virtual COM ports" I wasn't requesting that virtual COM ports were a must-have, but instead saying serial communications is a must-have regardless of the method provided. I was asking how developers will be able to send and receive serial data out of the USB and BT ports. Example in C#:

TheNewWinRtPort port = new TheNewWinRtPort();
port.Baud = 9600;
port.Open();
port.WriteBytes("hello world"); // Send bytes out of port.
port.Close();

I honestly believe that companies like FTDI will provide some type of new method to communicate serially (I guess with the Sensor API), but again I'm wanting to know what the direction is and how WinRT is going to provide this. Is there a standard for this in WinRT? Will this method be a standard between Intel and ARM? Will this standard work the same serially from the standpoint of USB ports and BT ports? Don't get me wrong, I like WinRT and it will bring nice opportunities for me and my company... I just want a simple answer from Microsoft on what this answer is. Do we need to fill out an NDA? If so, please send the info on how to get this information. Also, I attempted to contact the Sensor API group, but no response. One last note... I noticed that the BT profiles released for Win8 did not include the Serial Port Profile (SPP). Thanks again.

While you're waiting for answers from FTDI and MS, please consider that Modbus is a very ancient protocol. Today you can buy (or even hack yourself) an Ethernet adapter for your Modbus devices, and just run them over the network. It will also be much faster, because IIRC Modbus over the PC serial port requires a kind of timing that Windows can hardly sustain at high baud rates. I haven't checked recently whether USB-to-Modbus (not raw serial!) adapters exist; if not, you can again hack your own. Either LAN or USB is much faster and more reliable than raw serial.
-- pa

@Shaggyqi: Finally somebody with sense!! Good post. I bet FTDI will produce a driver, but that's an expensive way of doing things. Most microcontrollers support USB / CDC.

@Pavel: Yes, obviously it's not a priority to support CDC/virtual serial ports, otherwise Microsoft would have done it. I think you are confused with RS232 serial & "Modbus"??? For short it's called just 'Serial'. My point is, it's widely used on a PC/Mac/Linux. The code/firmware exists on both the microcontroller and the computer, making it easy to develop USB-based products. Since the embedded market is unknown to some, I will give an example of my company's need: >>>> To display & store car diagnostic signals - attaching a device which converts the car's signals to USB. <<<<< When you work in certain fields, i.e. web designer or desktop programmer, it's sometimes not immediately apparent the multitude of other uses a tablet/PC might have!!! This is one; I could think of industrial, medical etc. where a company wants a generic USB standard. Have a nice day...

This link is worth a read:

I was just looking into this as a way to connect an RFID reader to an ARM Win8 tablet for an embedded scenario.
It would be highly convenient to have a tablet-type device w/touchscreen instead of a full laptop as the "intelligence" behind a mobile timing system. The RFID reader I currently work with (passive tags, very low cost per tag) uses a serial interface. In the past, working on an earlier version of this system, I used a C# library for serial port communication, back before it was supported as part of .NET. I would love to be able to use an ARM tablet as the intelligence behind it. From TI's site: "Each reader is accompanied with PC compatible software that allows the user to read and program tags. Serial communications RS232 or RS422/485 are required." I don't care about the PC software (I want to write my own) but the serial port is a must. This is by far the best line of products from a hardware perspective for my task. I don't want it to be a desktop app, but rather an immersive app that's touchscreen-focused. Is there any way to get this to work? Even if it involves buying some sort of hardware converter that takes the serial port connector and outputs something that works with the sensor fusion stuff, that would be fine. FYI, in the past I've used a serial -> USB device to plug it into modern computers. This has never been an issue for me. Even if I needed x86 that would be fine, if there was a way to access it via the immersive environment and not the desktop. I'd prefer something as cheap, light, and power-efficient as possible though, so ARM seems like a good fit. Any thoughts on solving this problem? (on Windows 8)

If it's a TI pre-built device with RS232, then that pretty much limits you to a USB-to-serial converter and, I believe, x86 with the full version of Windows (non-WinRT) 8 for now. And I would test it well to ensure the device still communicates well with no timing issues with the original software. I bet the gap left by Microsoft's missing CDC serial support will be filled by some 3rd-party driver solution, but until then we have to wait and see. Dare I say it, but it may be worth considering an alternative platform in the future if your company is supplying the tablet & writing the software. Contact me should you find you need to outsource your design - our website is
Good luck, that sounds like an interesting project, Steve.

the only 3rd party drivers for arm that can be written are by the OEM and its partners. there are no after market drivers you can install on a winrt tablet. also, to create a winrt tablet, you must work directly with Microsoft for the entire process. so that means while TI has a bunch of parts you may want to use, you can't create an off the shelf product with them without first working with us. X86 is a different story.
d
-- This posting is provided "AS IS" with no warranties, and confers no rights.

We are the 'OEM'? We are the guys who build the device to connect to the tablet. Ok, a driver by a 3rd-party company specializing in USB driver development, such as Jungo. Metro-style device app development - a whole section on USB. CDC 'is' missing, lols, of course. Heavy reading but quite possible. It's my job to use a 'bunch of parts' :) Few late nights ahead...
The fact is that we require communications with bluetooth devices which only support SPP protocols. There is no way we can change that, not even the OEM can, since the devices already exist and cannot be updated for what communications are concerned. In our line of business we are talking about devices in the price range of a decent SUV, some of them a multiple of that. So replacing 20 of them overnight is not an option. That's why I won't mind putting extra effort in to write custom drivers or something like that. But I would need a pointer in the right direction. I know about driver certification and signing etc. But one step at a time: this is for now only to be used in our internal environment, so running in test mode wouldn't be an immediate problem. And we could take it from there if needed and proven successful... -> Metro and hardware forum. -- pa -> Metro and hardware forum. -- pa Just trying to keep this thread up, in hope that Doron and other MS folks are reading. The more they are aware of real usage cases, the better. As we all know, some time ago MS & PC ecosystem rulers declared the serial port obsolete and started phasing it out. But they can reconsider / redesign the serial support, if there's a real need (= real money). -- pa Does this mean that no hardware company can create a software solution for WinRT for pre-existing hardware if it doesn't use any of the standard contracts that Microsoft implemented? That's odd... Whether or not a desktop app is better depends on the intended use. Our systems are designed to be used out in the field - truely mobile. We want our devices as portable as feasable, and x86 systems supporting desktop apps will never be that portable. It's insane to require heavier (both physical and financial) hardware when the lighter hardware has all the required capabilites, but it's OS just lacks the ability to expose it... The fact that tablets of competitors don't support SPP is not an argument to not support it on WinRT. 
The lack of support with other vendors was one of the main reasons to stick with Windows for this application. Sandboxing applications is a great concept for regular use, but if the end-user really really really wants to allow a specific app to break out of the box it should be possible somehow. Especially in corporate environments. If it is allowed to bypass the Store by side loading applications by the domain administrators, you would expect these domain administrators to be able to lower the gates for specific applications somehow. - Edited by Joep Beusenberg Friday, September 28, 2012 8:12 AM typos There is now a follow up article: and interviewee from the first article has his own post: So what's the point? Windows - including Win8 - is, and always has been, very friendly to "makers". Most of cross toolchains happily run on Windows. Toolkits for Android, ARM, JTAG and so on. The real enablers for "makers" are companies that specialize in interface solutions, such as FTDI. They provide complete solution on the host side (drivers, API libs) so that "makers" don't have to bother.Windows *RT* is locked down, but this is apparently what it is made for. So just don't use it for developer work. There are plenty of other Win7 and Win8 machines. -- pa Has there been any update on this? I have several very expensive data acquisition devices (spectrophotometers, colorimeters etc) that I would love to write Metro tablet apps for. Not having any support for Serial protocols over USB or BT seems absolutely ridiculous to me. Like others have pointed out in the Maker Blog this is not about having a physical serial port on a device - it's about supporting a protocol that is used by a myriad of embedded applications in engineering, educational, medical, military and many other fields. I really don't understand what Microsoft is thinking here. 
Making it a lower priority (as compared to other popular features on modern tablet devices) is ok, but not giving it any consideration at all, and without providing any reasonable alternatives is very disappointing. I have been reading through the Device Access API / Custom Driver Access Samples on MSDN all evening. Is that the proposed solution you are referring to? From what I understood, this approach will not work on WinRT, because no device driver installations are permitted, or is that not the case? I found this article: I'm not fully clear on whether this also applies to Metro tablet apps, but assuming that it does, it still means that each application would have to supply its own driver. Also, given that older devices are no longer supported by the manufacturers, and device documentation is generally not available and requires reverse engineering, it will be nearly impossible to make this work in any reasonable amount of time. - Edited by gmpreussner Sunday, January 20, 2013 5:37 AM - Correct , my approach doesn't work on windows rt. That is why I mentioned x86 and x64. You could create a proxy root enumerated driver which just forwards Io to the real device. Not too complicated and it lets you reuse all of the existing drivers. The proxy driver can be umdf so all if your development could be in user mode. d -- This posting is provided "AS IS" with no warranties, and confers no rights. Would such approach still pass WACK and Store Certification? I'm currently looking into a solution using an IP-based service which can run on an external system for the Store App to consume. I assume that would pass certification for sure, right? It's just a web-interface on an arbitrary system. But what would happen if the end user decided to install that service on his local system and enables the local loopback (intended for testing only)? I can't control what my end users do to their local system. 
If they like to disable the lookback restriction manually, i'm not going to stop them... Or would configurability of the web-service address violate store certification and is a fixed service address required? - I don't know the specifics to your scenario. the scenario I proposed should pass the WACK if you use the modern brokered interfaces to talk to open a handle to the virtual device and used them to send IOCTLs to the driver. d -- This posting is provided "AS IS" with no warranties, and confers no rights. Hello, just my two cents on this discussion which I followed for some time. Fully understand the need to have serial communication working on Windows 8 for industrial devices, for sure. Here is what I see may happen if you really want to have Windows 8 device and get it working with your industrial equipment over serial link Windows 8 on x86 or X64 platform --------------------------------- comes with traditional COM1: type device driver and your application can access it in traditional API way (CreateFile() etc.). In addition for notebooks without serial hardware a usb-to-serial adapters are available and their manufacturer provides device drivers, digitally signed, which you can install on Windows 8, thus getting almost exactly the same case as on Windows 7 Windows 8 on ARM platform ----------------------------- comes without traditional COM1: device driver. It does have internal SoC type UARTs which do have device drivers but those are not accessible to applications. Common usb-to-serial adapters either don't have drivers for ARM or Windows RT won't let you install those. Workaround ? Let's say you are in a need to make this happen. Lot's money are on the table as the cost of losing your equipment to work is huge. Then you contact tablet manufacturer, say Nvidia, and make them to produce small number of custom versions of Windows RT tablets for you. 
In those special industrial-geared tablets Nvidia will work with FTDI and Microsoft to allow incorporation of usb-to-serial drivers in Windows rt so that you can get your communication to equipment working over serial on Windows rt. It is possible I am sure but the effort to make this happen may be non reasonable. Hope this helps - Edited by SergeiR [MCTS] Tuesday, January 22, 2013 6:04 PM typo corrected Magnificently still missing the point entirely. We want to develop Store Apps that can use a serial interface, either via USB or Bluetooth. It doesn't have to be ARM per se. These drivers are readily available for the x86/x64 platform, but for reasons nobody can explain or understand, these drivers cannot be addressed from within the Store App sandbox. Reasons mentioned so far are mostly incorrect. For example the fact that an App shouldn't be dependent of external hardware. If that was true, even OneNote shouldn't be in the store, since it depends on my physical stylus. And the app could (if well designed) be usable without the hardware too, but also use hardware when available. Another reason that's mentioned is the fact that Apps could suspend and resume, and the hardware shouldn't depend on the App. True too, but that's true for any app using the camera too. And again: if well designed that's no problem again. Skype is still useful if you don't have a cam, and the cam can be used without Skype too... Contacting a tablet manufacturer wouldn't help at all - and would be out of scope for nearly everyone. Even more, there is no way nVidia can help me with my serial over Bluetooth connection. What we need is a legitimate workaround for a horrible limitation. Either be able to use such drivers that already exist and work on the desktop, or get a legitimate way to contact a desktop service that can work as a message broker. Of course only with the users consent. 
- Edited by Joep Beusenberg Tuesday, January 22, 2013 6:34 PM linebreaks went missing But can you tell, why you want your apps to be Metro? Is it just matter of fashion or a genuine business/process need? On a normal Win8 (not RT), normal desktop apps can use touch, the stylus and look "immersive". There is a lot of ways to sell desktop apps. Please note that serial devices likely will keep the system permanently powered so the power saving won't be as advertised. For those who hoped RT is much more secure the recent pack of patches showed that is false (not a surprise). (Or well, go ahead, make store apps for ARM tablets, serial hw and industrial use. But it won't be the Microsoft's store.) -- pa We want our apps to be genuine full screen 'Modern UI' Apps to enable our end users to use 'Modern UI' Apps. I know it sounds stupid, but it's actually true. We want the simple App-type user experience. The end user should know about his work process, not his pc/tablet. That's what the 'Modern UI' is all about: less chrome, more content. Our people work out in the field in any outdoor condition possible (sub-zero this week, hot in the summer, dusty on construction sites, being shaken on an off-shore oil rig). In these conditions, they want their device be as portable as possible - 10" or 11" is perfect - and they want to be able to control their tablet with one hand with ease, having frozen and smudgy fingers - no pixel perfect stylus tapping possible. Metro or 'Modern UI' is perfect for that. Full screen experience, no task bar, no using a stylus to exactly tap on the tiny buttons, etc. But still having a decent device that enables them to use e-mail, domain specific software, etc. We are currently developing the app as a desktop app that mimics the RT-style. Full-screen, app-bar-like buttonstrips. Multi-touch mapping control. But we keep having to build workaround over workaround. 
For example: when a user enters or leaves a text box, we want the on-screen keyboard to appear and disappear. That happens automagically in RT, but not in the desktop. I know there are means to show the KB, but you can't make it disappear or make it adapt to it's context (url entry, number entry, address entry). When a file dialog is invoked, the file name textbox is highlighted, but no keyboard appears. And the user can't invoke it, because we hid the taskbar. And if we invoke it in advance and the user closes it, he can't get it back. When scrolling in 'Modern UI', the content in the scrollviewer will nudge further and dog-ear back as expected. In desktop mode however, the whole window gets nudged. With a lot of (hardly documented) work that can be overriden, but even then for some reason the taskbar appears over the application. Only to disappear on release. When an application overlays full screen, the user can still swipe from the right to get to the RT start menu, but there is no intuitive way to return to the desktop. When the desktop is selected, the app reappears. That said, we want our app to be available on ARM RT tablets too. We know the serial comms are out of the window (no pun intended) on ARM, but nevertheless it's a useful app stand-alone too. When ran on x86 - on slighty heavier hardware - the same software can be extended to use external sensors and controllers. I've tried, but it's really impossible to cross-develop an app in both WinRT and WPF. The languages are the same, but the framework is just too different.... Hi All, I originally posted this thread over a year ago as I was really excited about the new platform ( WinRT ) and was curious in what the solution Microsoft was going to provide for serial communications. I'm thankful for all the feedback and support ( good and bad ) and hope developers continue to request this to be added to WinRT in the future. 
I also want to say, that I'm a Microsoft faithful and believe in .NET and their products... including the new WinRT. I will not rant or threaten to leave to go to another platform as some have suggested because this feature is not available today. However, I'm very disappointed as I tried to explain as much as possible what my needs are and I've attempted to contact Microsoft in various ways to provide them with my use-cases. I understand serial communications ( a.k.a SerialPort Class ) is still available using the desktop mode and that is what my company is restricted to using when customers ask for applications that communicate with serial devices. I also have to recommend other platforms as they support this... but it is usually my last options. As I stated in my earlier threads, I have no problem using the desktop mode to create a WPF app to perform serial communications. My question was based on a couple of things. 1.) If Microsoft is wanting developers to create apps using their new "killer" platform, then there needs to be a solution to allow us to communicate serially. I don't care if it is a physical DB-9 port or a USB port. There needs to be a solution similar to the SerialPort Class for .NET and should be something that is consistant between platforms ( ARM, x84, 64, etc. ) and work across the various related devices hardware vendors create ( FTDI, Cypress, etc. ). 2.) I keep hearing that WinRT is the future and the desktop is going away. I understand the desktop will be here for years to come, but again... we need a solution using technology that is going to replace the desktop. I've seen people post questions on why we could not use an alternative to the serial port ( DB-9 or USB ). In my industry ( building automation ), there is several legacy devices that will need to be supported for years to come. If there is a device that can be a substitute ( IP/Serial Port ) that uses a trivial API and easy to setup with WinRT... please let me know. 
I don't believe that should be the solution, especially since there will be many device form factors and most will include USB ports that could provide a method of reading and writing data serially. :( I've also watched videos based on driver development and peripheral makers and it appears somewhat confusing. They stated that if you create a device and drivers to run on WinRT, then your WinRT application would have to include a reference id or similar before publishing to the store. This approach seems a little odd as I could not use a generic serial dongle ( like one provided by FTDI today ) to use between different apps that perform serial communications. I might be misunderstanding this concept, but I read this in more than one place related to this restriction. I hope that Microsoft is listening to these types of requests from concerned customers/developers. I've had patience on this topic as we still have alternatives today and WinRT is at revision 1. This concerned developer is just asking for a little bit of information on what the story is going to be for the future ( WinRT that is ). Thanks for reading this far. +1. especially, as they ship Android oriented products for quite a while. It was the election day today - hope you've made a good choice ;) Pragmatic developers vote for flexibility and openness. -- pa We want it Metro because we want to support the consumer with whatever device they have. Yes, it is easy to do it with x86 hardware. All it takes is a $5 USB to RS232 converter. Unfortunately, x86 devices are considerably more expensive than RT devices. Also, RT devices have a considerably longer battery life. I am looking at a system where we need an 8 hour battery life in the field w/ no AC power available. Because we could not use the serial port, we have an ugly kludge. Instead of a USB converter, we have been forced to use a $50 WiFi to serial converter and communicate using UDP. All to send signals less then 3 feet. The code is complicated. 
Communications are subject to EFI. It is harder for the user to configure their system. Also, while they are using their tablet to control our hardware, they cannot talk to the Internet! (I admit that they often will not have WiFi connectivity anyways.) I have yet to see any rational reason why the serial port is not supported. Why do you say that they "will keep the system permanently powered"? It is no more likely to keep the system powered than any other HID interface. When I use USB/RS232 adapters on existing x86 systems, it does not prevent them from sleeping or powering off. Why is it more secure? I would think that talking over a wire is infinitely more secure that using WiFi. Embedded systems and microcontrollers almost always have a serial interface. By stripping support for SerialIO from WindowsRT, Microsoft is cutting themselves off from a potentially lucrative market. My client was looking at OEMing WinRT tablets to go with their hardware. Without the serial port and the resulting kludge, that is no longer an option. I've tried, but it's really impossible to cross-develop an app in both WinRT and WPF. The languages are the same, but the framework is just too different.I agree completely! I, too, have tried to write applications for both WinRT and WPF. Why they couldn't make the framework the same is beyond me. They rewrote everything several times over, why couldn't they do it right? (Did anyone else try writing WinRT apps using Visual Studio 2010, VS2012 consumer preview, VS2012 Release Candidate, and finally VS2012 RTM? You had to throw out your old code and start over with each new version.)... For this very reason, since they could not make a simple, elegant solution using a WinRT device, my client now has teams working on iOS, MacOS, and Android solutions for their control interface. A year ago when we started this project, the other OS's were not in the picture. 
- I'm now finding there isn't any System.IO.Ports namespace, and with some further digging (this forum now included) it appears as though it's been left out on purpose? This is rather mind boggling as, although many of the actual "COM Ports" aren't included as hardware on our PC's, the communication is still essential. As "legacy" as the serial port may be, we use it frequently in the robotics field. I'm an Electromechanical Engineer by day, and "hobby robotisist" in the evenings. In the industrial automation field, most of our Allen Bradley hardware (both PLC's and our HMI's), weld controllers, and servo drives, still rely on COM Ports. This is also the case regarding my own robots' microcontrollers & CNC machines I use to create the structures. I don't know, but IMHO this move really cuts out a chunk of market potential... as well as limits future innovations whose dubbed "dated hardware" would benefit greatly form the allowance of these resources on the software development side. Thought I would share some news I came across for Windows Blue. I hope this is true. If so, it should fix what I've been asking for on this thread.." " Rumors and gossip... nothing serious. Here is the link, by the way. -- pa - I agree, but hope is all we have for now. We shall see at Build 2013. Justin Angel is usually close to what is expected. Here is the original post... Thank you Microsoft! Finally, the new serial framework documentation! Still not clear whether devices based on this framework are compatible with Metro apps. /* my guess is not, simply because none of new IOCTLs allow sending and receiving data, and metro apps cannot use Read and Write */ -- pa Again, thank you Microsoft for listening! See 8:40 mark on
STRRCHR(3)                 BSD Programmer's Manual                 STRRCHR(3)

NAME
     strrchr, rindex - locate last occurrence of a character in a string

SYNOPSIS
     #include <string.h>

     char *
     strrchr(const char *s, int c);

     char *
     rindex(const char *s, int c);

DESCRIPTION
     The strrchr() function locates the last occurrence of the character c
     in the string s. The terminating NUL character is considered part of
     the string. If c is '\0', strrchr() locates the terminating '\0'.

     The rindex() function is an old synonym for strrchr().

RETURN VALUES
     The strrchr() function returns a pointer to the located character or
     NULL if the character does not appear in the string.

EXAMPLES
     After the following call to strrchr(), p will point to the string
     "obar":

           char *p;
           char *s = "foobar";

           p = strrchr(s, 'o');

SEE ALSO
     memchr(3), strchr(3), strcspn(3), strpbrk(3), strsep(3), strspn(3),
     strstr(3), strtok(3)

STANDARDS
     The strrchr() function conforms to ANSI X3.159-1989 ("ANSI C").

HISTORY
     The rindex() function is deprecated and shouldn't be used in new code.
Hottest Forum Q&A on CodeGuru - December 29th

Introduction:

Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:

- How do I port VC++ code to Linux?
- How can I use STL with Linux?
- How do I make a function that allocates memory for you?
- How can I create a 3D vector?
- How do I return a vector from a function?

vir_123 wrote some code in VC++. Now, he needs to convert that code for a Linux OS. How can he do that?

    I have developed VC++ applications on the Windows platform. I want to convert them to the Linux platform. Is there any cross-compiler method to convert them to the Linux platform? Please suggest an easy method to do so.

Several questions come to mind before you can convert any VC++ code to a Linux OS.

- Is your program graphical?
- Do you use MFC?
- Do you use ATL?
- Do you use straight WIN32 functions?
- On what system components does the program rely?
- What does the program do?

Porting an application from VC++ can be easy, but you have to be careful that you use APIs that can be run on both OSes (Windows as well as Linux). If you still want to use Microsoft APIs, you can try WINE (a Windows emulator). Or, you should use wxWindows, a cross-platform GUI API. See the Web site for more info.

This is another Linux question. Here, KnNeeded needs to run his simple "hello world" program on a Linux system. But, unfortunately, he did not know how to use the STL in Linux. Do you?

    I am trying to port my current C++ and C skills to Linux programming. I have successfully created the standard Hello World starting project in C for Linux, using both files and standard output. I am trying to do something similar using the STL. Does anyone know of a good reference on the C++ libraries in Linux? Is this my problem or does Linux not support STL? I am new to Linux programming, so at the moment I am only really looking for a good reference site. Thanks in advance.

You can download the STLport here for free. An installation instruction is included.

yiannakop has some problems with his function. He is trying to allocate memory in a function; it should not be that difficult. But the problem is in passing the parameters to the function. Here is his problem:

    Suppose I want to implement a function like this:

        void function_name(int length, short *array)
        {
            array = (short*)malloc(sizeof(short)*length);
            // do something
        }

    This, unfortunately, doesn't work. To make it work, I must place the allocation line ( array = (short*)malloc(sizeof(short)*length) ) in my main(), just before calling the function. Why is this happening?

MrViggy presented the solution very well. Here is what he says about that problem: if you are trying to get the function to allocate memory for you, you either need to pass the array pointer by reference, or pass in a pointer to the pointer to the array. Currently, you are passing the array pointer by value; therefore, the function will not be able to modify it.

Pass by reference:

    void function_name(int length, short* &array)
    {
        array = (short*)malloc(sizeof(short)*length);
    }

Or, pointer to pointer:

    void function_name(int length, short **array)
    {
        *array = (short*)malloc(sizeof(short)*length);
    }

    int main()
    {
        short *myShort;
        function_name(3, &myShort);
    }

AfricanDude knows how to create a 2D vector, but unfortunately, he has some problems with creating a 3D vector.

    I am trying to use the STL vector multi-dimensionally. I know how to do 2D, but I want to use 3D. This is what I have tried:

        vector < vector <vector <int> >> m(3, vector< vector<int> >(4,5));

    I also tried the following:

        vector < vector <vector <int> >> m(3, vector<int> (5, vector<int> (4)));

    I get a similar error, shown below. Can you please shed some light? Also, is there another way to avoid using a 3D vector if I want to use it for the following?

        int i,j,k;
        float a[i][j][k]=value;

    Of course, I would like to vary the size of the array dynamically.

        c:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\include\vector(357):
        error C2664: 'std::vector<_Ty>::_Construct_n' : cannot convert parameter 2
        from 'int' to 'const std::vector<_Ty> &'
            with
            [
                _Ty=std::vector<int>
            ]
            and
            [
                _Ty=int
            ]

The answer was provided by Paul McKenzie: use typedefs.

    #include <vector>

    // Too much work!!
    std::vector < std::vector <std::vector <int> > > m(3,
        std::vector< std::vector<int> >(4, std::vector<int>(4,0)));

    // Easier to maintain
    typedef std::vector<int> Int1D;
    typedef std::vector<Int1D> Int2D;
    typedef std::vector<Int2D> Int3D;

    int main()
    {
        Int3D My3DArray(10, Int2D(10, Int1D(10,0)));
    }

The reason for the typedefs is that, at some point, you may need an iterator. When you define the iterator, what's easier?

    std::vector< std::vector< std::vector<int> > >::iterator it;

or

    Int3D::iterator it;

halmark6Z is new to C++. His background is actually Java. But now he needs to write some C++ code in which he wants to return a vector from a function.

    I'm rather new to C++ (I'm changing from Java to C++), so please bear with me. I have a question about STL and returning a vector array from a static method. How do I do it correctly, so that the function constructs a vector that can be returned to the caller? I know if I create a local object from a class and return its reference, it will point to null (if I'm right) after the function call. So, the object should be created with the new operator. But, how do I do it with a template call? I just get errors when I try to compile. It should go something like this (please, do correct my syntax. Pointers and references are used very wrong, I think):

        static &vector<MyClass> loadFromFile()
        {
            vector<MyClass> *data = NULL;
            try
            {
                // load from file and populate the vector
            }
            catch(...)
            {
            }
            return &data;
        }

The perfect answer is again provided by Paul McKenzie. A vector has proper copy semantics, so there is no need to use references or pointers.

    #include <vector>

    class MyClass
    {
    };

    static std::vector<MyClass> loadFromFile(); // if you really need
                                                // a static function
    std::vector<MyClass> loadFromFile()
    {
        std::vector<MyClass> data;
        try
        {
            // load from file and populate the vector
        }
        catch(...)
        {
        }
        return data;
    }

    int main()
    {
        std::vector<MyClass> M;
        M = loadFromFile();
    }

The code is much easier than you thought it would be.

    "I know if I create a local object from a class and return its reference..."

Return it by value (as my code does above), not by reference.

    "It will point to null (if I'm right) after the function call. So, the object should be created with the new operator."

You're right, you do come from a Java background. You do not need to create an object with "new" in C++.
CC-MAIN-2015-48
refinedweb
1,198
65.52
I've tried using binary builds from here: and they give initial impression of working, but when I try to create a connection everything falls apart (ST2 crashes). File: pyzmq-2.1.7.1.win-amd64-py2.6.msi Sublime: win64, 2177 Plugin code: - Code: Select all import zmq c = zmq.Context() s = c.socket(zmq.PUB) s.bind("tcp://*:5555") This works, but as soon as I do this on the other side, everything just blows up. - Code: Select all c = zmq.Context() s = c.socket(zmq.SUB) s.connect("tcp://127.0.0.1:5555") I have the .mdmp / appcompat / WERInternalMetadata saved if necessary. EDIT: Actually, it sometimes crashes on s.bind("tcp://*:5555") as well.
http://www.sublimetext.com/forum/viewtopic.php?f=3&t=5244
CC-MAIN-2014-10
refinedweb
118
62.64
dpns_setatime - set last access time for a regular file to the current time #include <sys/types.h> #include "dpns_api.h" int dpns_setatime (const char *path, struct dpns_fileid *file_uniqueid) dpns_setatime sets the last access time for a regular file to the current time. This function should only be called by the stager after the file has been successfully recalled and every time a stagein requests this file, even if the file already resides in the disk pool. The file can be identified by path name or by file_uniqueid. If both are specified, file_uniqueid is used. file does not exist or is a null pathname. EACCES Search permission is denied on a component of the path prefix or the caller effective user ID does not match the owner ID of the file or read permission on the file itself is denied. EFAULT path and file_uniqueid are NULL pointers. ENOTDIR A component of path prefix is not a directory. EISDIR The file is not a regular_stat(3), dpns_statg(3) LCG Grid Deployment Team
http://huge-man-linux.net/man3/dpns_setatime.html
CC-MAIN-2019-04
refinedweb
170
64.71
Key Takeaways - The .NET CLI includes a template engine, which can create new projects and project items directly from the command line. This is the “dotnet new” command. - The default set of templates covers the essential project and file types you need for default console and ASP.NET-based apps, and test projects. - Custom templates can create more interesting or bespoke projects and project items, and can be distributed and installed from NuGet packages or directly from the file system. - Custom templates can be very simple or much more complex, with substitution variables, command-line parameters and conditional inclusion of files or even lines of code. - Maintenance and testing of custom templates is easy, even with conditional code, by making sure a project template is always a runnable project.. The tooling story changed dramatically with .NET Core, because of its serious emphasis on the command line. This is a great fit for .NET Core's cross-platform, tooling-agnostic image. The dotnet CLI is the entry point to all of this goodness, and it contains many different commands for creating, editing, building, and packaging .NET Core projects. Here, we’ll focus on just one aspect of the dotnet CLI — the dotnet new command. This command is mainly used to create projects, and you can often create a simple boilerplate project and then forget about it. We’ll look at how to get the most out of this command, by passing arguments to modify the generated projects, and seeing how we can use the command to create files as well as projects. We'll also see that this tool is a full-fledged template engine, and can be used to install custom templates, as well as make personal templates. dotnet new in action So how do you use dotnet new? Let's start at the beginning and work up to the most interesting stuff. 
To create a simple console application, start up the command line, change directory to a new empty folder (an important step, explained below), and call dotnet new console: > dotnet new console The template "Console Application" was created successfully. Processing post-creation actions... Running 'dotnet restore' on /Users/matt/demo/MyNewApp/MyNewApp.csproj... Restoring packages for /Users/matt/demo/MyNewApp/MyNewApp.csproj... Generating MSBuild file /Users/matt/demo/MyNewApp/obj/MyNewApp.csproj.nuget.g.props. Generating MSBuild file /Users/matt/demo/MyNewApp/obj/MyNewApp.csproj.nuget.g.targets. Restore completed in 234.92 ms for /Users/matt/demo/MyNewApp/MyNewApp.csproj. Restore succeeded. As I mentioned before, make sure you're in a new, empty folder first. By default, dotnet new will create files in the current folder and will not delete anything that's already there. You can make it create a new folder by using the --output option. For example, you could create a project in a new folder called ConsoleApp42 by typing: > dotnet new console --output ConsoleApp42 The template "Console Application" was created successfully. Processing post-creation actions... Running 'dotnet restore' on ConsoleApp42/ConsoleApp42.csproj... Restoring packages for /Users/matt/demo/ConsoleApp42/ConsoleApp42.csproj... Generating MSBuild file /Users/matt/demo/ConsoleApp42/obj/ConsoleApp42.csproj.nuget.g.props. Generating MSBuild file /Users/matt/demo/ConsoleApp42/obj/ConsoleApp42.csproj.nuget.g.targets. Restore completed in 309.99 ms for /Users/matt/demo/ConsoleApp42/ConsoleApp42.csproj. Restore succeeded. Looking at what you’ve created At this point, dotnet new has created a new console project and restored NuGet packages — it's all ready to run. But let's take a look at what's been created: > ls ConsoleApp42/ ConsoleApp42.csproj Program.cs obj/ As you can see, you now have a project file based on the name of the output folder. 
If you wish, you could use the --name parameter to specify a different name: dotnet new console --output ConsoleApp42 --name MyNewApp This would create the project files in a folder called ConsoleApp42, and would use MyNewApp as the name of the console application being created — you'd get MyNewApp.csproj. If you take a look at Program.cs, you'll also see that the name parameter is used to update the namespace: using System; namespace ConsoleApp42 { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } } Prepping for another project But, if you take a look at the folder structure of the project you've just created, you might spot something missing — there's no solution file. You've only got a single project, and while this works fine with dotnet run, it will cause problems when you want to add another project. You can easily create one: dotnet new sln This will create a new, empty solution file. Then, it’s another step to add a project to it. If you created your solution in our demo's root folder, it would look like: dotnet sln add ConsoleApp42/MyApp.sln You can also use the dotnet sln command to remove or list projects in a solution. If you want to add or remove references to a project, you need to use the dotnet add command. I suggest reading Jeremy Miller's article on the extensible dotnet CLI for more details, or type dotnet help sln or dotnet help add. Adding another project is very easy also, but you must do it in this two-step fashion — create, then add. For example, you could add a test project to your solution: dotnet new nunit --output Tests --name MyAppTests dotnet sln add Tests/MyAppTests.csproj Adding new files to a project Adding new files to a project is even easier, mostly thanks to the improvements .NET Core made to MSBuild files. You no longer need to explicitly list C# files in the .csproj file, because they're automatically picked up through wildcards. 
You just need to create a file in the folder and it will automatically become part of the project. You can create the file manually, but you can also use dotnet new to provide a template file. For example, you could add a test file to your test project using the nunit-test item template: dotnet new nunit-test --output Tests --name MyNewTests Speaking of templates, how do you know what templates are available? How can you tell the difference between a project template and an item template? That's a job for dotnet new --list, which outputs a list of available templates: This lists all templates. You can use the --type parameter to filter this down, using --type project, --type item, or --type other. Project templates will create a project, item templates create a single file, while other is really only useful for the sln template to create a solution file. The short name (2nd column above) in this list is the name you use in the call to dotnet new (e.g. dotnet new console, dotnet new classlib, dotnet new mvc, etc). Some templates support multiple languages, with the default shown in square brackets (spoiler — it's always C#). You can choose a different language with the --language option, but be careful of the # symbol! Some command-line shells treat this as a comment character and parsing can fail with --language F#. This can be handled by quoting the value - "--language F#". Finally, each template has one or more tags. These are a way of classifying templates, but aren't currently used as part of the command-line tooling. However, they could be used for grouping or filtering by other hosts. Yes, that's right, the dotnet new template engine can be used in other hosts, such as IDEs. More on that later. Customising templates So far, we've just looked at a very simple Hello World console app, and added some tests. Let's take a look at something more interesting. Say you want to create a new ASP.NET project. 
Looking at the list of templates above, you have a few choices. You could create an empty web project, an MVC project, a project with Angular, or one with React.js. But these are fairly rigid templates. Can you customise these at all? The good news — yes, you can. Templates can take in parameters that change what's being generated. The -- help command will provide details on the parameters this template understands. Let's start with a simple example: > dotnet new classlib --help Class library (C#) Author: Microsoft Description: A project for creating a class library that targets .NET Standard or .NET Core Options: -f|--framework The target framework for the project. netcoreapp2.1 - Target netcoreapp2.1 netstandard2.0 - Target netstandard2.0 Default: netstandard2.0 --no-restore If specified, skips the automatic restore of the project on create. bool - Optional Default: false / (*) true * Indicates the value used if the switch is provided without a value. Here you can see that the classlib template has two parameters: --framework--framework to specify what target framework is written to the project file; and --no-restore, to control if NuGet restore is performed when the project is created.to specify what target framework is written to the project file; and --no-restore, to control if NuGet restore is performed when the project is created. dotnet new classlib --framework netcoreapp2.1 --no-restore The web templates have similar parameters, but there are many more of them than we have space to list here. Try dotnet new mvc --help to get an idea of what's available. There are parameters to decide what type of authentication you want, whether to disable HTTPS or not, whether to use LocalDB instead of SQLite, and so on. Each of these parameters changes how the template code is generated, either replacing content in files or including/excluding files as appropriate. 
While we're talking about help, here are two very useful commands: dotnet help new, which opens a web page on the dotnet new command itself; and dotnet new {template} –help, which shows help for the named template and its parameters. Adding customised templates The real power of the dotnet new command is the ability to add new, custom templates. Even better, templates can be distributed and shared, simply by packing them into a NuGet package and uploading to nuget.org. This makes it very easy to get started with a framework, or automate the boilerplate of creating new projects or project items. To add a new custom template, use the dotnet new --install {template} command, passing in either the name of a NuGet package, or a file folder for a local template. But how do you find new templates? One way is to search for the framework you're using and see if templates are available, but that's a bit hit and miss. Fortunately, you can instead visit dotnetnew.azurewebsites.net and search for templates by keywords. There are over 500 templates tracked on the site, which makes it a good discovery resource. For example, you could install a set of templates for AWS Lambda projects with dotnet new --install Amazon.Lambda.Templates. One very nice feature of installing templates via NuGet packages is that each package can contain more than one template. This AWS Lambda package contains 28 different templates, including a tutorial project. Of course, if you no longer want to use the template, simply uninstall it with dotnet new --uninstall {package}. The name passed here is the name of the installed template package, and if you're not sure of the name, simply run dotnet new --uninstall to get a list. Creating your own templates You can also create your own custom templates. These don't have to be for popular frameworks, but might be for internal or personal projects. 
Essentially, if you often find yourself creating a specific folder structure, set of references, or boilerplate files, consider creating project or item templates. Project templates are simply plain text files, including the .csproj files — there's no requirement that the generated templates are .NET Core specific, and they can be made to target any framework. It's very easy to create a new template, and templates are surprisingly easy to maintain.

Traditionally, templates that can perform text substitution use a special syntax, like $VARIABLE$ markers that will be replaced when the template is evaluated. Unfortunately, this is usually invalid syntax for the file type, which makes it impossible to run the project to test that the template is correct. This leads to bugs and slow iteration times, and basically, a bit of a maintenance headache. Fortunately, the designers of the template engine have thought about this, and come up with a much nicer way of working: running templates.

The idea is simple — the template is just plain text files. No special formats, no special markers. So, a C# file is always a valid C# file. If a template wishes to substitute some text, such as replacing a C# namespace for one that's based on the project name, this is handled with simple search and replace. For example, imagine we had a template that looked like this:

```cs
namespace RootNamespace
{
    public static class Main
    {
        // ...
    }
}
```

The template's JSON configuration defines a symbol that will replace the namespace. The symbol value would be based on the project name, possibly with a built-in transform applied to make sure it only contains valid characters. The symbol would also define the text it was replacing — "RootNamespace." When the template engine processes each file, if it sees "RootNamespace," it will replace it with the symbol value.
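To make the mechanism concrete, here is a small toy model in Python of the kind of symbol substitution described above. This is not the real dotnet/templating engine (which is written in C#); the symbol value "MyApp.Core" is an invented example of what would normally be derived from the project name.

```python
# Toy illustration of template symbol substitution. A "symbol" maps the
# literal source text to its replacement value, so the template file on
# disk stays valid, compilable C# with no special marker syntax.

def apply_symbols(source, symbols):
    """Replace each symbol's source text with its resolved value."""
    for source_text, value in symbols.items():
        source = source.replace(source_text, value)
    return source

template = """namespace RootNamespace
{
    public static class Main
    {
        // ...
    }
}"""

# "RootNamespace" is the text being replaced; the value would normally be
# derived from the project name given on the command line.
symbols = {"RootNamespace": "MyApp.Core"}

print(apply_symbols(template, symbols))
```

The point of the design is visible even in this sketch: the template on disk is ordinary C# that compiles as-is, and evaluation is plain text replacement rather than a special `$VARIABLE$` syntax.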
This simple search and replace is usually driven by a symbol that comes from a parameter, such as the template name, the output name, or an actual custom parameter. But it's also possible to create symbols based on generators, to create GUIDs, random numbers, the current timestamp, and so on.

But no template is complete without conditional code — something that's added or removed based on a parameter. How does dotnet new handle this and keep "running templates" as an option? This is actually handled on a per-file-type basis, with some default config built in, and the ability to define your own style for unknown file formats. Essentially, the idea is to use the file-specific pre-processor (such as #if for C# or C++) for those file types that support it, and specially formatted comments for those that don't, such as JSON.

```cs
public class HomeController : Controller
{
    public IActionResult Index() => View();

    public IActionResult About()
    {
        ViewData["Message"] = "Your application description page.";
        return View();
    }

#if (EnableContactPage)
    public IActionResult Contact()
    {
        ViewData["Message"] = "Your contact page.";
        return View();
    }
#endif

    public IActionResult Error() => View();
}
```

All of the metadata for a template lives in a template.json file. This includes the template's short name, description, author, tags, and supported language. Because a template can only target a single language, it also includes a "group identity" option, which multiple templates can specify, one for each language. The metadata file can also include optional information about source and target of files to be copied or renamed, conditional file copies, substitution symbols, command-line parameters and post-creation actions such as package restore. But by default, the template engine will copy and process all files in the template's file structure.
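Going back to the conditional processing described above, a toy Python model can show the idea: keep or drop the region between `#if (Symbol)` and `#endif` depending on a parameter, and strip the directives themselves from the output. This is only a sketch of the concept, not the real engine's implementation, and the minimal C# sample below is invented for illustration.

```python
import re

# Toy model of conditional template processing for C#-style files: the
# region between "#if (Symbol)" and "#endif" is kept or dropped based on
# the symbol's value, and the directives are removed from the output.

def process_conditionals(text, symbols):
    lines_out = []
    keep = [True]  # a stack, to tolerate simple nesting
    for line in text.splitlines():
        m = re.match(r"\s*#if \((\w+)\)", line)
        if m:
            keep.append(keep[-1] and symbols.get(m.group(1), False))
        elif re.match(r"\s*#endif", line):
            keep.pop()
        elif keep[-1]:
            lines_out.append(line)
    return "\n".join(lines_out)

source = """public IActionResult Index() => View();
#if (EnableContactPage)
public IActionResult Contact() => View();
#endif
public IActionResult Error() => View();"""

print(process_conditionals(source, {"EnableContactPage": False}))
```

Because the directives are legal C# pre-processor syntax, the template itself still compiles, which is exactly what keeps "running templates" possible for conditional code.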
```json
{
    "author": "Matt Ellis",
    "classifications": [ "Hello world" ],
    "name": "Hello world template",
    "identity": "Matt.HelloWorldTemplate.CSharp",
    "shortName": "helloworld",
    "guids": [ "d23e3131-49a0-4930-9870-695e3569f8e6" ],
    "sourceName": "MyTemplate"
}
```

The template.json file must be placed at the root of the template's folder structure, in a folder called .template.config. The rest of the folder structure is entirely up to you — the template engine will keep the same folder structure when evaluating the template. In other words, if you add a README.md file to the root of your template's folder structure, then the template engine will create a README.md in the root of the output folder when you call dotnet new. So, if you use --output MyApp, you will get a file called MyApp/README.md.

```
> tree -a
.
├── .template.config
│   └── template.json
├── MyTemplate.csproj
├── Program.cs
└── Properties
    └── AssemblyInfo.cs

2 directories, 4 files
```

To install and test your template, simply call dotnet new --install {template} as you would to install a custom template, but this time, pass in the path to the root of the template folder structure. To uninstall, use dotnet new --uninstall {template}. Again, if you're not sure of what to pass, use `dotnet new --uninstall` to get a full list.

Distributing your templates

Once you're ready to distribute your template, you can pack it into a NuGet package and upload to nuget.org.
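Stepping back to the folder listing above: because template.json declares "MyTemplate" as the sourceName, that rename applies to file and folder names as well as file contents, while the folder structure itself is preserved. A hypothetical sketch of that path mapping (an illustration of the idea, not the engine's actual code):

```python
# Toy sketch of how "sourceName" renaming could apply to a template's
# file paths: the structure is preserved, and any path segment containing
# the sourceName ("MyTemplate") is rewritten with the output name the
# user chose (e.g. --output MyApp).

def map_output_paths(template_paths, source_name, output_name):
    return [p.replace(source_name, output_name) for p in template_paths]

# Files under .template.config/ are configuration and are not copied
# to the output, so they are omitted here.
template_paths = [
    "MyTemplate.csproj",
    "Program.cs",
    "Properties/AssemblyInfo.cs",
]

print(map_output_paths(template_paths, "MyTemplate", "MyApp"))
```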
You'll need to create a .nuspec file as normal, but with two slight tweaks: add a packageType element and set the name attribute to "Template," then make sure the template folder structure is copied into a folder called "content".

```xml
<package>
  <metadata>
    <id>MattDemo.HelloWorldTemplate</id>
    <version>1.0</version>
    <authors>Matt Ellis</authors>
    <description>Hello World template</description>
    <packageTypes>
      <packageType name="Template" />
    </packageTypes>
  </metadata>
  <files>
    <file src=".template.config/template.json" target="content/.template.config" />
    <file src="MyTemplate.csproj" target="content/" />
    <file src="Program.cs" target="content/" />
    <file src="Properties/*" target="content/Properties/" />
  </files>
</package>
```

Additionally, it's possible to include multiple templates in a single package — simply create multiple folders under "content" and add a .template.config/template.json for each template. There are many more options and capabilities in the template.json file, but covering them all is beyond the scope of this article. But, based on all we've covered here, you can see that the template engine is very powerful, flexible, and yet fairly straightforward to use. Please check out the Microsoft docs site as well as the wiki for the dotnet/templating GitHub site.

The Template Engine

One of the most interesting things about dotnet new is that it's designed to be used from multiple hosts. The dotnet new CLI tool is simply one host — the template engine itself can be used as an API from other applications. This is great for those of us who prefer to work with an IDE instead of the command line, but still want to be able to easily add custom project templates, something that isn't always easy with an IDE. We can see this in action in JetBrains Rider. The New Project dialog is powered by the template engine APIs, listing all the available templates, even custom templates. When the user wishes to create a project, the template engine is used to generate the files.
If you look closely, you'll see that Rider has more templates than the .NET CLI. This is because Rider ships extra templates to support .NET Framework and Xamarin projects. The template engine API allows hosts to install templates to a custom location, and templates can be listed from both locations, meaning Rider will show custom templates installed by dotnet new --install as well as templates installed via the Install button in the new project dialog's "More templates" page. Once reloaded, the new template is shown in the list, just like all the others.

New, custom projects with ease

The dotnet new command makes it easy to create new projects and project items. The default set of templates will get you started with building .NET Core applications, either command line or ASP.NET based, and will help create test projects and target other .NET languages. Custom templates can be easily installed to create projects with other requirements, such as a different folder structure or framework dependencies. And the format for custom templates makes it easy to create your own, taking advantage of substitution variables and conditional code, but still keeping the template project compilable and maintainable. Together with the dotnet sln command, as well as the other extensible dotnet CLI commands, the new template engine makes it easy to create and manage projects, project items and solutions consistently, cross platform and directly from the command line.

About the Author

Matt Ellis is a developer advocate at JetBrains. He has spent over 20 years shipping software in various industries and currently works with IDEs and development tools, having fun with abstract syntax trees and source code analysis. He also works on the Unity support in Rider.
https://www.infoq.com/articles/dotnet-core-template-engine/?itm_source=articles_about_dotnet-core-3-article-series&itm_medium=link&itm_campaign=dotnet-core-3-article-series
Portrait Of ICANN Chairwoman Esther Dyson

ContinuousPark writes "The NY Times has an article on Esther Dyson's difficulties heading ICANN, some of them deriving from her inability to do politics, a much needed thing when you have individual and civil interests on one side and huge commercial ones on the other. Although the article praises her enormous intellectual capacity, it also has EFF's Mike Godwin saying: 'I think that there is a dimension of being a political being that involves going out and getting hands on and dealing with individuals. I don't think she is terribly comfortable with that. I think she is democratic in principle but not entirely democratic in practice.' Is Dyson recklessly ignoring politics or is she maybe redefining them?"

For a sarcastic fairy-talish take on this (Score:1)

To quote: "And you have to stop meeting behind closed doors, too. It's dark in there and there might be spiders."

Unwillingness to kiss ass... (Score:2)

As I see it, unwillingness to kiss ass is to be applauded, not snubbed. The oh-so-wise reporter must have given the corporate lobbyists a hell of a laugh by swallowing it all up and wiping their asses afterwards. There are people who are excellent administrators because of their ability to focus on the important and leave the snivelling rats to their fight over the crumbs; their excellence isn't necessarily appreciated by reporters like this but it is certainly regarded highly by their peers. Oh, the hell with it. I'm being too harsh. I apologise :)

Re:Empty Suit (Score:2)

ICANN needs someone like Jon Postel to run it. (Score:3)

Esther Dyson... (Score:2)

At any rate, it doesn't surprise me that she's not a very good leader of ICANN. She's probably so used to people automatically kowtowing to her that she has no patience for anyone questioning her dictates. It's not that she's "redefining politics" or any of that babble; she's simply not equipped to deal with people that question her marching orders.
It's a classic case of a spoiled child throwing tantrums, and the only reason she's being treated otherwise is that people are still afraid to say anything negative about her. -- "Today's children need guidance. And nothing gets a bigger response out of kids than a list of rules hung on a wall." Re:Yet more tired geek misogyny (Score:1) Perhaps you don't. I have seen quite some of these. In addition to Stallman's ever-popular T-Shirts and hair style (kind of like Samson's, I suppose), I have seen more than one article commenting on Linus' fine sense of style, in particular his excellent combination of white socks with sandals. Yeah, and? (Score:3) Personally, though, I don't care if the ICANN chairwoman is a human-o-phobe (although, anyone with a knowledge of history probably should be!), an isolationist, a survivalist, or a martian. What I care about is whether ICANN works, and it's shown little sign of it, so far. Re:ICANN needs someone like Jon Postel to run it. (Score:2) It'll ease the adoption of treaties governing Internet connectivity, and allowing countries that don't adopt and enforce uniform IP laws to be disconnected from the Internet. Otherwise Libya or someone would set up a bomb-proof data haven, fill it with servers and allow anyone to download Microsoft Office and MP3s. Are you thinking? (Score:1) Your arrogant and sarcastic tone isn't becoming. I'd suggest you drop it and stick to the facts. Isn't Esther a VC? (Score:1) Re:Another internet-era cliche... (Score:1) Result: analysts are only brought in when someone who can afford them doesn't have the majority of people behind him/her and wants to "change the tide". Soooo.. all this money is wasted and so-called analysts get rich as a natural by-product of management squabbling! I ignore these types of articles, for the most part.. the few times that I do notice them I either hold them to be contemptuous BS or completely irrelevant. 
If a company is so desperate that they actually take these people seriously (as opposed to the aforementioned 'political reasons') then they need to hire a few engineers NOW.

Re:political impotence is BAD? (Score:1)

political impotence is BAD? (Score:2)

News flash: Not serving in public office or contributing in any way to the day to day politic'ing that goes on in, say, your office is generally indicative of a normal, well-adjusted individual. It's only the weird ones that develop a taste for politics and pursue it, hence the popular quote "Every now and then an innocent man is sent to congress". Hell, I say we erect a statue for the lady - she resisted every attempt to become political in the face of incredible odds - a tribute to how sane this person is! =)

Re:Are you kidding? (Score:1)

Sigh...perhaps an isolated case. -buffy

Esther needs to shape up or get out (Score:1)

Dyson continually strikes me as someone who is used to getting her way and being in charge. Which is fine in and of itself, I suppose, but when trying to create a de facto non-governmental "government", being dismissive of (if not outright assholic in response to) people's concerns is the wrong temperament to have. Given what ICANN is at least in theory supposed to become, for it to be headed by someone who looks upon what is essentially her constituency with a kind of derision, and with a disdain for doing anything to reflect accountability to that constituency is boneheaded in the extreme. She can feel like she's right all the time if she wants to. But in what needs to be treated as a role representing the Internet community, she needs to learn how to not respond to questions or criticisms with comments like the one she made in Cairo, dismissing questions and criticism as mere annoying complaints distracting her from the real work at hand.
Sorry, Esther, but if you're representing a community of people, part of your real work is listening to their criticisms and even complaints with respect, not scorn and ridicule. If you can't do that, you need to step down.

ICANNot believe it's Esther! (Score:5)

Steve Gilliard has a piece over at Net Slaves [disobey.com], called "Who watches the Guardians." It is about Esther D. and paints an unlovely portrait. I suggest you all read it before your boners get too hard for her. I, for one, agree with Steve G. She's the last person I want running the ICANN. In fact, I want no one running it, 'cause you can't really trust anyone with that much power. Well, I might trust myself, but you probably wouldn't. :-) Esther has too much money involved and she's too cozy with Technology CEOs to be trusted with this position. Just read her book, RELEASE 2.0, and you'll read what I'm talking about. It's just filled with all the names of all these companies and CEOs that she's involved in. She's also not a very technical person, and you get that from her book, not just 'cause Steve G. says so.

Re:political impotence is BAD? (Score:5)

Re:"Redefining" politics (Score:2)

No, it's a floor wax AND a dessert topping!

Bad Mojo

What is your basis for this? (Score:2)

What is your basis for saying ICANN isn't working? Are you just assuming it isn't working because you've read a few headlines stating it isn't working? I'd like to hear a fully stated case supporting the statement "ICANN isn't working" including a definition of what ICANN's goals and working procedures are. Without supporting what I'm saying I could just as easily say that ICANN is doing a competent job considering the hostile and aggressive activities of those opposed to ICANN. A fundamental lack of cooperation and active "monkey wrenching" by other parties is slowing down ICANN progress, but ICANN itself is not to blame. As with all things, the truth lies somewhere in the middle.

Not so.
(Score:2)

Then again, I'm presuming that THIS is what you mean by 'being able to "politic"'... If you mean "politicing" to be explaining to people the facts and the consequences of choices, then I stand corrected - and Dyson is a great politician.

This is the norm for Esther (Score:5)

Case in point, back in 1997, Dyson, who has NO experience in the anti-spam community, shot her mouth off about the spam problem [zdnet.com] and proposed a "solution" to it. Had she actually spent more than two seconds thinking about her "solution", or actually posted it to one or more anti-spam forums [claws-and-paws.com] and asked for comments, she would have found that it's something that most anti-spammers don't see as being viable. But nooo, she didn't do that, she evidently thought she knew more than everyone else, including those of us who have been dealing with spam for years, and the media blindly quoted her as though she were some anti-spam guru, which she's not. I apologize for sounding bitter and turning this post into a rant, but dammit, it annoys me to no end when people don't think, and their short-sighted actions set back the efforts of an entire community. Dyson is in serious need of a clue. <gets off his soapbox>

can't resist (Score:2)

Um, I guess it's a roughly spherical object slightly flattened at the poles. Of course we all know the earth isn't really shaped like that. Right? --Shoeboy

Re:What is your basis for this? (Score:3)

I invite you to visit ICANNWatch [icannwatch.org] for details on what's wrong with ICANN. P.S. To say that "the truth lies somewhere in the middle" is to (1) allow "truth" to turn on how extreme participants happen to be; (2) ignore the possibility that there is real truth (is the truth half way between the flat-earth crowd and the spherical-earth crowd? What sort of shape is that?) A. Michael Froomkin [mailto], U. Miami School of Law, POB 248087, Coral Gables, FL 33124, USA

Re:Are you kidding?
(Score:1)

Congratulations on sinking to the level of ad hominems. Do you generally assume that people who don't agree with you are stupid, or is it only if they happen to be men? It's true that much of patriarchal culture is based on the fiction that men are somehow more 'powerful' than women. To sustain this, men are encouraged to pretend they have more power than they do. So we have generations of men, busy trying to hide the fact that they are just as confused and vulnerable at times as everyone else. Whereas women are encouraged to hide their power and their intellect from the world, especially so as not to spook the men. Sadly, some women learn to hide their power so well, they even manage to hide it from themselves and then assume it must be the men who 'stole' it, and who they must be revenged upon. The simple truth is that Patriarchy doesn't work for men or women and I don't like it any more than you do. But bashing men to express your anger isn't going to change anything.

Re:Hmm... (Score:1)

Re:Yet more tired geek misogyny (Score:2)

BTW, the author of the article is female (I know, that doesn't necessarily prove anything but still...)

====== Webmasters: get a Free Palm Pilot [jackpot.com] for referring 25 signups. ========

Re:One thing to remember... (Score:3)

The UDRP. The request to WIPO to come up with a list of famous marks that will be a priori excluded from the existing and future namespace. The over-regulation of IPv6 space as regards address allocation. The entire constituency model. The introduction of the GAC. The repeated, blatant, and unremorseful overstepping of power and authority boundaries. Top-down rather than bottom-up governance. The "ICANN by-laws of the week" game they play, with retroactive alteration of the by-laws to suit their needs. Their refusal to relinquish power. Their refusal to allow individuals to have any say in the process. Their catering to big corporate and Intellectual Property interests.
Either your statement was meant in jest, or you're not well-informed of ICANN's actions.

Dyson bad for ICANN, Internet (Score:5)

Dyson and the other initial ICANN Board of Directors have demonstrated a distinct lack of ability to work in the open, to accept input in a bottom-up fashion, and to understand the technical aspects of the entity about which they are supposed to make decisions. Now, in Cairo, the ICANN Board of Directors, led by Esther Dyson in this matter, decided to scrap the General Assembly process by which 9 new, individual ICANN Directors would be named, and decided to eliminate 4 of them. Their reasoning? Because they're afraid of handing over control to people who "don't understand the Internet". A sad comment indeed, coming from a Board with a vested financial interest in the outcome of decisions related to namespace and IP-space, who do not have even a rudimentary understanding of that which they govern, except perhaps for Vint Cerf. The ICANN Board of Directors was supposed to be completely elected and the original people, Dyson included, removed by September of 1999. Dyson and her cohorts have repeatedly voted themselves more years as ICANN Directors, and have both refused to relinquish control to any form of elected body, and have refused to run ICANN in the grassroots, bottom-up, narrowly technical manner that the contracts with the US Government require. That ICANN continues to exist at all is a miracle, and a nightmare. That Dyson was chosen as figurehead for it and continues to lend her name to it says volumes about her character.

Re:Esther the original geek chick?? (Score:1)

Haven't you read any of her crap? Then you'd know - Esther's no geek.

a song... (Score:1)

long time coding... Where have all the hackers gone, long time ago. Where have all the hackers gone, gone to Heaven everyone... When will they ever learn, when will they ever learn? Corporate interests will eventually drive us all into the status of ham-radio operators.
Will you still be around? Is your dedication strong enough? Or will you start watching ZDTV?

Re:Empty Suit (Score:2)

-russ

Re:Empty Suit (Score:2)

-russ

Re:Are you kidding? (Score:2)

Dyson is a woman, she fails to meet the narrow standards demanded of women, therefore she cannot be treated as a human. You're actually saying that the NYT are treating her as subhuman. Come on, that's ridiculous. I know some men are really like this, but the article doesn't support that claim. Describing what someone looks like, in the eye of a certain beholder, does not amount to treating them as subhuman. By stark contrast, you pointedly ignore how Richard Stallman is treated. Oh, so it's just par for the course if Stallman is mocked for his looks, but if a woman is mocked for her looks (and for God's sake, don't women mock each other for their looks often enough!!) - it's automatically evidence of misogyny? How blinkered and biased is that! Much of what passes for feminism is obscurantist nonsense, in my experience. "Transcending patriarchal reason" and all that nonsense. The trouble is, you get a rebound effect, where feminists go too far and take on the attributes of the oppressors (and even in some cases become the oppressors, ironically).
But of course, this dichotomy of meaning is the whole point: The offending author gets to water down the definition of the charge to fit their needs, while the average reader understands the word to mean something far more serious: "Oh my! Person X is misogynist!" vs. "Oh my! Person X pointed out that Esther Dyson dresses inappropriately for a highly powerful posistion!". It would be funny if not for the Orwellian parellels. But this is half-stepping: Why not take this practice to its ultimate conclusion: "In the context of antithetical theory analysis the word "wrong" is posited to mean correct. For this reason it is impossible to disagree with me. I win." Re:The Feminists Have Nuclear Weapons! (Score:1) --Flam Re:Democracy sucks, sometimes (Score:1) Re:"Redefining" politics (Score:1) The only reason people think we have a democracy is that they don't understand the difference between a democracy and a republic. A democracy is where the majority votes its own government. A republic has some kind of intermediary, voted on by the public, that provides the government. However, a republic generally implies some sort of system to check the power of the elected, setting the boundaries of power, etc. Walt Empty Suit (Score:2) Her last book was completely devoid of content. I think there as been such a push to put a woman in power someplace that they just grabbed the one who's the most adept at self-promotion and not one of the many strong women who could be in this position. Not that I think that the person's sex matters here in any way. The sooner Dyson goes away the better. Re:Dyson bad for ICANN, Internet (Score:2) Unfortunately, one of the things which happened very early on is that a lot of the crazies from the black helicopter club started attacker both ICANN and Dyson on a number of public mailing lists, including the IETF list. 
This caused more reasoned people who had their concerns about ICANN to get lumped with the crazies, and so therefore a lot of people didn't care to speak up about potential shortcomings with the whole ICANN setup. That's probably the reason for the reluctance of people to speak negatively about Dyson, even now that it's been two years later, which the NY Times noted. It also had the unfortunate effect of isolating the Dysan and the ICANN board from constructive criticism which have helped them out, although it's not clear they're all that willing to listen (which was and is another problem). Lester Dyson (Score:2) Re:The definitions of "politics" (Score:2) Dyson is smart, but maybe not the best choice (Score:1) That has additional drawbacks as well. As others have already pointed out in this thread, intellectuals tend to think of their solutions as the most obvious and sensible, often without regard to other people's opinions. This gets the job done, but in a way that rarely is satisfying to anyone. Should she be a member of Icann? Probably, but she probably should not be its leader. She's competent in that role, but I don't think she excels in it. As the saying goes, Jack of all trades, master of none. She made her reputation in computers. Computers alone do not equal the Internet (people are a bigger part of it), and she's taken a while to learn that. Re:Another internet-era cliche... (Score:3) Well because of Bart's test answers we have changed history again. Now America was discovered in 1941 by Some Guy... Hmm... (Score:4) If I was her, I'd say "Screw you guys, I'm going home" and take her network home, install IPv6 on it, and start my own damn network. With hookers. And gambling. But that's just me. Re:political impotence is BAD? (Score:1) The definitions of "politics" (Score:3) When used in this manner, saying someone doesn't know politics indicates they don't know how to make people happy without actually giving them anything substantial. 
And when you fail to make a lot of people happy, things simply don't get done. It may be admirable to lack this particular quality, but in reality authority figures who don't know politics are often doomed to fail. Re:The Feminists Have Nuclear Weapons! (Score:1) Once again, the point is missed, and the Anonymous Coward (apt, that) must resort to insults. There is a difference between feminist literature, and literature written by women. -- Re:Are you kidding? (Score:1) See this post. [slashdot.org] -- Re:Are you kidding? (Score:2) I posted the link because you obviously didn't know what it meant, and I figured I would save you the time of looking it up for yourself.? Er, I think it's time to stop reading radical feminist literature, where any criticism of a woman is seen as hatred. Or do you think that woman are above criticism? It would seem so, based on your overreaction to this article. By the way, the article was written by a woman. I guess she's just been corrupted by the male-dominated world, and she feels she has to attack her "sistahs" in order to break through. -- Re:Are you kidding? (Score:2) Little, if any, feminist works are man hating. I suppose that depends on how you define "feminist". There is a difference between philosophy that happens to be written by a woman, and feminist philosophy. I define feminist literature as "glorification of women at the expense of men." How else can it be defined? You mention "masculinist" literature, but that's nonsense. Philosophy or literature does not have gender; it's either good and thought-provoking or it's not. Why do you think that only a small minority of women in surveys identify themselves as feminists? It's because the radicals have taken over what was once a legitimate movement. Now it's all about a bunch of angry, bitter women who still think a woman choosing to raise her children full-time is shameful and a betrayal of women everywhere. 
That's not to say that all reasonable women who happen to call themselves "feminists" are that way. But that's what the so-called leaders are. -- Re:Are you kidding? (Score:2) Are you so angry and bitter that you have to resort to quoting out of context? I guess it's easier for you to be a sexist that way. Just for the record, I made a distinction between "feminist literature" and literature written by women. The standard, as I pointed out, is whether it's good literature. Like it or not, your "leaders" have defined modern feminism exactly as I stated it. Take issue with them, not me. -- Re:The Feminists Have Nuclear Weapons! (Score:2) Look, you and others keep asking for "references", and I resist this for a simple reason: I can't win. If I name some resident wacko, then it'll simply get dismissed as "well, yeah, that women's a wacko, but she is not representative of feminists". This said... I'm talking about the Susan Faludis and Patricia Ireland's of the world, whom the media holds up as leaders of the movement. Why do you think that in surveys most women don't identify themselves as feminists? It's because the definition has been conscripted by the wacko extremists. "Gay" used to be a really useful word, but it was conscripted by the homosexual population and now the definition has changed. Again, don't blame me for the actions of the extremists who corrupted the word. You want an example? I'll give you one. Why is abortion a feminist issue? It should not be. The issue is split almost exactly down the middle among women whether abortion should be legal, yet the feminist "leadership" thinks it's a core "obvious" issue. Where is the intellectual freedom among feminists that allow a pro-life stance? How many pro-life speakers are invited to feminist rallys? That would be a big, fat zero. Of course, I could bring up NOW's support of Clinton's sexual predatory practices. I guess sexual harrassment is OK, as long as it's done by someone NOW likes. 
[Ted Kennedy, anyone?] -- Are you kidding? (Score:4) Most articles comment on Stallman's disregard for personal grooming. And Torvalds gets no comment, because his dress is not unusual. Her dress and mannerisms are commented upon because they are unusual for this political position, not unlike a lot of stories in the early days of dot-com companies about the notoriously informal dress of management. This story is specifically about her possibly being not the right candidate for the position, so of course her presentation is going to be fair-game. Your oversensitivity is the only problem here. And by the way, misogynistic [dictionary.com] means "Of or characterized by a hatred of women." Even if we grant that the NYT was mocking Dyson unfairly, that hardly qualifies as "hatred of women". -- "Redefining" politics (Score:4) What many people don't realize is that a talent for politics is not automatically a bad thing. Politics is the art/science of administration and control of a governing entity. There is no question that the most efficient form of government is a benevolent dictatorship. However, it's also the most unstable because it has the tendency to lose the "benevolent" part. Many "geeks" are attracted to the idea of a dictatorship when an acknowledged trusted, smart person is in charge. It feels right; "just let them get some work done, and keep the idiots out of the way!" However, if we're interested in a long-term stable organization, then it's necessary to sacrifice efficiency for stability. The stability comes from balancing power with smart people (with honest, different views), and the idiots (with different views, but also with power), and the people in middle are what we call politicians. They strive to balance the opposite forces, find consensus, and move things forward. There is a certain personality that is required to do this, and for the most part, geeks are rarely this type. 
Before everyone has a knee-jerk reaction that says, "be thankful we have a smart person in charge, and get all the other idiots out of the way," consider that this is fine in the short run, but is NOT desirable and not stable in the long run. I will resist the urge to use the FSF as an example of an organization damaged by too much dictatorship, and too little politics. :)
--

Re:Are you kidding? (Score:4)
But you have that luxury, not being a woman. No, I have that luxury because I am a thinking human being. You've obviously bought deeply into the victimology of women created by the self-appointed leaders of the feminist movement. I've got news for you: Their only purpose is to enhance their power at your expense. Is it more difficult for women to succeed? Yes, but not even CLOSE to as difficult as it was in the 60s. The feminist movement is, for the most part, obsolete. That's why the writing gets shriller and shriller every year, because they need to generate as much hatred of men as possible in order to keep the power base intact. You say this as if it were funny. It is not. I said it because I know a lot more about the victim mindset than you think. When you realize that it really is funny, and why, you will be able to go a lot farther in life, and be a lot happier besides.
--

Another internet-era cliche... (Score:4)
Is it just me, or have I seen this phrase far too often relating to many endeavors in this internet age? Basically it translates, "so-and-so is really bad at something, but if it turns out that I'm wrong, then they're redefining it."
This is ironic considering how capitalist friendly Dyson's perceived to be. A dictatorship. The natural state of things when the democratic process is missing. ==== Re:Are you kidding? (Score:2) You don't seem to have read much feminist literature. Little, if any, feminist works are man hating. The justification for feminist literature is that it is necessary to compensate for what is essentially a male dominated world i.e. that standards works are basically pieces of masculinist literature. I think this is inadequate, and sure, feminist writings may be prejudiced in their treatment of men and women, but just as you kindly pointed out earlier that misogyny means "woman hating" and not just prejudice in general, similarly neither are feminist works in any way "man hating". ==== Re:"Redefining" politics (Score:2) Well I think there's plenty of question. Many "geeks" are attracted to the idea of a dictatorship when an acknowledged trusted, smart person is in charge. It feels right; "just let them get some work done, and keep the idiots out of the way!" Democracy doesn't require that everyone make decisions. That would be silly, since most people don't have the knowledge or ability to do so. Instead, democracies are set up so that people elect representatives. If the people in charge are supported by the public, if they prefer to have that group of people running things over any other group, then you have a democracy. So in the context of software "dictatorships", what you have is not a dictatorship, but a democracy i.e. Linus runs the show only because he has the support of the people. ==== Re:Are you kidding? (Score:2) If you choose your definitions well enough, then there's no room for further debate. I've a friend who's taking a class on Orientalism (how the west perceives the east), and one of the assigned readings is a feminist text on how female authors during the period helped to break down the myth of the orient as a lush paradise of harems. 
This, I think, is a legitimate contribution by feminism (although there are of course problems). But I'm not here to praise or even defend feminism; I think there are problems with it. I'm here to point out that your earlier assertion that feminism is about male hating is nonsensical. ==== Re:The Feminists Have Nuclear Weapons! (Score:2) Why do you persist in making unsubstantiated claims? You clearly have no clue but plenty of misconceptions about feminist writing. What a misogynist*. * Note that I'm deliberately misusing the term misogynist here, just as you're misusing the term feminist. ==== I thought it was pretty funny actually... (Score:1) One thing to remember... (Score:1) I understand that Dyson has made herself unpopular by some of her actions/comments during her tenure as interim chairwoman. I'm not too crazy about some of her comments myself. (The "people who are stupid" comment for example. I think if everyone wanted to vote for, say, CarrotTop, then he should be elected. I can think of a few people I would rather see in charge of ICANN. I'm not one of them. Frankly, I'm glad it's her job, because I wouldn't want it. Also, I'm also glad that she's making some mistakes; if she weren't, then I might be afraid of her. At least right now we know she isn't screwing anyone over - at least not very well. Re:One thing to remember... (Score:1) Democracy sucks, sometimes (Score:1) "Many opposed the interim board's proposal to elect permanent members through an indirect process. But Dyson, the interim chairwoman, defended the policy of using an electoral council instead of letting the world's Internet users vote directly for their board representatives. That was the best way, she told the group, to prevent the nomination of "people who are stupid." " She must be familiar with the outcome of Slashdot polls. 
Martha Stewart's got nothing on Esther Dyson (Score:1)
In some ways, the Godwin guy's right - it's a tough job, and it took someone like Esther Dyson to give the organization credibility. Too bad that she's now also treated as the goat.

Re:Another internet-era cliche... (Score:1)

Re:Are you kidding? (Score:1)
You've obviously bought deeply into the victimology of women created by the self-appointed leaders of the feminist movement. I've got news for you: Their only purpose is to enhance their power at your expense. Wow, that's a staggeringly broad statement. Do you have some references to back that up?

Re:Are you kidding? (Score:1)
References please. (And please find some that are relatively influential. By finding some nutcases, I could "prove" that Christians want to turn America into a version of Iran.)

So does that mean... (Score:1)

Eh? (Score:1)

She has some point about the "stupid people" (Score:1)
Direct representation can lead to people being chosen because they have a lot of influence, money ("Vote for me and you get a free thingie!"), or because they make use of scare tactics and rhetoric. ("Vote for me! I'm against kiddie porn!") This two-stage mechanism is meant to provide some stability, but why should it only be used by corporate types? Protesters should forget the knee-jerk reaction about direct democracy and use this process to get themselves more influence. Convince Esther and her crew that you are not "stupid people" and get yourself a place in that structure. Inez{R} P.S. I won't protest the fact that the US appoints someone to oversee something international. After all, it is probably the only way to get some results in a few years instead of in a few tens of years... [sigh]
This is similar to how Communist systems actually ran (i.e. a central "Party"). This is ironic considering how capitalist-friendly Dyson's perceived to be. -- Des Courtney

I bet (Score:2)
In other words: Yes, Joe Random Geek, you too can become a politician / talk show host / #preferred high status job involving every now and then having to talk to another person#

Yet more tired geek misogyny (Score:1)
Let's face it, we don't get treated to descriptions of Stallman and Torvalds' dress sense and grooming. How much of this "friction" is due to Dyson, and how much of it is due to the fact that lots of self-styled "hackers" can't hack it when faced with an intelligent, articulate woman? And in either case, why can't the NY Times get rid of its annoying misogynistic habits?

Re:Are you kidding? (Score:1)
Unlike you, I am aware of the meaning of the word "misogynistic" without having to visit a lame website. And it is used in feminist theory to refer to patriarchal discourse aimed at marginalising women and removing them (treating them, if you will, as Other) from the main field of discourse. Sadly, there is no "getridofyourtiredpatriarchalpreconceptions dot com", otherwise I could post a link and get +1, informative, too.

Re:Are you kidding? (Score:2)
Of course you do. But you have that luxury, not being a woman. By the way, the article was written by a woman. I guess she's just been corrupted by the male-dominated world, and she feels she has to attack her "sistahs" in order to break through. You say this as if it were funny. It is not.

Re:Are you kidding? (Score:2)
Christ, this year's crop of unthinking sexist slashbots is even denser than last year's

Do it.. (Score:1)
-meff

Re:Yet more tired geek misogyny (Score:1)
You freedom-loving zealots don't seem to have read your own constitution.
Perhaps slashdot could set up some "basic educational links" from the home page so that under-educated 14-year old linux zealots could learn a bit more about the real world beyond their screens. It's almost as if the slashdot readership has no sense of social responsibility. They are too busy playing Quake, and downloading warez and pr0n to see the big picture. Sexism like this is not just unpleasant, it is in fact illegal. Just because it is posted on a web forum does not make it any more legal. Not only under US law, but also under the higher international law as laid down by the UN, which the US agreed to be bound by. I look forward to the day when slashdot is held responsible for its crimes. Especially the warez trading which is killing legitimate software development. thank you dmg NY Times URL for non-registered users (Score:5) Esther Dyson in DC (Score:1) I was surprised Esther got the ICANN appointment given the personal experience recounted by my friend. At the time I thought perhaps Esther had had a bad day in DC. Now it appears more likely someone(s) didn't meet Esther and/or understand the job before making this placement. Too bad because ICANN is an important tool in hammering out a great many of the agreements-by-convention that keep the "net" running. Re:"Redefining" politics (Score:1) Politicing is a fundamental human activity that everyone practices and, even though we demonize it, politics will keep our civilization from being just another bug on the windshield of history. Dictatorships, even "benevolent" ones, are doomed to failure not because of any moral lack but because of their inherent inefficiency. Rule by compulsion works only as long as the compulsion is applied effectively and often. Look back into history, and you'll see the rusting hulks of dictatorships that either forgot this principle or ran out of resources applying it. The nature of our society demands that our leaders be good politicians. 
Leaders who are poor practitioners, no matter how well-meaning or technically adept, will fail.
http://slashdot.org/story/00/04/10/117232/portrait-of-icann-chairwoman-esther-dyson?sdsrc=nextbtmnext
This module implements an algorithm to compute the diff between two sequences of lines.

A basic example of diffInt on 2 arrays of integers:

import experimental/diff
echo diffInt([0, 1, 2, 3, 4, 5, 6, 7, 8], [-1, 1, 2, 3, 4, 5, 666, 7, 42])

Another short example of diffText to diff strings:

import experimental/diff

# 2 samples of text for testing (from "The Call of Cthulhu" by Lovecraft)
let txt0 = """I have looked upon all the universe has to hold of horror, even skies
of spring and flowers of summer must ever be poison to me."""
let txt1 = """I have looked upon all your code has to hold of bugs, even skies
of spring and flowers of summer must ever be poison to me."""

echo diffText(txt0, txt1)

- To learn more, see Diff on Wikipedia.

Types

Procs

proc diffInt(arrayA, arrayB: openArray[int]): seq[Item] {.raises: [], tags: [].}

Find the difference in 2 arrays of integers.
arrayA: A-version of the numbers (usually the old one)
arrayB: B-version of the numbers (usually the new one)
Returns a sequence of Items that describe the differences.

proc diffText(textA, textB: string): seq[Item] {.raises: [KeyError], tags: [].}

Find the difference in 2 text documents, comparing by text lines. The algorithm itself compares 2 arrays of numbers, so when comparing 2 text documents each line is converted into a (hash) number. This hash value is computed by storing all text lines in a common hash table so it can find duplicates in there, generating a new number each time a new text line is inserted.
textA: A-version of the text (usually the old one)
textB: B-version of the text (usually the new one)
Returns a seq of Items that describe the differences.
https://nim-lang.org/docs/diff.html
You have a Collection of Products. Every Product effectively maps to an Item. Now I needed to bind this Product collection to a grid and display the item name. So basically it would be something like prod[i].Item.Name, or any other property of Item could be accessed. Basically, the idea is to iterate over an inner member inside every element of the parent collection. If you wanted to bind this to a GridView, you could end up with very complex data-binding syntax by casting the items, etc. Another option that I wanted to bring up here is creating an IEnumerable view using the parent collection.

public class ProdItemList : IEnumerable
{
    private List<Product> Products;

    public ProdItemList(List<Product> products)
    {
        this.Products = products;
    }

    public IEnumerator GetEnumerator()
    {
        foreach (Product product in Products)
            yield return product.Item;
    }
}

You can then bind the DataSource and it would effectively bind to the properties in Item, with something like:

ItemGrid.DataSource = new ProdItemList(products);

I really liked this article regarding IEnumerable and 2.0. I am sure there are many approaches for this, but I just thought I'd put it down here for further discussion.
http://blogs.msdn.com/b/sajay/archive/2006/08/25/722132.aspx
Accessible overlays that don't suck!

Olay's one dependency is jQuery, so include that before Olay.

bower install olay

...and get the JavaScript and CSS files from components/olay/olay.js and components/olay/olay.css respectively.

Try running this code on the demo page.

// Show "¡Olé!" for one second and then hide, alerting "Farewell, mí amigo."
var olay = new Olay('<h1>¡Olé!</h1>', { duration: 1000 });
olay.$el.on('olay:hide', function () {
  alert('Farewell, mí amigo.');
});
olay.show();

duration (default 0) - The number of milliseconds to display the olay before automatically invoking hide. A 0 duration means the olay will be displayed indefinitely.

transition (default 'js-olay-scale-up') - The transition to use. Since this option simply refers to the class that will be added to the $container element, it is very easy to create your own CSS transitions and namespace them with whatever transition class you'd like. Olay comes with: 'js-olay-scale-up', 'js-olay-scale-down', 'js-olay-slide' and 'js-olay-flip'.

transitionDuration (default 250) - The duration of the transition in milliseconds. IMPORTANT! The transitionDuration must match transition-duration in your CSS to work properly.

(default [27]) - An array of keycodes that should hide the olay when pressed. Set to [] or a falsy value to disable.

(default true) - A boolean specifying whether the olay should be hidden when anything outside the $content element is clicked.

preserve (default false) - If true, detach will be used to remove the $container rather than remove. This effectively preserves the jQuery data associated with olay's DOM elements so they can be re-appended later. Use this option if you are going to be showing and hiding the same DOM elements repeatedly.
Naturally, a <div> with display: table-cell to live inside $table. The wrapper <div> for $el. Style the content wrapper using the .js-olay- content selector in CSS> A jQuery object representing the element that was passed in the constructor. As mentioned below, 'show' and 'hide' events are triggered on this object. Show the olay, appending $container to the DOM. Hide the olay. Note: $container is removed from the DOM with jQuery's remove method. Set the $el property and properly append it to $content. This allows the creation of "empty" olay instances to be populated later. Remove the $container from the DOM using jQuery's remove. This _destroy_s all jQuery data ( .data, events, etc.) that was associated with the $container and its children. This will be handled automatically and should only ever need to be called when preserve is true. Olay uses jQuery's event emitter to fire 'olay:show' and 'olay:hide' events when show or hide are invoked. See the example above for usage. Any element with the 'js-olay-hide' class inside an olay will hide its parent olay once clicked. This makes adding close/hide buttons as easy as <span class='js-olay-hide'>Close</span>.
https://www.npmjs.com/package/olay
Eons ago in computer time, I left off with an article called Guide to Win32 Paint for Intermediates. That article described the basics of the WM_PAINT message, as well as how to use most of the different types of Win32 Device Contexts (DC). This article will describe the Memory DC. The Metafile DC will still be ignored in this article.

My previous articles have been written with WTL. I still prefer to do Win32 development with WTL. However, the demonstration source code provided will be raw C/C++ Win32 calls and window management. This is to simplify the code that you are looking at, and to eliminate a code dependency.

Nine years ago, I started up a series on Win32 paint tutorials and indicated in the first article that it was the first in a series of five. I had completely forgotten that intention until I re-read the first article. I thought about revisiting this topic and finishing the series, but I quickly dismissed the thought, thinking that too much time has passed and people are moving on to other technologies like .NET, DirectX, and OpenGL. I am still surprised to see how often these basic tutorials are visited and downloaded.

I then noticed myself looking up basic information for other topics in development. I have recently done some GUI work with custom controls in Java, and I was surprised to see how similar the process was to Win32 painting. (While proof-reading, I decided that I get surprised a lot.) What I have realized is that not much has actually changed in the last 9 years. There are new libraries that make interacting with multimedia easier than the basic Win32 APIs, but the concepts are the same. Also, some technologies such as DirectX may have a high learning curve to get up and running, and the extra power of those frameworks may not even be necessary for the simple UI painting a developer wants in order to help their application stand out from all of the similar programs.
I have decided that I will continue to document the usage, capabilities, and my experience with the Win32 API calls, primarily GDI. There is so much potential there, and not enough context to help people apply it to their programs to create amazing things. There are many other technologies for generating graphics on Windows that create two-dimensional vector-based and raster-based images: GDI+, DirectDraw 2D, DirectWrite. I would like to explore some of those technologies in the future. For now, let's continue on with the venerable GDI API.

The memory DC is an important concept and tool in Win32 development. Most other DC types are directly related to a window, or some device such as a printer on the system. The memory DC is an object that exists only in memory. Some of the common uses for memory DCs are to store intermediate images for display composition; load bitmaps and other image types from files; and create back-buffers for double and triple buffer painting to reduce flicker with animation.

There is only one way to create a Memory Device Context:

HDC CreateCompatibleDC(
    __in HDC hdc    // Handle to an existing DC
);

The memory DC that is created will be compatible with the device which the hdc value passed in represents. This means that raster operations like BitBlt and StretchBlt can be used to transfer the image in the memory DC over to the same device hdc represents.

When you are using memory DCs for devices other than the display, be aware that the device must support raster operations for the memory DC to be useful. The call to create may succeed, but attempting to use the resulting memory DC will fail. "Don't they all support raster operations?" Not necessarily; think of pen-based plotters. Meta-files are another device that is not useful with memory DCs. The meta-file is used to record GDI actions, and its playback is not rasterized.
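As a concrete illustration of that caveat, here is a small Windows-only sketch of my own (not from the article) that asks a device whether it supports the raster operations a memory DC workflow depends on:

```cpp
#include <windows.h>

// Returns true when the device behind hdc supports BitBlt-style raster
// transfers; a memory DC is of little use with a device that does not.
bool SupportsRasterOps(HDC hdc)
{
    int caps = ::GetDeviceCaps(hdc, RASTERCAPS);
    return (caps & RC_BITBLT) != 0;  // check RC_STRETCHBLT too if you StretchBlt
}
```

Because this depends on the Win32 headers and a live device context, it only runs on Windows.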
Regardless of device, you can determine if the device you are working with supports raster operations by calling GetDeviceCaps and requesting the support flags for RASTERCAPS.

If hdc is NULL, then a memory DC will be created that is compatible with the application's current display screen. One caveat to be aware of when this type of memory DC is created is that the memory DC is attached to the thread which created it. When the thread is destroyed, the memory DC will be destroyed as well.

When you are finished with the memory DC, use this function to release the resource:

BOOL DeleteDC(
    __in HDC hdc    // Handle to the DC, in this case, memory DC
);

If you were to start using the new memory DC immediately after you call CreateCompatibleDC, you will most likely be disappointed. The memory DC is initialized with a monochromatic 1x1 pixel bitmap by default. If you are after a single pixel that can be black or white, you have exactly what you need. For everyone else, the next step will be to allocate a buffer for the memory DC to write to.

This is the simplest method to create a buffer for a memory DC, and it is the only method I will cover in this article:

HBITMAP CreateCompatibleBitmap(
    __in HDC hdc,      // Handle to the DC
    __in int nWidth,   // Desired width of the bitmap in pixels
    __in int nHeight   // Desired height of the bitmap in pixels
);

This will create something called a Device-Dependent Bitmap (DDB). That is a bitmap that is only compatible with the current device (display, printer, etc.), and therefore can only be selected into a DC that is compatible with this device. With the newly created bitmap, you can associate it with the memory DC with a call to SelectObject.

The other type of bitmap is a Device-Independent Bitmap (DIB). DIBs provide more advanced access to the bitmap resource, and are quite a bit more complicated to set up and initialize properly.
They are general-use bitmap objects that are independent of the bit format required for any device. When you want direct pixel access to your bitmap, a DIB is the way to go. Because I want to focus on memory DCs, I will save the DIB topic for a separate article.

Here is what the sequence of commands should look like to create a memory DC with a buffer to write into:

// Create a memory DC and Bitmap.
HDC hDC ...   // existing DC from a window or call to BeginPaint
int width  = 400;
int height = 300;

HDC hMemDC   = ::CreateCompatibleDC(hDC);
HBITMAP hBmp = ::CreateCompatibleBitmap(hDC, width, height);
::SelectObject(hMemDC, hBmp);

Before we get too far away from code where I showed you what you need to start running, I want to make sure you are holding this new pair of scissors safely. Do not use a memory DC with a call to CreateCompatibleBitmap.

...
// You may be tempted to do this; DON'T:
HDC hMemDC = ::CreateCompatibleDC(hDC);
//                                  DON'T DO THIS
//                                      |
//                                      V
HBITMAP hBmp = ::CreateCompatibleBitmap(hMemDC, width, height);
...
// TIP: Try to use the same DC to create
//      the Bitmap which you used to create the memory DC.

Remember the part about how the memory DC is initialized with a monochromatic 1x1 pixel bitmap by default?! You should; I bolded it above. If you create a bitmap to be compatible with the memory DC, you will get a monochromatic bitmap of the size you requested. Usually, it is easy to recognize this mistake when you see the results on the screen. The trouble is, it may not paint to the screen only as black and white. It may use the System Foreground color and System Background color instead. This makes the issue a little more difficult to recognize, especially if you are compositing multiple bitmaps together before you BitBlt everything to the screen.

There can be only one type of each GDI object selected into any type of DC at a time.
The memory DC is unique because it is the only type of DC into which an HBITMAP can be selected with a call to ::SelectObject. Unlike other GDI object types, the HBITMAP can only be selected into one DC at a time. Therefore, if you are using the same bitmap with multiple memory DCs, be sure to save the original HGDIOBJ pushed out from the memory DC when you select your bitmap into the DC. Otherwise, your attempt to select the bitmap into a second memory DC will fail.

// Standard setup for creating memory DC and compatible bitmap buffer.
HDC hMemDC   = ::CreateCompatibleDC(hDC);
HBITMAP hBmp = ::CreateCompatibleBitmap(hDC, width, height);
::SelectObject(hMemDC, hBmp);
...
// Another memory DC is required later for something spectacular.
// (Note: The same errors will occur with the following code
//  even if you are only attempting something mediocre at best.)
HDC hMemDC2 = ::CreateCompatibleDC(hDC);

// Attempt to select the original bitmap that is currently held by hMemDC.
::SelectObject(hMemDC2, hBmp);   // <- No Bueno: Call fails, returns NULL.

The following code shows the proper handling of a single bitmap with multiple memory DCs.

// Standard setup, getting pretty good with this part.
HDC hMemDC   = ::CreateCompatibleDC(hDC);
HDC hMemDC2  = ::CreateCompatibleDC(hDC);
HBITMAP hBmp = ::CreateCompatibleBitmap(hDC, width, height);

// Save off the object that is pushed out by selecting the new bitmap.
HGDIOBJ hOldBmp = ::SelectObject(hMemDC, hBmp);
...
// Select the original bitmap into this DC.
// 1) Free the original memory DC's hold on the bitmap.
::SelectObject(hMemDC, hOldBmp);   // <- returns same handle as hBmp.
// 2) The bitmap is available to be selected into any compatible memory DC.
::SelectObject(hMemDC2, hBmp);

There is a possible shortcut that you can use to release a memory DC's hold on a bitmap. This shortcut is valid only if you no longer need the original memory DC but you want to keep the bitmap object around.
If you delete the memory DC, this will release the hold on the bitmap and there is no further work to do. You no longer have to restore the original bitmap handle into the memory DC to release the bitmap handle.

// Standard memDC/bitmap setup
...
::DeleteDC(hMemDC);
// hBmp is now available to be selected into a different memory DC if desired.
::SelectObject(hMemDC2, hBmp);

The previous shortcut is actually a practice that would be a good habit to use all of the time. The HBITMAP cannot be deleted while it is selected into a memory DC. Therefore, if you attempt to call ::DeleteObject on your bitmap without first restoring the original bitmap into the memory DC, or deleting the memory DC, you will have a GDI resource leak.

// Standard memDC/bitmap setup
...
::DeleteObject(hBmp);   // <- Call fails, hBmp is still held by hMemDC.
                        //    A resource leak will occur.
                        //    (Pfft, who cares? Isn't that why you splurged
                        //     and bought 24 GB of RAM for your PC anyway?)
::DeleteDC(hMemDC);

To avoid this situation entirely, a good practice is to release the memory DCs first. If you desire to destroy the bitmap at this time as well, there are no issues with freeing it at this point.

// Simply swap the order of clean up from the example above.
// Always free the memory DC first, then the bitmap, to avoid common resource leaks.
::DeleteDC(hMemDC);
::DeleteObject(hBmp);   // No resource leak.
                        // (Looks like you will have to find some
                        //  other way to frivolously waste resources.)

There is one situation where you just have to accept the facts and write all of the code in a specific order for all of the APIs to work properly: you want to keep the memory DC and free the bitmap. To do this, the original bitmap will need to be selected into the memory DC, then the desired bitmap can be destroyed.

// Standard memDC/bitmap setup
...
// Kill the bitmap, keep the memory DC
// Make sure you save the original bitmap from the memory DC:
HGDIOBJ hOldBmp = ::SelectObject(hMemDC, hBmp);

// Select the original bitmap back into the memory DC.
// This frees the BITMAP object to be destroyed.
// Note: It does not have to be the original bitmap that gets selected back into hMemDC.
//       It only has to be a compatible bitmap different than hBmp.
::SelectObject(hMemDC, hOldBmp);
::DeleteObject(hBmp);

// hMemDC is still valid at this point, and can accept another compatible bitmap.
// If no other bitmap is selected into the memory DC, you are back to the
// monochromatic 1x1 pixel bitmap.

After I reviewed my previous articles, I noticed a few functions that I have not mentioned yet that are very useful. This seems like a good place to introduce them. These functions can be used with any type of DC; they are not exclusive to memory DCs.

MSDN indicates for the SelectObject method: "An application should always replace a new object with the original, default object after it has finished drawing with the new object."

Most times, you can get away with ignoring this and forget you ever saw it. However, if you are subclassing or creating custom controls and playing with the Windows messages that paint the controls (WM_ERASEBKGND, WM_CTLCOLOR_XXX, WM_PRINT ...), you will most likely want to follow this suggestion. Following this suggestion becomes very painful when you are swapping between multiple brushes, pens, regions, fonts, and modes because you have a very colorful picture to paint. However, your code will look something like this:

// Setup paint for first layer.
HGDIOBJ hOldBrush = ::SelectObject(hDC, hBrush);
HGDIOBJ hOldPen   = ::SelectObject(hDC, hPen);
HGDIOBJ hOldFont  = ::SelectObject(hDC, hFont);
HGDIOBJ hOldMan   = ::SelectObject(hDC, hBmp);

// ... Paint a motley display

::SelectObject(hDC, hOldBrush);
::SelectObject(hDC, hOldPen);
::SelectObject(hDC, hOldFont);
::SelectObject(hDC, hOldMan);

Take the previous segment of code, intersperse other DCs used for compositing, function calls that may not play nice the way you are trying, and a couple hundred lines of paint code, and it starts to become painful to restore your DCs to the state they were given to you in. Not to mention each time you call SelectObject for a new DC, you need to come up with a new name for the old GDI object you will have to restore.

// Take a snap-shot of the current state of the DC
int SaveDC(
    __in HDC hdc        // Handle to the DC
);

// Restore the DC to a previously saved state
int RestoreDC(
    __in HDC hdc,       // Handle to the DC
    __in int nSavedDC   // Saved state claim ticket
);

This pair of functions behaves similar to Push (SaveDC) and Pop (RestoreDC) behavior from a stack. The implementation itself is a stack. Each time you call SaveDC, another snap-shot of the DC configuration is saved on the stack. SaveDC will return an int that refers to that call of SaveDC. The return value is a cookie or a context ID.

When you call RestoreDC to return back to a previous configuration, pass in the appropriate context ID and the DC will be returned to the state of the DC when you took the original snap-shot. You can call SaveDC multiple times. You do not have to call RestoreDC the same number of times. All of the saved snap-shots will be popped off the stack to reach the specified context ID you send in to RestoreDC.
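The push/pop behavior — including the detail that a single RestoreDC call pops every snapshot above the requested context ID — can be sketched with a plain stack. This is a model of the semantics only, with invented names, not the Win32 implementation.

```python
class DC:
    """Toy device context: 'state' is a dict of attributes; snapshots live on a stack."""
    def __init__(self, **state):
        self.state = dict(state)
        self._stack = []          # list of (context_id, saved_state), newest last
        self._next_id = 1

    def save(self):               # like ::SaveDC - returns a context ID
        cid = self._next_id
        self._next_id += 1
        self._stack.append((cid, dict(self.state)))
        return cid

    def restore(self, cid):       # like ::RestoreDC - pops down to the given ID
        while self._stack:
            saved_id, saved_state = self._stack.pop()
            if saved_id == cid:
                self.state = saved_state
                return True
        return False              # unknown context ID

dc = DC(brush="white", pen="black")
before_party = dc.save()
dc.state["brush"] = "red"
dc.save()                         # a deeper snapshot we never restore explicitly
dc.state["pen"] = "glitter"
dc.restore(before_party)          # pops both snapshots in one call
assert dc.state == {"brush": "white", "pen": "black"}
```

Note that, just as the article says, restoring the state does nothing about objects you created along the way: in the real API you still have to delete those GDI objects yourself.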
// Parents are leaving you home alone for the weekend
// Take a snap-shot of the house
int houseBeforeMomLeft = ::SaveDC(hDC);

// Start texting, get the word out "Party at my place"
// Get your cash and friend's older brother to buy "refreshments" for the party
::SelectObject(hDC, hBrush);
::SelectObject(hDC, hPen);
::SelectObject(hDC, hFont);
::SelectObject(hDC, hBmp);

// Get the party started, ...
// And oh yeah, the word spread a little too far, But what a party ...

// Cleanup is a snap, Mother will never know.
::RestoreDC(hDC, houseBeforeMomLeft);

// Parents return
return;

// Note: The previous scenario would be very creepy if you are 40 years old
//       and live in your mom's basement, rather than are a highschool teenager.

With a call to RestoreDC, only the state of the DC is restored. You will still be responsible to clean up any GDI objects that were created; otherwise you will have resource leaks in your application. If you call SaveDC many times, and never call RestoreDC, the saved states will persist until the specified DC is released.

I created a small demonstration application that shows many of the ways that a memory DC can be used. The memory DC is used to cache a bitmap to eliminate a repetitive chunk of paint code that produces the same image each time the paint routine is called. (Note: The application does not cache all of the images possible in order to still be able to demonstrate the value of a back-buffer.) The application uses a memory DC to show how a double-buffer paint process is implemented. I also demonstrate the process of compositing multiple bitmaps to create a single final image.

When I began writing the demonstration app, I was only after showing how a double-buffer paint scheme is implemented. That is a common use for memory DCs. However, when I was done with that phase, I wanted to make the flicker more apparent. I could get most of the flicker to disappear by making the window smaller on my screen.
So I added a few random rectangles to the paint process, as well as uses of Gradient Fill and Alpha Blend. The application will display two separate sets of random shapes. One side will have rectangles, the other side ellipses. A wiper will slide across the screen from left to right. As it slides to the right, the right-side area will be reduced and less of the random shapes will be visible for this side. The left side will enlarge revealing more of the random shapes. When the wiper slides all of the way to the right, only the original left group of shapes will remain. The wiper will then start on the left-side again, and a new set of random shapes will be revealed on the left-side of the wiper. There are a few interactive commands to change the behavior of the demonstration. To paint both halves of the display, I created two Win32 Regions, one for the left and one for the right. One at a time, I set each region as the Clipping Region for the DC which was drawn too, and all of the shapes for that region were drawn. After painting both sets of shapes, with the two clipping regions set, this is what the image would look like: Ironically, the wiper was the first concept for the demo. I had the idea of an animation of a gleaming highlight on a control. Somewhere along the way, it turned into what it is now; and to top it all off, I have optimized this portion of the demo by pre-generating the image mask for the wiper and storing it in a cached bitmap. This allowed me to demonstrate another use for the memory DC. Don't worry, with all of the shape drawing performed each frame, there is still plenty of flicker for the direct-paint method. I learned a lot developing the wiper. Unfortunately, most of it was not related to memory DCs in any form at all. Therefore, I must save this story for a different article. I will say that the final solution for the effect I was going for, went against my intuition. 
I then found myself researching deeper into GradientFill and AlphaBlend. I also have decided that there were no easy-to-find articles or samples that clearly explained how to use these functions beyond the most obvious usage. So I think I may have found my next topic.

The wiper that slides across the screen will appear in two colors to indicate the method for updating the display: Here is a close-up sample of the wiper blended on top of both sets of shapes for the final appearance of the demonstration app.

As I mentioned earlier in the article, the program is written using standard C/C++ access to Win32 API calls. I have included the project as a VS2008 project. For VS2010, you should be able to import and convert the project. For earlier versions of Visual Studio, you really should upgrade if possible.

I have separated most of the code that was written for demonstration into a separate file from the generated project called MemDcUsage.cpp. I have encapsulated all of the demonstration code in the article:: namespace to help people. Here is a sample from the main window procedure calling the functions written for the article:

...
// Handle Window Size changes.
case WM_SIZE:
case WM_SIZING:
    // Flush the back buffer,
    // The size of the image that needs to be painted has changed.
    article::FlushBackBuffer();

    // Allow the default processing to take care of the rest.
    return DefWindowProc(hWnd, message, wParam, lParam);

case WM_PAINT:
    {
    hdc = ::BeginPaint(hWnd, &ps);
    article::PaintAnimation(hWnd, hdc);
    ::EndPaint(hWnd, &ps);
    break;
    }
...

There are probably not any functions or files that you will be able to take away from the demonstration and pop into your own applications. However, there are plenty of code snippets that you should be able to morph into something useful in the context of your application. These chunks of code I have marked like this //!!! as well as a short comment that describes what you should be able to take from this.
Arch Panel Sheet

Arch User documentation

Description

This tool allows you to build a 2D sheet, including any number of Arch Panel Cut objects, or any other 2D object such as those made by the Draft Workbench and Sketcher Workbench. The Panel Sheet is typically made to lay out cuts to be made by a CNC machine. These sheets can then be exported to a DXF file. The above image shows how Panel Sheets appear when exported to DXF.

Usage

- Optionally, select one or more Arch Panel Cut objects or any other 2D object that lies on the XY plane.
- Press the Arch Panel Sheet button, or press the P then S keys.
- Adjust the desired properties.

Options

- After the panel sheet is created, with or without child objects, any other child object can be added/removed to/from the panel sheet by double-clicking it in the tree view and adding or removing objects from its Group folder.
- Double-clicking on the panel in the tree view also allows you to move the objects contained in this sheet, or move its tag.
- It is possible to automatically make panels composed of more than one sheet of a material, by raising its Sheets property.
- Panel Sheets can display a margin, which is useful to make sure a certain space is always present between inner objects and the border of the sheet.
- When Panel Sheets are exported to DXF, the outlines, inner holes, and tags of their inner children are placed on different layers, as shown in the above image.

Properties

Data

- DataHeight: The height of the sheet
- DataWidth: The width of the sheet
- DataFill Ratio: The percentage of the sheet area that is filled by cuts (automatic)
- DataTag Text: The text to display
- DataGrain Direction: This allows you to inform the main direction of the panel fiber (clockwise direction, 0° means up)

View

- ViewMargin: A margin that can be displayed inside the panel border
- ViewShow Margin: Turns the display of the margin on/off
- ViewShow Grain: Shows a fiber texture (Make Face must be set to True)

Scripting

See also: Arch API and FreeCAD Scripting Basics.

The Panel Sheet tool can be used in macros and from the Python console by using the following function:

Sheet = makePanelSheet(panels=[], name="PanelSheet")

- Creates a Sheet object from panels, which is a list of Arch Panel objects.

Example:

import FreeCAD, Draft, Arch

Rect = Draft.makeRectangle(500, 200)
Polygon = Draft.makePolygon(5, 750)

p1 = FreeCAD.Vector(1000, 0, 0)
p2 = FreeCAD.Vector(2000, 400, 0)
p3 = FreeCAD.Vector(1250, 800, 0)
Wire = Draft.makeWire([p1, p2, p3], closed=True)

Panel1 = Arch.makePanel(Rect, thickness=36)
Panel2 = Arch.makePanel(Polygon, thickness=36)
Panel3 = Arch.makePanel(Wire, thickness=36)
FreeCAD.ActiveDocument.recompute()

Cut1 = Arch.makePanelCut(Panel1)
Cut2 = Arch.makePanelCut(Panel2)
Cut3 = Arch.makePanelCut(Panel3)
Cut1.ViewObject.LineWidth = 3
Cut2.ViewObject.LineWidth = 3
Cut3.ViewObject.LineWidth = 3
FreeCAD.ActiveDocument.recompute()

Sheet = Arch.makePanelSheet([Cut1, Cut2, Cut3])

Tutorials
Metronome objects are used to synchronize blocks of musical material (i.e., by making sure they start playing together), e.g., for live coding applications. It is very hard otherwise (you need a steady hand, and luck) to start things together at the exact same time. Of course, Metronome objects can be used for other things (e.g., GUI animation).

A metronome has a tempo (e.g., 60 BPM) and a time signature (e.g., 4/4). A program may have several metronomes active at the same time.

Metronome is included in the music library, so you need the following in your program:

from music import *

Use the following function to create a Metronome object:

Once a Metronome, m, has been created, the following functions are available:
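The page leaves the API listing out, but the tempo concept itself is simple arithmetic: at T BPM a beat lasts 60/T seconds. The helpers below are a generic illustration of that relationship, not part of the music library's API:

```python
def beat_interval_seconds(bpm):
    """Seconds between successive beats at the given tempo."""
    if bpm <= 0:
        raise ValueError("tempo must be positive")
    return 60.0 / bpm

def measure_duration_seconds(bpm, beats_per_measure):
    """Length of one measure, e.g. 4 beats per measure for a 4/4 time signature."""
    return beats_per_measure * beat_interval_seconds(bpm)

# At 60 BPM a beat is one second; a 4/4 measure at 120 BPM lasts two seconds.
print(beat_interval_seconds(60))         # -> 1.0
print(measure_duration_seconds(120, 4))  # -> 2.0
```

This interval is what a metronome would sleep (or schedule callbacks) between ticks, which is why starting two blocks "on the beat" of a shared metronome makes them line up exactly.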
What I meant was that if you write <s:Button/> without declaring what s is (with xmlns:s="..." on that tag or on an enclosing tag) then you don't have valid XML. - Gordon -----Original Message----- From: David Francis Buhler [mailto:davidbuhler@gmail.com] Sent: Wednesday, March 14, 2012 1:55 PM To: flex-dev@incubator.apache.org Subject: Re: Flex and names spaces in MXML Omitting the xmlns attributes in MXML would make MXML valid XML, it just wouldn't prevent namespace collisions (because there is no namespace prefix). On Wed, Mar 14, 2012 at 4:47 PM, Justin Mclean <justin@classsoftware.com> wrote: > Hi, > >> Omitting the xmlns attributes in MXML would make MXML not be valid XML. > > That's probably an issue. > > Does anything currently require them to be 100% valid XML with namespaces (compared to well formed XML missing namespaces)? > > Thanks, > Justin
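Gordon's point — that `<s:Button/>` without an `xmlns:s` declaration on that tag or an enclosing tag is not valid XML — is easy to demonstrate with any namespace-aware parser. A quick check with Python's standard-library ElementTree (the namespace URI below is just a placeholder, not a real Flex namespace):

```python
import xml.etree.ElementTree as ET

# Undeclared prefix: the parser rejects the document ("unbound prefix").
try:
    ET.fromstring('<s:Button/>')
    prefix_ok = True
except ET.ParseError:
    prefix_ok = False

# Declaring the namespace on the tag (or an enclosing tag) makes it valid XML.
elem = ET.fromstring('<s:Button xmlns:s="urn:example:spark"/>')

print(prefix_ok)   # -> False
print(elem.tag)    # -> {urn:example:spark}Button
```

This is exactly the distinction raised in the thread: a document with undeclared prefixes can still be well-formed as plain XML, but it fails namespace processing, which most XML toolchains apply by default.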
Manually Building X and Beyond

The following are notes on manually building the latest git-hosted X.org on development DragonFly BSD. We try something ill-advised: we build a separate version of X.org in another directory, say /opt/xbeta, on the same machine where we have installed the latest pkgsrc. We build a newer, separate X.org so that we can test newer graphics hardware such as a Radeon HD4550 (r600), but we keep the older pkgsrc X.org because we like to use our faster test machine both for development and as our day-to-day workstation. The risk is that it is very easy to link the wrong libraries from pkgsrc into the new test libraries.

Disclaimer

The following notes describe alterations to one's system that may leave the graphical user interface based on X.org unusable. Use at one's own risk; we certainly provide no warranty and make no guarantee of fitness. Because the risk of damaging one's system is so great, and because there are other sites such as that do provide build scripts, we do not provide ready-to-run scripts. Despite the above warnings, the purpose of building a separate X.org in a separate directory is to hopefully limit possible damage.

Notes on pkgsrc and interactions with DragonFly BSD

GNU m4 1.4.14 and bison 2.4.2

Some packages have a bug where POSIX spawn definitions are incorrectly redefined for DragonFly BSD, so that forking subprocesses is broken. Patch both devel/m4 and devel/bison using the patch idea from: Unfortunately the problems are in the work directories, so one needs the knowledge from to even get the patches to be applied. Install pkgtools/pkgdiff for easier development of patches to pkgsrc work directories.

gstreamer 0.10 and a 128-bit division bug on gcc 4.1.2 for x86_64

The combination of gcc 4.1.2 on DragonFly BSD for x86_64 (amd64) appears to have a bug where 128-bit integer division is mistakenly optimized to internal functions that do not exist, causing link errors.
The current workaround is to either specify the alternate base gcc 4.4 compiler using CCVER=gcc44 or to manually patch the offending source code to ignore autoconf checks for types such as uint128_t or _uint128t. As of 2010-05-09, we find it necessary on x86_64 to use the following patch for pkgsrc gstreamer0.10, an eventual dependency of the full xfce desktop:

---

See for example the following bug report for more details:

Manually building new X.org

Mesa

As of 2010-05-09, there is a known problem with GNU m4 pkgsrc on DragonFly. At one time we patched mesa configure to substitute in the version; however, we now believe it is better to patch pkgsrc GNU m4 as described above.

C99 fpclassify()

At one time a patch similar to below was required, but now it appears to have been patched in the upstream source. fpclassify() is a function introduced in C99. Detecting reliably which features of C99 are supported is always an adventure because few OSes if any implement all of this standard.

 #elif defined(__APPLE__) || defined(__CYGWIN__) || defined(__FreeBSD__) || \
-    (defined(__sun) && defined(__C99FEATURES__))
+    defined(__DragonFly__) || (defined(__sun) && defined(__C99FEATURES__))
 /* fpclassify is available. */

xserver

If one uses the patches alluded to in the pkgsrc section, one can avoid the following build error:

YACC parser.c
bison: m4 subprocess failed: Operation not permitted
gmake[3]: *** [parser.c] Error 1

Developer documentation appears to now be enabled by default; therefore, if one is bootstrapping a new tree, one might want to use the option to autogen.sh to not build developer documentation:
#endif #endif +/* Graphics Image Formats libpng version 1.4.2 can be installed using autogen.sh and then configure, in contrast to the X.org projects that pass parameters from autogen.sh to run configure automatically. Independent JPEG Group's library version 8b can be installed using configure alone (there is no autogen.sh). LibTIFF has a recommended dependency on jpeg. LibTIFF version 3.9.2 can be installed using autogen.sh and then configure. Running gmake check after building any of the above three projects appears to produce passes tests. XML libxml2 autogen.sh calls configure 11 tests fail after gmake check Cairo Seems to require export LDFLAGS="-L${PREFIX}/lib -Wl,-rpath -Wl,${PREFIX}/lib -L${PREFIX}/lib/X11 -Wl,-rpath -Wl,${PREFIX}/lib/X11" export CPPFLAGS="-I${PREFIX}/include -I${PREFIX}/X11/include" When using gcc 4.1.2, gmake check gives: LINK check-link ./.libs/libcairo.so: undefined reference to `__umodti3' ./.libs/libcairo.so: undefined reference to `__udivti3' When using CCVER=gcc44 to force gcc 4.4, one obtains the worse: CHECK cairo.h cc1: error: unrecognized command line option "-Wlogical-op" CHECK cairo-deprecated.h cc1: error: unrecognized command line option "-Wlogical-op" ... The problem is 128-bit division using gcc 4.1.2 as discussed earlier. One can examine src/cairo-wideint.c and see the telltale code enclosed in an if test for HAVE_UINT128_T with 128-bit division if defined. Pango Install gnome-common first, required. If Cairo is not built with CCVER=gcc44, obtain build error similar to CCLD pango-view /opt/xcatch/lib/libcairo.so: undefined reference to `__umodti3' /opt/xcatch/lib/libcairo.so: undefined reference to `__udivti3' gmake[3]: *** [pango-view] Error 1 Following patch corrects an API change not propagated to the test for gmake check Not including patch itself because there are tabs. 
But edit file pango/pangoft2.def and change pango_fc_font_create_metrics_for_context to pango_fc_font_create_base_metrics_for_context GObject Introspection cElementTree is changed to ElementTree in files giscanner/girparser.py, giscanner/glibtransformer.py, and giscanner/scannermain.py. Using pkg_alternatives for a python wrapper fails as a script is generated with #! /usr/pkg/bin/python import os import sys A shell script cannot use another shell script as its interpreter. What works for now is to manually create /usr/pkg/bin/python as a symlink to /usr/pkg/bin/python2.6. gtk+ gobject-introspection is listed as a dependency but can be avoided. Use option --disable-introspection to avoid an error checking for gobject-introspection... yes ./configure: ${INTROSPECTION_GIRDIR/...}: Bad substitution
Image Credits: Karol Majek. Check out his YOLO v3 real time detection video here

This is Part 3 of the tutorial on implementing a YOLO v3 detector from scratch. In the last part, we implemented the layers used in YOLO's architecture, and in this part, we are going to implement the network architecture of YOLO in PyTorch, so that we can produce an output given an image. Our objective will be to design the forward pass of the network.

The code for this tutorial is designed to run on Python 3.5, and PyTorch 0.4. It can be found in its entirety at this Github repo.

This tutorial is broken into 5 parts:

Part 1 : Understanding How YOLO works
Part 2 : Creating the layers of the network architecture
Part 3 (This one): Implementing the forward pass of the network
Part 4 : Objectness Confidence Thresholding and Non-maximum Suppression
Part 5 : Designing the input and the output pipelines

Prerequisites

- Part 1 and Part 2 of the tutorial.
- Basic working knowledge of PyTorch, including how to create custom architectures with the nn.Module, nn.Sequential and torch.nn.Parameter classes.
- Working with images in PyTorch

Defining The Network

As I've pointed out earlier, we use the nn.Module class to build custom architectures in PyTorch. Let us define a network for our detector. In the darknet.py file, we add the following class.

class Darknet(nn.Module):
    def __init__(self, cfgfile):
        super(Darknet, self).__init__()
        self.blocks = parse_cfg(cfgfile)
        self.net_info, self.module_list = create_modules(self.blocks)

Here, we have subclassed the nn.Module class and named our class Darknet. We initialize the network with the members blocks, net_info and module_list.

Implementing the forward pass of the network

The forward pass of the network is implemented by overriding the forward method of the nn.Module class.

forward serves two purposes.
First, to calculate the output, and second, to transform the output detection feature maps in a way that they can be processed more easily (such as transforming them so that detection maps across multiple scales can be concatenated, which otherwise isn't possible as they are of different dimensions).

def forward(self, x, CUDA):
    modules = self.blocks[1:]
    outputs = {}   #We cache the outputs for the route layer

forward takes three arguments, self, the input x and CUDA, which if true, would use GPU to accelerate the forward pass.

Here, we iterate over self.blocks[1:] instead of self.blocks since the first element of self.blocks is a net block which isn't a part of the forward pass.

Since route and shortcut layers need output maps from previous layers, we cache the output feature maps of every layer in a dict outputs. The keys are the indices of the layers, and the values are the feature maps.

As was the case with the create_modules function, we now iterate over module_list which contains the modules of the network. The thing to notice here is that the modules have been appended in the same order as they are present in the configuration file. This means, we can simply run our input through each module to get our output.

    write = 0     #This is explained a bit later
    for i, module in enumerate(modules):
        module_type = (module["type"])

Convolutional and Upsample Layers

If the module is a convolutional or upsample module, this is how the forward pass should work.

        if module_type == "convolutional" or module_type == "upsample":
            x = self.module_list[i](x)

Route Layer / Shortcut Layer

If you look at the code for the route layer, we have to account for two cases (as described in part 2). For the case in which we have to concatenate two feature maps, we use the torch.cat function with the second argument as 1. This is because we want to concatenate the feature maps along the depth. (In PyTorch, the input and output of a convolutional layer has the format `B X C X H X W`.
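The control flow of this loop — run each module, cache every output, and let route/shortcut layers reach back into the cache — can be illustrated with plain Python, using numbers in place of tensors and dicts in place of modules. This is a toy model of the pattern, not the actual Darknet code:

```python
# Toy model of the Darknet forward loop: each "module" is a dict, each
# "feature map" is just a number.
def toy_forward(x, modules):
    outputs = {}                         # cache, keyed by layer index
    for i, module in enumerate(modules):
        kind = module["type"]
        if kind == "convolutional":
            x = module["fn"](x)          # ordinary layer: transform the input
        elif kind == "shortcut":         # add an earlier output to the previous one
            x = outputs[i - 1] + outputs[i + module["from"]]
        elif kind == "route":            # re-route an earlier output forward
            x = outputs[i + module["layers"][0]]
        outputs[i] = x                   # cache this layer's output
    return x

modules = [
    {"type": "convolutional", "fn": lambda v: v * 2},   # 3 -> 6
    {"type": "convolutional", "fn": lambda v: v + 1},   # 6 -> 7
    {"type": "shortcut", "from": -2},                   # 7 + 6 = 13
    {"type": "route", "layers": [-3]},                  # back to layer 0's output: 6
]
print(toy_forward(3, modules))   # -> 6
```

The negative indices resolve exactly as in the real code: a layer index relative to the current position, looked up in the outputs cache.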
The depth corresponds to the channel dimension.)

        elif module_type == "route":
            layers = module["layers"]
            layers = [int(a) for a in layers]

            if (layers[0]) > 0:
                layers[0] = layers[0] - i

            if len(layers) == 1:
                x = outputs[i + (layers[0])]

            else:
                if (layers[1]) > 0:
                    layers[1] = layers[1] - i

                map1 = outputs[i + layers[0]]
                map2 = outputs[i + layers[1]]

                x = torch.cat((map1, map2), 1)

        elif module_type == "shortcut":
            from_ = int(module["from"])
            x = outputs[i-1] + outputs[i+from_]

YOLO (Detection Layer)

The output of YOLO is a convolutional feature map that contains the bounding box attributes along the depth of the feature map. The attributes of the bounding boxes predicted by a cell are stacked one by one along each other. So, if you have to access the second bounding box of the cell at (5,6), then you will have to index it by map[5,6, (5+C): 2*(5+C)]. This form is very inconvenient for output processing such as thresholding by object confidence, adding grid offsets to centers, applying anchors etc.

Another problem is that since detections happen at three scales, the dimensions of the prediction maps will be different. Although the dimensions of the three feature maps are different, the output processing operations to be done on them are similar. It would be nice to do these operations on a single tensor, rather than three separate tensors.

To remedy these problems, we introduce the function predict_transform.

Transforming the output

The function predict_transform lives in the file util.py and we will import the function when we use it in the forward of the Darknet class.
Add the imports to the top of util.py

from __future__ import division

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import cv2

predict_transform takes in 5 parameters: prediction (our output), inp_dim (input image dimension), anchors, num_classes, and an optional CUDA flag

def predict_transform(prediction, inp_dim, anchors, num_classes, CUDA = True):

The predict_transform function takes a detection feature map and turns it into a 2-D tensor, where each row of the tensor corresponds to attributes of a bounding box, in the following order.

Here's the code to do the above transformation.

    batch_size = prediction.size(0)
    stride = inp_dim // prediction.size(2)
    grid_size = inp_dim // stride
    bbox_attrs = 5 + num_classes
    num_anchors = len(anchors)

    prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)
    prediction = prediction.transpose(1,2).contiguous()
    prediction = prediction.view(batch_size, grid_size*grid_size*num_anchors, bbox_attrs)

The dimensions of the anchors are in accordance with the height and width attributes of the net block. These attributes describe the dimensions of the input image, which is larger (by a factor of stride) than the detection map. Therefore, we must divide the anchors by the stride of the detection feature map.

    anchors = [(a[0]/stride, a[1]/stride) for a in anchors]

Now, we need to transform our output according to the equations we discussed in Part 1.

Sigmoid the x,y coordinates and the objectness score.

    #Sigmoid the centre_X, centre_Y, and object confidence
    prediction[:,:,0] = torch.sigmoid(prediction[:,:,0])
    prediction[:,:,1] = torch.sigmoid(prediction[:,:,1])
    prediction[:,:,4] = torch.sigmoid(prediction[:,:,4])

Add the grid offsets to the center coordinates prediction.
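The bookkeeping at the top of predict_transform is plain integer arithmetic, and it is worth sanity-checking with YOLO v3's usual numbers (416×416 input, 80 COCO classes, 3 anchors per scale). The helper below mirrors those few lines with ordinary integers:

```python
def detection_shape(inp_dim, det_size, num_classes, num_anchors):
    """Mirror predict_transform's size bookkeeping with plain integers.

    det_size is the spatial size of the detection feature map, e.g. 13.
    Returns (stride, grid_size, bbox_attrs, rows), where rows is the number
    of bounding boxes this scale contributes after the reshape."""
    stride = inp_dim // det_size
    grid_size = inp_dim // stride
    bbox_attrs = 5 + num_classes            # 4 box coords + objectness + class scores
    rows = grid_size * grid_size * num_anchors
    return stride, grid_size, bbox_attrs, rows

# The coarsest YOLO v3 scale on a 416x416 input:
print(detection_shape(416, 13, 80, 3))   # -> (32, 13, 85, 507)
```

So the 13×13 map alone becomes a 507 × 85 table per image; the finer 26×26 and 52×52 maps are reshaped the same way before concatenation.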
    #Add the center offsets
    grid = np.arange(grid_size)
    a,b = np.meshgrid(grid, grid)

    x_offset = torch.FloatTensor(a).view(-1,1)
    y_offset = torch.FloatTensor(b).view(-1,1)

    if CUDA:
        x_offset = x_offset.cuda()
        y_offset = y_offset.cuda()

    x_y_offset = torch.cat((x_offset, y_offset), 1).repeat(1,num_anchors).view(-1,2).unsqueeze(0)

    prediction[:,:,:2] += x_y_offset

Apply the anchors to the dimensions of the bounding box.

    #log space transform height and the width
    anchors = torch.FloatTensor(anchors)

    if CUDA:
        anchors = anchors.cuda()

    anchors = anchors.repeat(grid_size*grid_size, 1).unsqueeze(0)
    prediction[:,:,2:4] = torch.exp(prediction[:,:,2:4])*anchors

Apply sigmoid activation to the class scores

    prediction[:,:,5: 5 + num_classes] = torch.sigmoid((prediction[:,:, 5 : 5 + num_classes]))

The last thing we want to do here is to resize the detections map to the size of the input image. The bounding box attributes here are sized according to the feature map (say, 13 x 13). If the input image was 416 x 416, we multiply the attributes by 32, or the stride variable.

    prediction[:,:,:4] *= stride

That concludes the function body. Return the predictions at the end of the function.

    return prediction

Detection Layer Revisited

Now that we have transformed our output tensors, we can concatenate the detection maps at three different scales into one big tensor. Notice this was not possible prior to our transformation, as one cannot concatenate feature maps having different spatial dimensions. But now, since our output tensor acts merely as a table with bounding boxes as its rows, concatenation is very much possible.

An obstacle in our way is that we cannot initialize an empty tensor, and then concatenate a non-empty (of different shape) tensor to it. So, we delay the initialization of the collector (the tensor that holds the detections) until we get our first detection map, and then concatenate maps to it when we get subsequent detections.
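The three transformations applied above — sigmoid-plus-offset for the center, exponential times anchor for the size, then scaling by stride — can be checked on a single cell with ordinary floats. This is the math of the equations from Part 1, not the tensor code itself; the anchor used below (116×90 pixels) is one of YOLO v3's standard anchors for the stride-32 scale:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, stride):
    """Decode one raw prediction for the cell at grid offset (cx, cy).

    The anchor is given in detection-map units (already divided by the
    stride, as predict_transform does above)."""
    bx = (sigmoid(tx) + cx) * stride
    by = (sigmoid(ty) + cy) * stride
    bw = anchor_w * math.exp(tw) * stride
    bh = anchor_h * math.exp(th) * stride
    return bx, by, bw, bh

# Raw zeros land in the middle of the cell, with the box size equal to the anchor:
bx, by, bw, bh = decode_box(0, 0, 0, 0, cx=5, cy=6,
                            anchor_w=116/32, anchor_h=90/32, stride=32)
print(bx, by)   # -> 176.0 208.0  (cell (5,6) plus half a cell, times stride 32)
print(bw, bh)   # -> 116.0 90.0   (the anchor recovered in input-image pixels)
```

Since sigmoid keeps the offset in (0, 1), the decoded center can never leave the predicting cell — which is exactly why the grid offsets have to be added before scaling by the stride.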
Notice the write = 0 line just before the loop in the function forward. The write flag is used to indicate whether we have encountered the first detection or not. If write is 0, it means the collector hasn't been initialized. If it is 1, it means that the collector has been initialized and we can just concatenate our detection maps to it.

Now that we have armed ourselves with the predict_transform function, we write the code for handling detection feature maps in the forward function.

At the top of your darknet.py file, add the following import.

from util import *

Then, in the forward function.

        elif module_type == 'yolo':
            anchors = self.module_list[i][0].anchors

            #Get the input dimensions
            inp_dim = int (self.net_info["height"])

            #Get the number of classes
            num_classes = int (module["classes"])

            #Transform
            x = x.data
            x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)

            if not write:              #if no collector has been initialised.
                detections = x
                write = 1
            else:
                detections = torch.cat((detections, x), 1)

        outputs[i] = x

Now, simply return the detections.

    return detections

Testing the forward pass

Here's a function that creates a dummy input. We will pass this input to our network. Before we write this function, save this image into your working directory. If you're on linux, then type:

wget

Now, define the function at the top of your darknet.py file as follows:

def get_test_input():
    img = cv2.imread("dog-cycle-car.png")
    img = cv2.resize(img, (416,416))          #Resize to the input dimension
    img_ = img[:,:,::-1].transpose((2,0,1))   # BGR -> RGB | H X W C -> C X H X W
    img_ = img_[np.newaxis,:,:,:]/255.0       #Add a channel at 0 (for batch) | Normalise
    img_ = torch.from_numpy(img_).float()     #Convert to float
    img_ = Variable(img_)                     # Convert to Variable
    return img_

Then, we type the following code:

model = Darknet("cfg/yolov3.cfg")
inp = get_test_input()
pred = model(inp, torch.cuda.is_available())
print (pred)

You will see an output like:

( 0 ,.,.) =
   16.0962   17.0541   91.5104  ...
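The index gymnastics in get_test_input — [:,:,::-1] to flip BGR to RGB, then a transpose to channel-first layout — can be replicated on a tiny nested-list "image" to see exactly what moves where. This is a pure-Python illustration of the same reordering, not the NumPy code itself:

```python
def bgr_hwc_to_rgb_chw(img):
    """img is an H x W x C nested list with C = 3 in BGR order.

    Returns a C x H x W nested list with channels reordered to RGB
    and values normalised to [0, 1], mirroring img[:,:,::-1].transpose((2,0,1))/255."""
    H, W = len(img), len(img[0])
    rgb_order = (2, 1, 0)                     # read the BGR channels in RGB order
    return [[[img[y][x][c] / 255.0 for x in range(W)]
             for y in range(H)]
            for c in rgb_order]

# A 1x2 image: one pure-blue pixel, one pure-red pixel (BGR order).
img = [[[255, 0, 0], [0, 0, 255]]]
chw = bgr_hwc_to_rgb_chw(img)
print(chw[0])   # red channel plane  -> [[0.0, 1.0]]
print(chw[2])   # blue channel plane -> [[1.0, 0.0]]
```

Wrapping the result in one more list is the np.newaxis step: it adds the batch dimension, giving the 1 x C x H x W layout PyTorch's convolutional layers expect.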
    0.4336    0.4692    0.5279
   15.1363   15.2568  166.0840  ...    0.5561    0.5414    0.5318
   14.4763   18.5405  409.4371  ...    0.5908    0.5353    0.4979
                ⋱
  411.2625  412.0660    9.0127  ...    0.5054    0.4662    0.5043
  412.1762  412.4936   16.0449  ...    0.4815    0.4979    0.4582
  412.1629  411.4338   34.9027  ...    0.4306    0.5462    0.4138
[torch.FloatTensor of size 1x10647x85]

The shape of this tensor is 1 x 10647 x 85. The first dimension is the batch size, which is simply 1 because we have used a single image. For each image in a batch, we have a 10647 x 85 table. Each row of this table represents a bounding box. (4 bbox attributes, 1 objectness score, and 80 class scores)

At this point, our network has random weights, and will not produce the correct output. We need to load a weight file in our network. We'll be making use of the official weight file for this purpose.

Downloading the Pre-trained Weights

Download the weights file into your detector directory. Grab the weights file from here. Or if you're on linux,

wget

Understanding the Weights File

The official weights file is a binary file that contains weights stored in a serial fashion. Extreme care must be taken to read the weights. The weights are just stored as floats, with nothing to guide us as to which layer they belong to. If you screw up, there's nothing stopping you from, say, loading the weights of a batch norm layer into those of a convolutional layer. Since you're reading only floats, there's no way to discriminate between which weight belongs to which layer. Hence, we must understand how the weights are stored.

First, the weights belong to only two types of layers, either a batch norm layer or a convolutional layer. The weights for these layers are stored exactly in the same order as they appear in the configuration file. So, if a convolutional block is followed by a shortcut block, and then the shortcut block by another convolutional block, you will expect the file to contain the weights of the previous convolutional block, followed by those of the latter.
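The 10647 rows are fully determined by the three detection scales: on a 416×416 input, the strides 32, 16 and 8 give 13×13, 26×26 and 52×52 grids, each cell predicting 3 boxes. A quick check of that count:

```python
def total_boxes(inp_dim, strides, anchors_per_scale):
    """Number of rows in the concatenated detection tensor."""
    return sum((inp_dim // s) ** 2 * anchors_per_scale for s in strides)

# 13*13*3 + 26*26*3 + 52*52*3 = 507 + 2028 + 8112
print(total_boxes(416, [32, 16, 8], 3))   # -> 10647
```

This is also a handy sanity check after changing the input resolution: a 608×608 input, for instance, yields a different (larger) row count, while the 85 attribute columns stay fixed.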
When a batch norm layer appears in a convolutional block, there are no biases. However, when there's no batch norm layer, the bias "weights" have to be read from the file.

The following diagram sums up how the weights file stores the weights.

Let us write a function called load_weights. It will be a member function of the Darknet class. It'll take one argument other than self, the path of the weights file.

def load_weights(self, weightfile):

The first 20 bytes of the weights file store 5 int32 values which constitute the header of the file.

    #Open the weights file
    fp = open(weightfile, "rb")

    #The first 5 values are header information
    # 1. Major version number
    # 2. Minor Version Number
    # 3. Subversion number
    # 4,5. Images seen by the network (during training)
    header = np.fromfile(fp, dtype = np.int32, count = 5)
    self.header = torch.from_numpy(header)
    self.seen = self.header[3]

The rest of the file represents the weights, in the order described above. The weights are stored as float32, or 32-bit floats. Let's load the rest of the weights into a np.ndarray.

    weights = np.fromfile(fp, dtype = np.float32)

Now, we iterate over the weights file, and load the weights into the modules of our network.

    ptr = 0
    for i in range(len(self.module_list)):
        module_type = self.blocks[i + 1]["type"]

        #If module_type is convolutional load weights
        #Otherwise ignore.

Inside the loop, we first check whether the convolutional block has batch_normalize True or not. Based on that, we load the weights.

        if module_type == "convolutional":
            model = self.module_list[i]
            try:
                batch_normalize = int(self.blocks[i+1]["batch_normalize"])
            except:
                batch_normalize = 0

            conv = model[0]

We keep a variable called ptr to keep track of where we are in the weights array. Now, if batch_normalize is True, we load the weights as follows.
if (batch_normalize): bn = model[1] #Get the number of weights of Batch Norm Layer num_bn_biases = bn.bias.numel() #Load the weights bn_biases = torch.from_numpy(weights[ptr:ptr + num_bn_biases]) ptr += num_bn_biases bn_weights = torch.from_numpy(weights[ptr: ptr + num_bn_biases]) ptr += num_bn_biases bn_running_mean = torch.from_numpy(weights[ptr: ptr + num_bn_biases]) ptr += num_bn_biases bn_running_var = torch.from_numpy(weights[ptr: ptr + num_bn_biases]) ptr += num_bn_biases #Cast the loaded weights into dims of model weights. bn_biases = bn_biases.view_as(bn.bias.data) bn_weights = bn_weights.view_as(bn.weight.data) bn_running_mean = bn_running_mean.view_as(bn.running_mean) bn_running_var = bn_running_var.view_as(bn.running_var) #Copy the data to model bn.bias.data.copy_(bn_biases) bn.weight.data.copy_(bn_weights) bn.running_mean.copy_(bn_running_mean) bn.running_var.copy_(bn_running_var) If batch_norm is not true, simply load the biases of the convolutional layer. else: #Number of biases num_biases = conv.bias.numel() #Load the weights conv_biases = torch.from_numpy(weights[ptr: ptr + num_biases]) ptr = ptr + num_biases #reshape the loaded weights according to the dims of the model weights conv_biases = conv_biases.view_as(conv.bias.data) #Finally copy the data conv.bias.data.copy_(conv_biases) Finally, we load the convolutional layer's weights at last. #Let us load the weights for the Convolutional layers num_weights = conv.weight.numel() #Do the same as above for weights conv_weights = torch.from_numpy(weights[ptr:ptr+num_weights]) ptr = ptr + num_weights conv_weights = conv_weights.view_as(conv.weight.data) conv.weight.data.copy_(conv_weights) We're done with this function and you can now load weights in your Darknet object by calling the load_weights function on the darknet object. 
model = Darknet("cfg/yolov3.cfg")
model.load_weights("yolov3.weights")

That is all for this part. With our model built and weights loaded, we can finally start detecting objects. In the next part, we will cover the use of objectness confidence thresholding and non-maximum suppression to produce our final set of detections.
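As a supplement (not part of the original tutorial), the "serial floats plus a moving ptr cursor" format that load_weights walks can be exercised in isolation. This sketch uses made-up numbers rather than the real yolov3.weights: it packs a 5-int32 header followed by float32 values, then reads them back the same way the loader does.

```python
import struct
from array import array

# Build a fake "weights file" in memory: 5 int32 header values,
# then 6 float32 weights (e.g. 4 batch-norm biases + 2 conv weights).
header = [0, 2, 0, 32013312, 0]          # major, minor, subversion, seen, pad
weights = [0.5, -1.25, 3.0, 0.0, 2.5, -0.75]
blob = struct.pack("<5i", *header) + struct.pack(f"<{len(weights)}f", *weights)

# Read it back the way load_weights does: header first, then a flat
# float array consumed by advancing a ptr cursor.
hdr = struct.unpack_from("<5i", blob, 0)
flat = array("f", blob[struct.calcsize("<5i"):])

ptr = 0
num_bn_biases = 4                        # pretend the first layer has 4 BN biases
bn_biases = flat[ptr:ptr + num_bn_biases]
ptr += num_bn_biases
conv_weights = flat[ptr:ptr + 2]         # then 2 conv weights
ptr += 2

print(hdr[3])            # images seen during training
print(list(bn_biases))   # [0.5, -1.25, 3.0, 0.0]
print(ptr)               # 6 -> whole file consumed
```

The real loader does exactly this walk, except the chunk sizes come from each module's numel().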
https://blog.paperspace.com/how-to-implement-a-yolo-v3-object-detector-from-scratch-in-pytorch-part-3/
I remember starting out with Ruby and feeling intimidated by the idea of using threads. I hadn't come across them in any of the Ruby code I'd seen so far, and didn't MRI's Global Interpreter Lock mean that writing threaded code would yield marginal benefits anyway? Well, it turns out that writing threaded code in Ruby need not be scary, and there are use cases where leveraging threads can make your code more performant without adding a lot of additional complexity.

Before going any further I should mention that a resource I found super useful when leveling up my knowledge of Ruby threading was Jesse Storimer's excellent eBook Working with Ruby Threads - I can't recommend it enough.

The Global Interpreter Lock

In MRI the Global Interpreter Lock (GIL) prevents more than one thread from executing Ruby code at any given time within a single process. Note that this is not the case for JRuby or Rubinius which do not have a GIL and offer true multi-threading. My understanding is limited here but as I see it the existence of the GIL provides some guarantees and removes certain issues around concurrency within MRI. It's important to note however that even with the GIL it's very possible to write code which isn't threadsafe in MRI.

So when does using threads make sense?

To ensure all citizens are treated fairly the underlying operating system handles context switching between threads, i.e. when to pause execution of one thread and start or resume execution of another thread. We said above that the GIL prevents multiple Ruby threads within a process from executing concurrently, but a typical Ruby program will spend a significant amount of time not executing Ruby code, specifically waiting on blocking I/O. This could be waiting for HTTP requests, database queries or file system operations to complete. While waiting for these operations to complete, the GIL will allow another thread to execute. We can take advantage of this and perform other work, which doesn't depend on the result, while we wait for the I/O to finish.
Some examples of Ruby projects you may already know which make use of threads for performance reasons are the Puma web server and Bundler. An Example: Performing HTTP requests concurrently At thoughtbot we have a simple search service which takes a search term and searches across a few other internal services and presents the results in a segmented way. The searching is performed via API calls over HTTPS to the various services. The results from one service don’t in any way affect the results from another service, but we were performing the searches serially, waiting for each one to complete before starting the next. This meant that, at a minimum, our response times would be the combined API response times from the different services plus time spent handling the incoming request, stitching together the results and returning a response to the end user. This seemed like a great case for multi-threading, performing the various HTTP requests concurrently. We still need them all to complete before returning the results to the user but our wait time should now become the length of time taken for the slowest API request to complete. At a high level this should get our timeline waiting for API requests from looking like this: to this: The API request dispatching happens in this gather_search_results method: def gather_result_sets search_services.map do |name, search| ResultSet.new(name, search.results) end end This iterates over each of our search services, calling search.results (which is where the API request happens) and the method returns a list of ResultSet instances, a thin wrapper around the search results. It’s a pretty small change to spawn a new Thread for each search: def gather_result_sets search_services.map do |name, search| Thread.new { ResultSet.new(name, search.results) } end.map(&:value) end Calling Thread.new with a block creates a new thread separate from the main thread’s execution and executes the passed block in that new thread. 
(That’s right, in Ruby we’re always executing in the context of a thread. Generally, unless we’re creating our own child threads we’re in the context of the main thread, which we can access using Thread.main. We can also access the current context using Thread.current.) Our first map returns a list of Thread instances, which we then map over calling #value on each. The #value method does two things. First it causes the main thread to wait for our child thread to complete using #join. Creating threads without calling join on them will cause the main thread to continue without waiting and possibly exit before the child threads have finished, killing them in the process. Secondly #value returns the result (return value) of the block, or raises the exception (if any) which caused the thread to terminate. Benchmarking I ran a benchmark to compare the single threaded version to the multi-threaded version. This shows a good speed up (comparing the “real” column): user system total real single threaded 0.050000 0.000000 0.050000 ( 2.733866) multi threaded 0.050000 0.010000 0.060000 ( 1.193998) As we predicted before, the overall time of the multi-threaded version correlates with the time taken for the slowest of the underlying services to return. Summing Up I hope this example has illustrated that multithreading in Ruby MRI using the Thread class can be a simple and effective way of improving the performance of I/O bound code in some scenarios. The example we looked at was ideal in that the work of each thread was completely independent. This is definitely a scenario that I’ve come across a bunch of times, and for me the trade off between added complexity and the performance benefits are worth it. Because the work of each thread was independent we didn’t have to worry about synchronizing anything across threads. 
Sometimes this does become necessary, for example when different threads must access a shared resource, and we have to use tools such as the Mutex class to ensure that a context switch doesn’t happen while executing a given block of code. This is where the complexity (and fun!) of writing multi-threaded code can increase.
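To illustrate both patterns from this post — collecting results with Thread#value and guarding shared state with a Mutex — here is a small runnable sketch. The service names and the sleeps are invented stand-ins for the real blocking API calls:

```ruby
require "benchmark"

mutex = Mutex.new
completed = []   # shared state, mutated from several threads
results = nil

elapsed = Benchmark.realtime do
  # Three fake "API calls", each blocking for 0.2s. Because the threads
  # sleep (i.e. wait on I/O) concurrently, total wall time is roughly
  # the slowest call (~0.2s), not the sum (~0.6s).
  results = [:users, :posts, :comments].map do |name|
    Thread.new do
      sleep 0.2                                 # stands in for blocking I/O
      mutex.synchronize { completed << name }   # shared mutation needs a Mutex
      "#{name}-results"
    end
  end.map(&:value)
end

puts results.inspect
puts format("%.2fs", elapsed)
```

Note that map preserves order, so results comes back in the order the threads were created, regardless of which finished first.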
https://thoughtbot.com/blog/untangling-ruby-threads?utm_source=rubyweekly&utm_medium=email
See this example:

    def hello(a: String, b: String) = println(a + ":" + b)

    val m1 = hello("aaa", _ )   // error: missing parameter type
    m1("bbb")

Annotating the placeholder fixes it:

    val m1 = hello("aaa", _: String)

Why can't Scala infer the type of the placeholder here, when hello already declares both parameters as String?

Scala's type inference is flow based. Methods and functions need explicit parameter types, which are used to infer other types. The parameter types cannot be inferred from the method or function body. Sometimes, however, the parameter types are known from external context, and then don't have to be labeled. Two examples:

    val f: String => Unit = hello("aaa", _)
    val s = Seq(1,2).map(_+1) // Seq[Int].map expects a function of Int argument type

Below is a quote from Martin Odersky about the limitations of Scala's type inference compared to, say, ML and Haskell. Challenges include Scala's overloading, record selection, and subtyping, as well as the need to keep things simple.

Source: comment under post Universal Type Inference is a Bad Thing.
https://codedump.io/share/bWzUEowWOLJw/1/why-scala-can39t-infer-the-type-in-a-partial-method
class CookieServiceProvider extends ServiceProvider (View source) Create a new service provider instance. Register the service provider. Merge the given configuration with the existing configuration. Load the given routes file if routes are not already cached. Register a view file namespace. Register a translation file namespace. Register a JSON translation file path. Register a database migration path. Register paths to be published by the publish command. Ensure the publish array for the service provider is initialized. Add a publish group / tag to the service provider. Get the paths to publish. Get the paths for the provider or group (or both). Get the paths for the provider and group. Get the service providers available for publishing. Get the groups available for publishing. Register the package's custom Artisan commands. Get the services provided by the provider. Get the events that trigger this service provider to register. Determine if the provider is deferred. © Taylor Otwell Licensed under the MIT License. Laravel is a trademark of Taylor Otwell.
https://docs.w3cub.com/laravel~5.8/api/5.8/illuminate/cookie/cookieserviceprovider
A base class for I/O IP raw socket communication.

#include <io_ip_socket.hh>

Each protocol 'registers' for I/O and gets assigned one object of this class.

Constructor for a given address family and protocol.

Helper method to close a single socket. Does not remove from socket map structure. Deletes memory associated with 'fd'.

Enable/disable the "Header Included" option (for IPv4) on the outgoing protocol socket. If enabled, the IP header of a raw packet should be created by the application itself, otherwise the kernel will build it. Note: used only for IPv4. In RFC-3542, IPV6_PKTINFO has similar functions, but because it requires the interface index and outgoing address, it is of little use for our purpose. Also, in RFC-2292 this option was a flag, so for compatibility reasons we better not set it here; instead, we will use sendmsg() to specify the header's field values.

Enable/disable multicast loopback when transmitting multicast packets. If the multicast loopback is enabled, a transmitted multicast packet will be delivered back to this host (assuming the host is a member of the same multicast group).

Enable/disable receiving information about a packet received on the incoming protocol socket. If enabled, values such as interface index, destination address and IP TTL (a.k.a. hop-limit in IPv6), and hop-by-hop options will be received as well. Returns XORP_ERROR on error.
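Outside of XORP, these two options map onto the standard socket options IP_HDRINCL and IP_MULTICAST_LOOP. As an illustrative sketch (plain sockets, not XORP code), the loopback flag can be toggled from Python on an ordinary UDP socket, which needs no special privileges; IP_HDRINCL, by contrast, only applies to SOCK_RAW sockets, which require root:

```python
import socket

# Multicast loopback controls whether our own multicast sends are
# delivered back to this host. It is togglable on any IPv4 datagram socket.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
on = s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)

s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)
off = s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP)

print(on, off)

# IP_HDRINCL ("header included") would be set the same way, but only on a
# raw socket, which needs CAP_NET_RAW / root:
#   raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, proto)
#   raw.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
s.close()
```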
http://xorp.org/releases/current/docs/kdoc/html/classIoIpSocket.html
This is just a write up of an answer to a question on the Dynamic Data forum, so here's the solution. First of all we need an attribute class: [AttributeUsage(AttributeTargets.Class)] public class HideTableInDefaultAttribute : Attribute { public Boolean Hide { get; private set; } public HideTableInDefaultAttribute(Boolean hide) { Hide = hide; } // this will allow us to have a default set to false public static HideTableInDefaultAttribute Default = new HideTableInDefaultAttribute(false); } Listing 1 - HideTableIndefaultAttribute And now the code in the Default.aspx: protected void Page_Load(object sender, EventArgs e) { System.Collections.IList visibleTables = Global.DefaultModel(); // Hiding tables var tablesMain = (from t in Global.DefaultModel.VisibleTables where !t.Attributes.OfType<HideTableInDefaultAttribute>(). DefaultIfEmpty(HideTableInDefaultAttribute.Default).First().Hide orderby t.DisplayName select t).ToList(); Menu1.DataSource = tablesMain; Menu1.DataBind(); } Listing 2 - Default.aspx Page_Load event handler Note in the where clause the DefaultIfEmpty which allows you to specify the default value to return if no value found. This means that you will always get a value and the default value will have the Hide property set to false because of the Default keyword in the Attribute definition. Now some sample metadata: // note the differance in the result of using // ScaffoldTable(false) over HideTableInDefault(true) [ScaffoldTable(false)] public partial class Category { } [HideTableInDefault(true)] public partial class EmployeeTerritory { } [HideTableInDefault(true)] public partial class Product { } Listing 3 - sample metadata Figure 1 - Limited Tables list from Northwind Figure 2 - See how the Products column has a foreign key link Figure 3 - Note that the Category column has no link If you look at each of the above Figures you will see the differences between a ScaffoldTable(false) and a HideTableInDefault(true). 
It's also worth noting that if you omit a table from the Model you either get a ForeignKey column or no column if it was a child column, so this makes using the above two methods the best way of hiding tables in the Default.aspx page.

Download Project

14 comments:

hi, do you know how to convert dreamweaver to C#? and, how to do roll over in C#? e.g. mouse cursor roll over the button and a drop down box will appear. thanks!

In Dreamweaver are you using a server technology like ColdFusion? If not it's just HTML. On my blog I'm using C# but the technology is ASP.Net 3.5 and C# is the language. What I suggest you do is have a look at some of the tutorial videos on
Hope this helps :D Steve

Thanks Steve, very great post! but...sorry....I've a problem. This is the error that appears when I run your code into my .net 4.0 entity framework application in the last line of default.aspx page where there is Menu1.DataBind() "The entity type VB$AnonymousType_0]] does not belong to any registered model." Where is my error? Can you help me? Thanks a lot. Betta

This is a very old sample Betta, if you send me an e-mail to my e-mail (top right of this page) I will reply with a sample that works in DD4, I cannot supply a VB sample though sorry. Steve :D

Hi Betta, I have a working sample now in VS2010 DD4 I think the issue is that I also made some changes to the Default.aspx page and have not shown it here :( I will do an update today. Steve :D

Hi Betta, I've updated the article now. Steve :D

Thaaaaaaaaaannnnnnnnnnnkkkkkkkkkkkkkkkkkkkksssssssssssssssss Steeeeeeveeeeeeeeeeeeeeeeeeeeeeee you have made me happy!!!!!! all work fine now. you are a great man!!! If you have made other fine samples about dd4 or mvc2 I'll be happy to see them. Write to me where I can find your fantastic samples. Thanks another time. Betta

You're more than welcome Betta :) Steve :D

Steve, I was having problems with the hiding the tables using your code in 4.0. I got around the issues by creating a predicate.
private static bool FindVisible(MetaTable mt)
{
    return !mt.Attributes.OfType<HideTableInDefaultAttribute>()
        .DefaultIfEmpty(HideTableInDefaultAttribute.Default).First().Hide;
}

I then changed the first line of the page_load to:

System.Collections.IList visibleTables = ASP.global_asax.DefaultModel.VisibleTables.FindAll(FindVisible);

and left the Menu1.DataSource = visibleTables; it all works now. Could have never made it this far without your help. Thanks! Mark

Hi Mark much has changed in DD since I wrote this post and I have moved on to using a MetaDataSiteMapProvider to use a menu system on my current sites. Steve

Steve, I have questions, a bundle of them:
1. My table names have a prefix and when the names appear in the Menu it looks weird. Is there any way to alias them with something that is more readable?
2. How do I add a Browse button with the field where I plan to store the path name?
3. I am hiding a few tables which are more related to the Admin rather than the regular user. How can I show the hidden tables when I go to the Admin Menu?

Hi there, in answer to your questions;
1. Have a look at my article here Dynamic Data – Custom Metadata Providers you could adapt this to remove the prefixes automatically by adding your own DisplayName attribute at runtime.
2. Not sure what you mean here.
3. See Securing Dynamic Data 4

Several years after the post and it's still saving people like me. Nice job.

Thanks, I will have a new open source project soon with a new project template and ALL my bits consolidated in it. steve
http://csharpbits.notaclue.net/2008/10/dynamic-data-hiding-table-on.html?showComment=1276175498621
Hello, I am trying to write a basic data logging program where if a switch is flipped, a timer starts (from 0 and going up in 1 sec increments) and then if another input goes high, that is recorded alongside the time it occurred at. I’m having 2 issues. One, I am new to Arduino and didn’t even know there was a data logging shield and already purchased an SD shield so it doesn’t have the RTC Real Time clock on it. Is there any way to make this SD shield work for data logging where it accurately keeps track of the timer without the RTC? And two, my current program doesn’t update the pin values once the program starts (i.e. I open Serial monitor and a switch reads 0, then I flip the switch and it still reads 0, but if I close the Serial Monitor and reopen it then the switch reads 1). I can’t figure out how to get it to continuously update…I think something from my loop should be in my setup or vice versa. The current code doesn’t even have a timer in it because I tried a basic counter with a 1 sec delay but it doesn’t work, so right now I have it saying that if I hit the START switch then it will start displaying 0’s and then log when a TRIP occurs. Any help would be greatly appreciated, thank you! #include <SPI.h> #include <SD.h> const int chipSelect = 8; //SD shield CS pin for Arduino Uno const int STARTpin = 2; const int TRIPpin = 4 ; int START=digitalRead(STARTpin); int TRIP=digitalRead(TRIPpin); File dataFile; void setup() { // Open serial communications and wait for port to open: Serial.begin(9600); Serial.print("Initializing SD card..."); pinMode(10, OUTPUT); //default pin 10 must be set to output // see if the card is present and can be initialized: if (!SD.begin(chipSelect)) { Serial.println("Card failed, or not present"); // don't do anything more: while (1) ; } Serial.println("card initialized."); // Open up the file we're going to log to! dataFile = SD.open("datalog.txt", FILE_WRITE); if (! 
dataFile) {
    Serial.println("error opening datalog.txt");
    // Wait forever since we can't write data
    while (1) ;
  }
}

void loop() {
  //Serial.println(START);
  //Serial.println(TRIP);
  if(START==1){
    // make a string for assembling the data to log:
    String dataString = "0";
    if(TRIP==1){
      dataString += String(START);
    }
    dataFile.println(dataString);
    // print to the serial port too:
    Serial.println(dataString);
    // If you want to speed up the system, remove the call to flush() and it
    // will save the file only every 512 bytes - every time a sector on the
    // SD card is filled with data.
    dataFile.flush();
    // Take 1 measurement every 500 milliseconds
    delay(500);
  }
}
https://forum.arduino.cc/t/data-logging-with-sd-shield-vs-data-logging-shield/275188
Talk:Tag:landuse=meadow Discuss Tag:landuse=meadow: landuse=meadow or natural=meadow? Why is "natural=meadow" documented here, when "landuse=meadow" is listed on Key:landuse page? -- Harry Wood 01:26, 12 August 2009 (UTC) - I have an interest in mapping natural habitats, and wanted to use the meadow key, but was confused by the lack of a definition and why it was bundled into Landuse (while heath, marsh, scrub, etc, were in Natural). It appears to have been created without discussion. The wiki & email lists dont help in providing a solution, and are full of every possible argument. The most useful suggestion was to simply copy what originally happened and just go ahead and create the tag, and then use it to replace "landuse=meadow" in Map Features. But a few days after creating this wiki page unexpected work obligations meant I put it all to one side. Now I clearly have to follow this up but my views have slightly changed. I now think meadow should be part of an overall natural=grassland key. That doesn't deal with the problem this page exists but is not on the Map Features page. So I either now have to put natural=meadow into the Map Features page and see what it stirs up, or start creating a set of tags based around natural habitats. --Jamicu 20:32, 24 August 2009 (UTC) - Yes but there's just the small matter that landuse=meadow has been used 9532 times in the database [1], while natural=meadow has been used 92 times [2]. - You've created a nice bit of documentation including a photo. It would make sense to use this as the landuse=meadow documentation. landuse=meadow is a longstanding tag. It was probably listed (but not documented) way back before we had much of a process in place. - Maybe natural=meadow would be better but... It's a bit too late to change it. - -- Harry Wood 21:53, 24 August 2009 (UTC) - Well User:Cartinus has now created a page for landuse=meadow. 
Luckily that's just a skeleton page with hardly any description, so I can delete that page and do a move this page to landuse=meadow, i.e. move on top of it with the nice content we have on this page. I'm going to go ahead and do that, before we end up with two different pages written in a lot of detail. I've also made use of a PD image User:Cartinus found on here.-- Harry Wood 12:37, 4 August 2010 (BST) - We may make the difference, like in "landuse=forest" and "natural=wood". "landuse=meadow" should be used for a maintained land of grass (e.g. when the grass is harvested for a cattle food), and "natural=meadow" for a land where a grass is growing itself. --Surly 15:24, 12 October 2009 (UTC) - Well we could do that. Or we could avoid creating more and more top level tags with subtle differences between them. Jamicu suggested an overall natural=grassland, which would've been a good way to encompass a few different things. But it's difficult to move towards that kind of rationalisation ("deprecating" tags etc) What we can do though, is avoid documenting more tags. -- Harry Wood 12:37, 4 August 2010 (BST) Landuse? Why does meadow fall under landuse? How does one actively manage meadow land? Also, as far as I can tell the closest equivalent in the natural category is heath, which doesn't seem to be quite the same, since the description mentions bushes. --DanHomerick 02:56, 11 September 2009 (UTC) - Because in Western Europe (where still most tags are invented) most meadows would turn into woodland if they are not mown for hay production or grazed by livestock. If you want to tag a natural meadow (e.g. an high alpine meadow), then you might be better of to use natural=meadow. There are already 94 places in Europe alone that are tagged as such and it is documented on it's own page. --Cartinus 12:34, 14 October 2009 (UTC) - It surprises me to see you making that distinction in this discussion, and yet you totally failed to make an mention of this on the tag doc itself. 
If we don't link to similar tags and explain the distinction, then it just looks as though the pages were created in error, without awareness of eachother. I really want to establish it as a procedure, that tag docs should include a "similar tags" section. - In any case, I've just performed a move/merge to bring the two pages together here. landuse=meadow is by far the more commonly used tag, and has been listed on Map Features for years. - If anyone thinks natural=meadow is different enough from the definition of landuse=meadow to justify it's own tag, then they can propose the addition of natural=meadow. I don't think it's a useful distinction. At least not useful enough to justify its own tag. It's comparable to the distinction between landuse=forest and natural=wood which is quite possibly the most confusing and unhelpful tagging idea ever! (spare a thought for all the mappers who don't care whether it's "managed" or not) Let's not do the same for meadows. - -- Harry Wood 12:56, 4 August 2010 (BST) - I don't think that natural=meadow is different enough from the landuse=meadow. I think tagging "landuse=*" for a land that is not used is incorrect. Very incorrect! But because of the "landuse=meadow" became very common, we have to keep it in the Map features (until it will be deprecated) and have to establish new tag "natural=meadow" for a natural grassland. - This land of grass near the airport is natural. It has no economic role. It is not used by people in any manner. So why one must to tag it as "landuse"? - And please don't meddle in the foreign wiki namespace, if you are not aware of the foreign country's specificity. (I'm talking about your deletion of "natural=meadow" in the "RU:Vegetation" page. --Surly 11:37, 5 August 2010 (BST) - Well it's normal procedure after a page move to try and fix some of the incoming links. I did think twice about meddling with the foreign pages, but it links to the English page which I've just moved, so seemed helpful. 
D'you not think it's helpful for it to match the english version? - I agree with you natural=meadow would have been a better tag, because of the whole "use" thing. A meadow can be thought of as a "use of land", but sometimes they're entirely natural. - But in any case, the problem is that the widely used and accepted tag is landuse=meadow. It's been that for years. Moving forward, we could carefully introduce the idea of a natural=meadow tag as a proposal, and either ask people to transition to it as a replacement tag, or the proposal could define it as a different tag making the distinction clear. I'd argue that the second idea would be unhelpful (as in woods/forests) The first idea would probably be more hassle than it's worth just to correct a minor bit of wording. - -- Harry Wood 12:07, 5 August 2010 (BST) - A meadow can be thought of as a "use of land", but sometimes they're entirely natural. — Yes! Consider a renderer (yet hypothetical), that paints wild areas in one colour; and paints areas transformed by human activities, in the other colour. That renderer need no parsing values of tags. Only the key of the tag: "landuse" or "natural". So in case of "landuse=meadow" and wild grassland it will result in incorrect colouring. And in case of "natural=meadow" and maintained grassland the same mistake occures. --Surly 06:23, 6 August 2010 (BST) - How to make distinction clear? Firstly, mappers ought to not vandalize the exisiting tagging, if they don't know real kind of the meadow. Then, look at the meadow. If there are mowers, or ricks of hay, or the grass is trimmed evenly — it is maintained meadow. If the meadow looks wild, it is probably the natural meadow. --Surly 06:36, 6 August 2010 (BST) Narrow definition The problem with this merge of documentation is that I doubt that the areas tagged with landuse=meadow over the years actually conform to the very narrow definition meadow we see now on this page. 
--Cartinus 21:14, 4 August 2010 (BST) - It is quite narrow. I like it though. It pretty much defines what a meadow is. We could loosen it a little to better match current usage though. Also we could improve it by explaining which tags you should use in the various non-meadow cases. -- Harry Wood 21:30, 4 August 2010 (BST) - To me (and I suspect a lot more people) landuse=meadow meadow=agricultural is any agricultural grassland that is mainly used to grow grass to feed animals that are not grazing "in place". This has nothing to do with observing in which time of the year the grass is cut, nor with the amount of other non-woody plants growing there. If you do a dictionary search you'll find enough references that don't even exclude grassland that is mainly used for pasture. --Cartinus 13:20, 5 August 2010 (BST) I've done some digging. Here are the results: People want to create and are creating maps like below, where there is a clear distinction between the tilled areas and the agricultural grasslands. So they use (e.g.) JOSM and create the areas. Then then want to tag them. Looking in the presets menu, they find a section about farming related landuse: These mappers are of course not native English speakers. Since for years there was no definition of meadow in the wiki, they will probably have used some online translation service or dictionary to find out what a meadow is. That returns something that nicely conforms with those green patches they want to tag. Lucky for them it even gets rendered in green on both the Mapnik and Osmarender layers. So how are farmers using these agricultural grasslands? They cut the grass two to four times a year. They try to do that before the grass flowers and sets seed, because then they get better quality feed. They definitely don't want too much wild plants mixed in-between their carefully selected high yield grass cultivars. On top of that: Mixing cutting grass and pasturage (after each other) on the same lot is quite common. 
So what has this agricultural grassland that people have been mapping for some years with landuse=meadow in common with the definition currently in place? Nada, zilch, nothing. (Ok, there is grass growing there.) So by moving this definition to this page, the tag was effectively redefined. AFAIK this is something that is generally not seen as a good thing in OSM, unless there is a very good reason for it. --Cartinus 19:52, 20 August 2010 (BST) Pasture What is the difference (or similarities) between landuse=meadow and a pasture? Should a pasture be tagged with landuse=pasture or with landuse=meadow with meadow=pasture? --Skippern 20:15, 5 August 2010 (BST) - So pasture is covered by landuse=farm right? I've added a 'Similar tags' section which mentions landuse=farm and "pasture". Ideally we'd define how to distinguish and decide between the tags. Sometimes it's just a vague matter of judgement, but worth trying nail down a way of deciding if possible -- Harry Wood 20:43, 5 August 2010 (BST) - In the English language pasture is grassland that is grazed. A meadow is grassland where grass is grown to make hay. Since more and more people live in cities, this distinction is disappearing and the word meadow is increasingly used for both kinds of grassland by ordinary people. This afternoon there were 7 uses of landuse=pasture and 5208 uses of landuse=meadow+crop=native_pasture in the database. Don't know who thought up that last one. --Cartinus 22:53, 5 August 2010 (BST) - Since I am not a native english speaker, than I am uncertain of the differences of various of these epxressions in english, and it doesn't make it easier with the english tendency of using some of these words on names in public parks. - In Norwegian (my native language) we have the words eng (cultivated land, used for grass and hay production), beitemark (grazed land), and byteng (uncultivated grazed land, usually shared between several farms). 
I was just asking this as I have had the impression that both pasture and meadow means all of these different types of land. In portuguese (language of my current residence) I have the impression that the same word means all of these definitions. It is therefor important that these tags are clearly defined for non-english speakers to understand their meanings. --Skippern 07:55, 6 August 2010 (BST) - I propose and use landuse=pasture as grassland for grazing, not for mowing. I can also live with landuse=meadow in combination with meadow=pasture if the community decides for that. (I have difficulties with telling cattle and horses a "crop".) But I feel that *some* means must be found to distinguish between meadow and pasture. --Segatus 16:32, 28 July 2013 (BST)
Haskell is—perhaps infamously—a lazy language. The basic idea of laziness is pretty easy to sum up in one sentence: values are only computed when they're needed. But the implications of this are more subtle. In particular, it's important to understand some crucial topics if you want to write memory- and time-efficient code, the seq and deepseq functions chief among them.

This blog post was inspired by some questions around writing efficient conduit code, so I'll try to address some of that directly at the end. The concepts, though, are general, and will transfer to not only other streaming libraries, but non-streaming data libraries too.

NOTE This blog post will mostly treat laziness as a problem to be solved, as opposed to the reality: laziness is sometimes an asset, and sometimes a liability. I'm focusing on the negative exclusively, because our goal here is to understand the rough edges and how to avoid them. There are many great things about laziness that I'm not even hinting at. I trust my readers to add some great links to articles speaking on the virtues of laziness in the comments :)

Let's elaborate on my one liner above:

Values are only computed when they're needed

Let's explore this by comparison with a strict language: C.

#include <stdio.h>

int add(int x, int y) {
  return x + y;
}

int main() {
  int five = add(1 + 1, 1 + 2);
  int seven = add(1 + 2, 1 + 3);
  printf("Five: %d\n", five);
  return 0;
}

Our function add is strict in both of its arguments. And its result is also strict.
This means that:

- Before we ever call add, the argument expressions 1 + 1 and 1 + 2 are evaluated to 2 and 3.
- add runs, producing 5, which is stored in five.
- The same happens again: 1 + 2 and 1 + 3 are evaluated, add produces 7, and it is stored in seven.
- Only then do we call printf.

Let's compare that to the equivalent Haskell code:

add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  let five = add (1 + 1) (1 + 2)
      seven = add (1 + 2) (1 + 3)
  putStrLn $ "Five: " ++ show five

There's something called strictness analysis which will result in something more efficient than what I'll describe here in practice, but semantically, we'll end up with the following:

- Thunks are created for the expressions 1 + 1 and 1 + 2, plus a thunk for applying add to them; that outer thunk is bound to five. No addition is performed.
- seven is likewise bound to a thunk, with no work done.
- When putStrLn needs its string, it forces five, which in turn forces the 1 + 1 and 1 + 2 thunks inside it.
- seven is never needed, so its thunks are never evaluated.

Compared to the C (strict) evaluation, there is one clear benefit: we don't bother wasting time evaluating the seven value at all. That's three addition operations bypassed, woohoo! And in a real world scenario, instead of being three additions, that could be a seriously expensive operation.

However, it's not all rosy. Creating a thunk does not come for free: we need to allocate space for the thunk, which costs both allocation, and causes GC pressure for freeing them afterwards. Perhaps most importantly: the thunked version of an expression can be far more costly than the evaluated version. Ignoring some confusing overhead from data constructors (which only make the problem worse), let's compare our two representations of five. In C, five takes up exactly one machine word*. In Haskell, our five thunk will take up roughly:

- one machine word for the thunk's header
- one word pointing at the add function
- two words pointing at the thunks for 1 + 1 and 1 + 2
- and each of those argument thunks in turn needs its own header, a pointer to +, and pointers to its boxed arguments

* Or perhaps less, as int is probably only 32 bits, and you're probably on a 64 bit machine. But then you get into alignment issues, and registers... so let's just say one machine word.

Now in practice, it's not going to work out that way. I mentioned the strictness analysis step, which will say "hey, wait a second, it's totally better to just add two numbers than allocate a thunk, I'mma do that now, kthxbye." But it's vital when writing Haskell to understand all of these places where laziness and thunks can creep in.

Let's look at how we can force Haskell to be more strict in its evaluation. Likely the easiest way to do this is with bang patterns.
Let's look at the code first:

{-# LANGUAGE BangPatterns #-}

add :: Int -> Int -> Int
add !x !y = x + y

main :: IO ()
main = do
  let !five = add (1 + 1) (1 + 2)
      !seven = add (1 + 2) (1 + 3)
  putStrLn $ "Five: " ++ show five

This code now behaves exactly like the strict C code. Because we've put a bang (!) in front of the x and y in the add function, GHC knows that it must force evaluation of those values before evaluating it. Similarly, by placing bangs on five and seven, GHC must evaluate these immediately, before getting to putStrLn.

As with many things in Haskell, however, bang patterns are just syntactic sugar for something else. And in this case, that something else is the seq function. This function looks like:

seq :: a -> b -> b

You could implement this type signature yourself, of course, by just ignoring the a value:

badseq :: a -> b -> b
badseq a b = b

However, seq uses primitive operations from GHC itself to ensure that, when b is evaluated, a is evaluated too. Let's rewrite our add function to use seq instead of bang patterns:

add :: Int -> Int -> Int
add x y =
  let part1 = seq x part2
      part2 = seq y answer
      answer = x + y
   in part1

-- Or more idiomatically
add x y = x `seq` y `seq` x + y

What this is saying is: in order to evaluate part1, we must first evaluate x and then evaluate part2. In order to evaluate part2, we must first evaluate y and then evaluate answer. And answer is just x + y.

Of course, that's a long way to write this out, and the pattern is common enough that people will usually just use seq infix as demonstrated above.

EXERCISE What would happen if, instead of in part1, the code said in part2? How about in answer?

There is always a straightforward translation from bang patterns to usage of seq. We can do the same with the main function:

main :: IO ()
main = do
  let five = add (1 + 1) (1 + 2)
      seven = add (1 + 2) (1 + 3)
  five `seq` seven `seq` putStrLn ("Five: " ++ show five)

It's vital to understand how seq is working, but there's no advantage to using it over bang patterns where the latter are clear and easy to read.
Choose whichever option makes the code easiest to read, which will often be bang patterns.

So far, you've just had to trust me about the evaluation of thunks occurring. Let's see a method to more directly observe evaluation. The trace function from Debug.Trace will print a message when it is evaluated. Take a guess at the output of these programs:

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script
import Debug.Trace

add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  let five = trace "five" (add (1 + 1) (1 + 2))
      seven = trace "seven" (add (1 + 2) (1 + 3))
  putStrLn $ "Five: " ++ show five

Versus:

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script
{-# LANGUAGE BangPatterns #-}
import Debug.Trace

add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  let !five = trace "five" (add (1 + 1) (1 + 2))
      !seven = trace "seven" (add (1 + 2) (1 + 3))
  putStrLn $ "Five: " ++ show five

Think about this before looking at the answer... OK, hope you had a good think. Here's the answer. The first program prints:

five
Five: 5

Since seven is never needed, its trace message never fires. The second program forces both values, and with GHC you may be surprised to see the seven message appear first:

seven
five
Five: 5

The bangs desugar to five `seq` seven `seq` putStrLn ("Five: " ++ show five), which guarantees that both five and seven are evaluated before the putStrLn runs, but x `seq` y makes no promise about whether x or y is evaluated first. All that said: as long as your expressions are truly pure, you will be unable to observe the difference between x and y evaluating first. Only the fact that we used trace, which is an impure function, allowed us to observe the order of evaluation.

QUESTION Does the result change at all if you put bangs on the add function? Why do bangs there affect (or not affect) the output?

This is all well and good, but the more standard way to demonstrate evaluation order is to use bottom values, aka undefined. undefined is special in that, when it is evaluated, it throws a runtime exception. (The error function does the same thing, as do a few other special functions and values.) To demonstrate the same thing about seven not being evaluated without the bangs, compare these two programs:

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script

add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  let five = add (1 + 1) (1 + 2)
      seven = add (1 + 2) undefined
  putStrLn $ "Five: " ++ show five

Versus:

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script
{-# LANGUAGE BangPatterns #-}

add :: Int -> Int -> Int
add x y = x + y

main :: IO ()
main = do
  let !five = add (1 + 1) (1 + 2)
      !seven = add (1 + 2) undefined
  putStrLn $ "Five: " ++ show five

The former completes without issue, since seven is never evaluated.
However, in the latter, we have a bang pattern on seven. What GHC does here is:

- force evaluation of add (1 + 2) undefined
- which means evaluating (1 + 2) + undefined
- which forces undefined, throwing a runtime exception

QUESTION Returning to the question above: does it look like bang patterns inside the add function actually accomplish anything? Think about what the output of this program will be:

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script
{-# LANGUAGE BangPatterns #-}

add :: Int -> Int -> Int
add !x !y = x + y

main :: IO ()
main = do
  let five = add (1 + 1) (1 + 2)
      seven = add (1 + 2) undefined
  putStrLn $ "Five: " ++ show five

To compare this behavior to a strict language, we need a language with something like runtime exceptions. I'll use Rust's panics:

fn add(x: isize, y: isize) -> isize {
    println!("adding: {} and {}", x, y);
    x + y
}

fn main() {
    let five = add(1 + 1, 1 + 2);
    let seven = add(1 + 2, panic!());
    println!("Five: {}", five);
}

Firstly, to Rust's credit: it gives me a bunch of warnings about how this program is dumb. Fair enough, but I'm going to ignore those warnings and charge ahead with it. This program will first evaluate the add(1 + 1, 1 + 2) expression (which we can see in the output of adding: 2 and 3). Then, before it ever enters the add function the second time, it needs to evaluate both 1 + 2 and panic!(). The former works just fine, but the latter results in a panic being generated and short-circuiting the rest of our function.

If we want to regain Haskell's laziness properties, there's a straightforward way to do it: use a closure. A closure is, essentially, a thunk. The Rust syntax for creating a closure is |args| body. We can create closures with no arguments to act like thunks, which gives us:

fn add<X, Y>(x: X, y: Y) -> isize
    where X: FnOnce() -> isize,
          Y: FnOnce() -> isize {
    let x = x();
    let y = y();
    println!("adding: {} and {}", x, y);
    x + y
}

fn main() {
    let five = || add(|| 1 + 1, || 1 + 2);
    let seven = || add(|| 1 + 2, || panic!());
    println!("Five: {}", five());
}

Again, the Rust compiler complains about the unused seven, but this program succeeds in running, since we never run the seven closure. Still not up to speed with Rust?
Let's use everyone's favorite language: Javascript:

function add(x, y) {
  return x() + y();
}

function panic() {
  throw "Panic!"
}

var five = ignored => add(ignored => 1 + 1, ignored => 1 + 2);
var seven = ignored => add(ignored => 1 + 2, panic);

console.log("Five: " + five());

Alright, to summarize until now:

- Haskell is lazy by default: instead of evaluating an expression, it creates a thunk, which is only evaluated when needed.
- Thunks have a real cost: allocation, GC pressure, and a representation that can be much larger than the evaluated value.
- You can force evaluation with the seq function, or with bang patterns, which are syntactic sugar for it.
- Strict languages can simulate laziness by wrapping expressions in zero-argument closures.

This is all good, and make sure you have a solid grasp of these concepts before continuing. Consider rereading the sections above.

Here's something we didn't address: what, exactly, does it mean to evaluate or force a value? To demonstrate the problem, let's implement an average function. We'll use a helper datatype, called RunningTotal, to capture both the cumulative sum and the number of elements we've seen so far.
Note in advance that there's a great Stack Overflow answer on this topic for further reading. We've been talking about forcing values and evaluating expressions, but what exactly that means hasn't been totally clear. To start simple, what will the output of this program be? main = putStrLn $ undefined `seq` "Hello World" You'd probably guess that it will print an error about undefined, since it will try to evaluate undefined before it will evaluate "Hello World", and because putStrLn is strict in its argument. And you'd be correct. But let's try something a little bit different: "Hello World" main = putStrLn $ Just undefined `seq` "Hello World" If you assume that "evaluate" means "fully evaluate into something with no thunks left," you'll say that this, too, prints an undefined error. But in fact, it happily prints out "Hello World" with no exceptions. What gives? It turns out that when we talk about forcing evaluation with seq, we're only talking about evaluating to weak head normal form (WHNF). For most data types, this means unwrapping one layer of constructor. In the case of Just undefined, it means that we unwrap the Just data constructor, but don't touch the undefined within it. (We'll see a few ways to deal with this differently below.) Just undefined Just It turns out that, with a standard data constructor*, the impact of using seq is the same as pattern matching the outermost constructor. If you want to monomorphise, for example, you can implement a function of type seqMaybe :: Maybe a -> b -> b and use it in the main example above. Go ahead and give it a shot... answer below. seqMaybe :: Maybe a -> b -> b * Hold your horses, we'll talk about newtypes later and then you'll understand this weird phrasing. newtype seqMaybe :: Maybe a -> b -> b seqMaybe Nothing b = b seqMaybe (Just _) b = b main :: IO () main = do putStrLn $ Just undefined `seqMaybe` "Hello World" putStrLn $ undefined `seqMaybe` "Goodbye!" Let's up the ante again. 
What do you think this program will print? main = do putStrLn $ error `seq` "Hello" putStrLn $ (\x -> undefined) `seq` "World" putStrLn $ error "foo" `seq` "Goodbye!" You might think that error `seq` ... would be a problem. After all, isn't error going to throw an exception? However, error is a function. There's no exception getting thrown, or no bottom value being provided, until error is given its String argument. As a result, evaluating does not, in fact, generate an error. The rule is: any function applied to too few values is automatically in WHNF. error `seq` ... String A similar logic applies to (\x -> undefined). Although it's a lambda expression, its type is a function which has not been applied to all arguments. And therefore, it will not throw an exception when evaluated. In other words, it's already in WHNF. (\x -> undefined) However, error "foo" is a function fully applied to its arguments. It's no longer a function, it's a value. And when we try to evaluate it to WHNF, its exception blows up in our face. error "foo" EXERCISE Will the following throw exceptions when evaluated? (+) undefined Just undefined undefined 5 (error "foo" :: Int -> Double) Having understood WHNF, let's return to our example and see why our first bang pattern did nothing to help us: In WHNF, forcing evaluation is the same as unwrapping the constructor, which we are already doing in the second clause! The problem is that the values contained inside the RunningTotal data constructor are not being evaluated, and therefore are accumulating thunks. Let's see two ways to solve this: go rt [] = printAverage rt go (RunningTotal !sum !count) (x:xs) = let rt = RunningTotal (sum + x) (count + 1) in go rt xs Instead of putting the bangs on the RunningTotal value, I'm putting them on the values within the constructor, forcing them to be evaluated at each loop. We're no longer accumulating a huge chain of thunks, and our maximum residency drops to 44kb. 
(Total allocations, though, are still up around 192mb. We need to play around with other optimizations outside the scope of this post to deal with the total allocations, so we're going to ignore this value for the rest of the examples.) Another approach is: go rt [] = printAverage rt go (RunningTotal sum count) (x:xs) = let !sum' = sum + x !count' = count + 1 rt = RunningTotal sum' count' in go rt xs This one instead forces evaluation of the new sum and count before constructing the new RunningTotal value. I like this version a bit more, as it's forcing evaluation at the correct point: when creating the value, instead of on the next iteration of the loop when destructing it. Moral of the story: make sure you're evaluating the thing you actually need to evaluate, not just its container! The fact that seq only evaluates to weak head normal form is annoying. There are lots of times when we would like to fully evaluate down to normal form (NF), meaning all thunks have been evaluated inside our values. While there is nothing built into the language to handle this, there is a semi-standard (meaning it ships with GHC) library to handle this: deepseq. It works by providing an NFData type class the defines how to reduce a value to normal form (via the rnf method). NFData rnf {-# LANGUAGE BangPatterns #-} import Control.DeepSeq data RunningTotal = RunningTotal { sum :: Int , count :: Int } instance NFData RunningTotal where rnf (RunningTotal sum count) = sum `deepseq` count `deepseq` () rt `deepseq` go rt xs main :: IO () main = printListAverage [1..1000000] This has a maximum residency, once again, of 44kb. We define our NFData instance, which includes an rnf method. The approach of simply deepseqing all of the values within a data constructor is almost always the approach to take for NFData instances. 
In fact, it's so common, that you can get away with just using Generic deriving and have GHC do the work for you: Generic {-# LANGUAGE DeriveGeneric #-} import GHC.Generics (Generic) import Control.DeepSeq data RunningTotal = RunningTotal { sum :: Int , count :: Int } deriving Generic instance NFData RunningTotal The true beauty of having NFData instances is the ability to abstract over many different data types. We can use this not only to avoid space leaks (as we're doing here), but also to avoid accidentally including exceptions inside thunks within a value. For an example of that, check out the tryAnyDeep function from the safe-exceptions library. EXERCISE Define the deepseq function yourself in terms of rnf and seq. These approaches work, but they are not ideal. The problem lies in our definition of RunningTotal. What we want to say is that, whenever you have a value of type RunningTotal, you in fact have two Ints. But because of laziness, what we're actually saying is that a RunningTotal value could contain two Ints, or it could contain thunks that will evaluate to Ints, or thunks that will throw exceptions. Int Instead, we'd like to make it impossible to construct a RunningTotal value that has any laziness room left over. And to do that, we can use strictness annotations in our definition of the data type: data RunningTotal = RunningTotal { sum :: !Int , count :: !Int } deriving Generic] All we've done is put bangs in front of the Ints in the definition of RunningTotal. We have no other references to strictness or evaluation in our program. However, by placing the strictness annotations on those fields, we're saying something simple and yet profound: Whenever you evaluate a value of type RunningTotal, you must also evaluate the two Ints it contains As we mentioned above, our second go clause forces evaluation of the RunningTotal value by taking apart its constructor. 
This act now automatically forces evaluation of sum and count, which we previously needed to achieve via a bang pattern. sum count There's one other advantage to this, which is slightly out of scope but worth mentioning. When dealing with small values like an Int, GHC will automatically unbox strict fields. This means that, instead of keeping a pointer to an Int inside RunningTotal, it will keep the Int itself. This can further reduce memory usage. You're probably asking a pretty good question right now: "how do I know if I should use a strictness annotation on my data fields?" This answer is slightly controversial, but my advice and recommended best practice: unless you know that you want laziness for a field, make it strict. Making your fields strict helps in a few ways: Let's define three very similar data types: data Foo = Foo Int data Bar = Bar !Int newtype Baz = Baz Int Let's play a game, and guess the output of the following potential bodies for main. Try to work through each case in your head before reading the explanation below. case undefined of { Foo _ -> putStrLn "Still alive!" } case Foo undefined of { Foo _ -> putStrLn "Still alive!" } case undefined of { Bar _ -> putStrLn "Still alive!" } case Bar undefined of { Bar _ -> putStrLn "Still alive!" } case undefined of { Baz _ -> putStrLn "Still alive!" } case Baz undefined of { Baz _ -> putStrLn "Still alive!" } Case (1) is relatively straightforward: we try to unwrap one layer of data constructor (the Foo) and find a bottom value. So this thing throws an exception. The same thing applies to (3). Foo (2) does not throw an exception. We have a Foo data constructor in our expression, and it contains a bottom value. However, since there is no strictness annotation on the Int in Foo, uwnrapping the Foo does not force evaluation of the Int, and therefore no exception is thrown. By contrast, in (4), we do have a strictness annotation, and therefore caseing on Bar throws an exception. 
case Bar What about newtypes? What we know about newtypes is that they have no runtime representation. Therefore, it's impossible for the Baz data constructor to be hiding an extra layer of bottomness. In other words, Baz undefined and undefined are indistinguishable. That may sound like Bar at first, but interestingly it's not. Baz Baz undefined You see, unwrapping a Baz constructor can have no effect on runtime behavior, since it was never there in the first place. The pattern match inside (5), therefore, does nothing. It is equivalent to case undefined of { _ -> putStrLn "Still alive!" }. And since we're not inspecting the undefined at all (because we're using a wildcard pattern and not a data constructor), no exception is thrown. case undefined of { _ -> putStrLn "Still alive!" } Similarly, in case (6), we've applied a Baz constructor to undefined, but since it has no runtime representation, it may as well not be there. So once again, no exception is thrown. EXERCISE What is the output of the program main = Baz undefined `seq` putStrLn "Still alive!"? Why? main = Baz undefined `seq` putStrLn "Still alive!" It can be inconvenient, as you may have noticed already, to use seq and deepseq all over the place. Bang patterns help, but there are other ways to force evaluation. Perhaps the most common is the $! operator, e.g.: $! mysum :: [Int] -> Int mysum list0 = go list0 0 where go [] total = total go (x:xs) total = go xs $! total + x main = print $ mysum [1..1000000] This forces evaluation of total + x before recursing back into the go function, avoiding a space leak. (EXERCISE: do the same thing with a bang pattern, and with the seq function.) total + x The $!! operator is the same, except instead of working with seq, it uses deepseq and therefore evaluates to normal form. $!! import Control.DeepSeq average :: [Int] -> Double average list0 = go list0 (0, 0) where go [] (total, count) = fromIntegral total / count go (x:xs) (total, count) = go xs $!! 
(total + x, count + 1) main = print $ average [1..1000000] Another nice helper function is force. What this does is makes it that, when the expression you're looking at is evaluated to WHNF, it's actually evaluated to NF. For example, we can rewrite the go function above as: force go [] (total, count) = fromIntegral total / count go (x:xs) (total, count) = go xs $! force (total + x, count + 1) EXERCISE Define these convenience functions and operators yourself in terms of seq and deepseq. Alright, I swear that's all of the really complicated stuff. If you've absorbed all of those details, the rest of this just follows naturally and introduces a little bit more terminology to help us understand things. Let's start off slowly: what's the output of this program: data List a = Cons a (List a) | Nil main = Cons undefined undefined `seq` putStrLn "Hello World" Well, using our principles from above: Cons undefined undefined is already in WHNF, since we've got the outermost constructor available. So this program prints "Hello World", without any exceptions. Cool. Now let's realize that Cons is the same as the : data constructor for lists, and see that the above is identical to: Cons undefined undefined Cons : main = (undefined:undefined) `seq` putStrLn "Hello World" This tells me that lists are a lazy data structure: I have a bottom value for the first element, a bottom value for the rest of the list, and yet this first cell is not bottom. Let's try something a little bit different: data List a = Cons a !(List a) | Nil main = Cons undefined undefined `seq` putStrLn "Hello World" This is going to explode in our faces! We are now strict in the tail of the list. However, the following is fine: data List a = Cons a !(List a) | Nil main = Cons undefined (Cons undefined Nil) `seq` putStrLn "Hello World" With this definition of a list, we need to know all the details about the list itself, but the values can remain undefined. This is called spine strict. 
By contrast, we can also be strict in the values and be value strict: data List a = Cons !a !(List a) | Nil main = Cons undefined (Cons undefined Nil) `seq` putStrLn "Hello World" This will explode in our faces, as we'd expect. There's one final definition of list you may be expecting, one strict in values but not in the tail: data List a = Cons !a (List a) | Nil In practice, I'm aware of no data structures in Haskell that follow this pattern, and therefore it doesn't have a name. (If there are such data structures, and this does have a name, please let me know, I'd be curious about the use cases for it.) So standard lists are lazy. Let's look at a few other data types: The vectors in Data.Vector (also known as boxed vectors) are spine strict. Assuming an import of import qualified Data.Vector as V, what would be the results of the following programs? Data.Vector import qualified Data.Vector as V main = V.fromList [undefined] `seq` putStrLn "Hello World" main = V.fromList (undefined:undefined) `seq` putStrLn "Hello World" main = V.fromList undefined `seq` putStrLn "Hello World" The first succeeds: we have the full spine of the vector defined. The fact that it contains a bottom value is irrelevant. The second fails, since the spine of the tail of the list is undefined, making the spine undefined. And finally the third (of course) fails, since the entire list is undefined. Now let's look at unboxed vectors. Because of inference issues, we need to help out GHC a little bit more. So starting with this head of a program: import qualified Data.Vector.Unboxed as V fromList :: [Int] -> V.Vector Int fromList = V.fromList What happens with the three cases above? main = fromList [undefined] `seq` putStrLn "Hello World" main = fromList (undefined:undefined) `seq` putStrLn "Hello World" main = fromList undefined `seq` putStrLn "Hello World" As you'd expect, (2) and (3) have the same behavior as with boxed vectors. 
However, (1) also throws an exception, since unboxed vectors are value strict, not just spine strict. The same applies to storable and primitive vectors. Unfortunately, to my knowledge, there is no definition of a strict, boxed vector in a public library. Such a data type would be useful to help avoid space leaks (such as the original question that triggered this blog post). If you look at the containers and unordered-containers packages, you may have noticed that the Map-like modules come in Strict and Lazy variants (e.g., Data.HashMap.Strict and Data.HashMap.Lazy) while the Set-like modules do not (e.g., Data.IntSet). This is because all of these containers are spine strict, and therefore must be strict in the keys. Since a set only has keys, no separate values, it must also be value strict. Strict Lazy Data.HashMap.Strict Data.HashMap.Lazy Data.IntSet A map, by contrast, has both keys and values. The lazy variants of the map-like modules are spine-strict, value-lazy, whereas the strict variants are both spine and value strict. EXERCISE Analyze the Data.Sequence.Seq data type and classify it as either lazy, spine strict, or value strict. Data.Sequence.Seq A function is considered strict in one of its arguments if, when the function is applied to a bottom value for that argument, the result is bottom. As we saw way above, + for Int is strict in both of its arguments, since: undefined + x is bottom, and x + undefined is bottom. undefined + x x + undefined By contrast, the const function, defined as const a b = a, is strict in its first argument and lazy in its second argument. const const a b = a The : data constructor for lists is lazy in both its first and second argument. But if you have data List = Cons !a !(List a) | Nil, Cons is strict in both its first and second argument. data List = Cons !a !(List a) | Nil A common place to end up getting tripped up by laziness is dealing with folds. 
The most infamous example is the foldl function, which lulls you into a false sense of safety only to dash your hopes and destroy your dreams: foldl mysum :: [Int] -> Int mysum = foldl (+) 0 main :: IO () main = print $ mysum [1..1000000] This is so close to correct, and yet uses 53mb of resident memory! The solution is but a tick away, using the strict left fold foldl' function: foldl' import Data.List (foldl') mysum :: [Int] -> Int mysum = foldl' (+) 0 main :: IO () main = print $ mysum [1..1000000] Why does the Prelude expose a function (foldl) which is almost always the wrong one to use? Prelude But the important thing to note about almost all functions that claim to be strict is that they are only strict to weak head normal form. Pulling up our average example from before, this still has a space leak: average import Data.List (foldl') average :: [Int] -> Double average = divide . foldl' add (0, 0) where divide (total, count) = fromIntegral total / count add (total, count) x = (total + x, count + 1) main :: IO () main = print $ average [1..1000000] My advice is to use a helper data type with strict fields. But perhaps you don't want to do that, and you're frustrated that there is no foldl' that evaluates to normal form. Fortunately for you, by just throwing in a call to force, you can easily upgrade a WHNF fold into a NF fold: import Data.List (foldl') import Control.DeepSeq (force) average :: [Int] -> Double average = divide . foldl' add (0, 0) where divide (total, count) = fromIntegral total / count add (total, count) x = force (total + x, count + 1) main :: IO () main = print $ average [1..1000000] Like a good plumber, force patches that leak right up! One of the claims of streaming data libraries (like conduit) is that they promote constant memory usage. This may make you think that you can get away without worrying about space leaks. However, all of the comments about WHNF vs NF mentioned above apply. 
To prove the point, let's do average badly with conduit:

import Conduit

average :: Monad m => ConduitM Int o m Double
average = divide <$> foldlC add (0, 0)
  where
    divide (total, count) = fromIntegral total / count
    add (total, count) x = (total + x, count + 1)

main :: IO ()
main = print $ runConduitPure $ enumFromToC 1 1000000 .| average

You can test the memory usage of this with:

$ stack --resolver lts-9.3 ghc --package conduit-combinators -- Main.hs -O2
$ ./Main +RTS -s

EXERCISE Make this program run in constant resident memory, by using:

Look at this super strict program. It's got a special value-strict list data type. I've liberally sprinkled bang patterns and calls to seq throughout. I've used $!. How much memory do you think it uses?

#!/usr/bin/env stack
-- stack --resolver lts-9.3 script
{-# LANGUAGE BangPatterns #-}

data StrictList a = Cons !a !(StrictList a) | Nil

strictMap :: (a -> b) -> StrictList a -> StrictList b
strictMap _ Nil = Nil
strictMap f (Cons a list) =
  let !b = f a
      !list' = strictMap f list
   in b `seq` list' `seq` Cons b list'

strictEnum :: Int -> Int -> StrictList Int
strictEnum low high = go low
  where
    go !x
      | x == high = Cons x Nil
      | otherwise = Cons x (go $! x + 1)

double :: Int -> Int
double !x = x * 2

evens :: StrictList Int
evens = strictMap double $! strictEnum 1 1000000

main :: IO ()
main = do
  let string = "Hello World"
      string' = evens `seq` string
  putStrLn string

Look carefully, read the code well, and make a guess. Ready? Good. It uses 44kb of memory. "What?!" you may exclaim. "But this thing has to hold onto a million Ints in a strict linked list!" Ehh... almost. It's true, our program is going to do a hell of a lot of evaluation as soon as we force the evens value. And as soon as we force the string' value in main, we'll force evens.

However, our program never actually forces evaluation of either of these!
If you look carefully, the last line in the program uses the string value. It never looks at string' or evens. When executing our program, GHC is only interested in performing the IO actions it is told to perform by the main function. And main only says something about putStrLn string.

This is vital to understand. You can build up as many chains of evaluation using seq and deepseq as you want in your program. But ultimately, unless you force evaluation via some IO action of the value at the top of the chain, it will all remain an unevaluated thunk.

EXERCISES What happens if you replace putStrLn string with putStrLn string'?

Sebastian Graf has written a great post analyzing this blog post. It goes into much more detail on how GHC analyzes and optimizes cases of strictness. Or to quote the author himself, "In this post, I want to offer another, more surgical approach to plugging space leaks that works hand in hand with optimizations carried out by the compiler." If that's something you're interested in, I recommend checking it out.
https://www.fpcomplete.com/blog/2017/09/all-about-strictness/
nested for loops, need more assignment help....

Brandi Love (Ranch Hand, joined Sep 19, 2003, posts: 133), posted Oct 28, 2003 16:59

Okay, I have to write a program that displays the averages of four different sets of grades using a nested for loop. I wasn't quite positive whether or not I was doing it right, but I gave it a shot. Here's the code and the errors I keep getting...

public class ExperimentAverages {
    public static void main(String[] args) {
        double gr1, gr2, gr3, gr4, gr5, gr6, total, average;
        total = gr1 + gr2 + gr3 + gr4 + gr5 + gr6;
        average = total/6;
        for (gr1=23.2; gr2=31.5; gr3=16.9; gr4=27.5; gr5=25.4; gr6=28.6) {
            sytem.out.println("The total is " + total ". The average is " + average + ".")
        }
        for (gr1=34.8; gr2=45.2; gr3=27.9; gr4=36.8; gr5=33.4; gr6=39.4) {
            sytem.out.println("The total is " + total ". The average is " + average + ".")
        }
        for (gr1=19.4; gr2=16.8; gr3=10.2; gr4=20.8; gr5=18.9; gr6=13.4) {
            sytem.out.println("The total is " + total ". The average is " + average + ".")
        }
        for (gr1=36.9; gr2=39.5; gr3=49.2; gr4=45.1; gr5=42.7; gr6=50.6) {
            sytem.out.println("The total is " + total ". The average is " + average + ".")
        }
    }
}

C:\WINDOWS\Desktop\CS lab\ExperimentAverages.java:13: ')' expected
    for (gr1=23.2; gr2=31.5; gr3=16.9; gr4=27.5; gr5=25.4; gr6=28.6)
C:\WINDOWS\Desktop\CS lab\ExperimentAverages.java:13: ';' expected
    for (gr1=23.2; gr2=31.5; gr3=16.9; gr4=27.5; gr5=25.4; gr6=28.6)
C:\WINDOWS\Desktop\CS lab\ExperimentAverages.java:13: incompatible types
found   : double
required: boolean
    for (gr1=23.2; gr2=31.5; gr3=16.9; gr4=27.5; gr5=25.4; gr6=28.6)
3 errors

Any help as to how I might improve this, or at least get it so that it runs would be most appreciated.

Michael Fitzmaurice (Ranch Hand, joined Aug 22, 2001, posts: 168), posted Oct 28, 2003 17:18

Hmm, I think you need to do a bit of reading on what a for loop is and how you write one. A typical nested for loop can be seen in the code snippet below.
Compile this code and run it - do you understand what is going on here?

public class LoopTest {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < 3; j++) {
                System.out.println("i is " + i + ", j is " + j);
            }
        }
    }
}

Where are the scores you are averaging coming from? Are they input by the user?

Michael
"One good thing about music - when it hits, you feel no pain" - Bob Marley

Brandi Love (Ranch Hand), posted Oct 28, 2003 17:36

I know what a nested for loop is, I'm just not too sure how to apply it to the problem. It's basically a for loop within a for loop. The scores are already given, so I have to somehow set them as doubles somewhere within the program.

John Smith (Ranch Hand, joined Oct 08, 2001, posts: 2937), posted Oct 28, 2003 18:04

for (gr1=23.2; gr2=31.5; gr3=16.9; gr4=27.5; gr5=25.4; gr6=28.6)

Not sure what you are trying to accomplish here, but it's definitely not a nested "if". Look again at Michael's code, and think about the problem that you are trying to solve.

fred rosenberger (lowercase baba, Bartender, joined Oct 02, 2003, posts: 11808), posted Oct 29, 2003 07:13

a for-loop has a very specific structure. inside the parens there is an initialization, a test condition, and a 'what to do at the end of each loop'. each is separated by a semicolon - therefore, you should only have 2 semicolons. in your example, the compiler is confused because you have something like 5... it can't figure out what you are doing. a for loop is often used to look at every element of an array... so, maybe you could use a single for-loop to add all the values of one set of grades, if they were in an array. then after the loop, you could calculate the total and average. but why do they say a nested loop? well, you actually have a set of (set of grades). this sounds like a 2D array.
one loop (outer) would iterate over the set of sets, and the inner loop would iterate over the set of grades. I'd try and just write the inner loop, over one set of grades, and the calculation first. once i was pretty sure i had that part working, and understood it, try and wrap that whole thing in another loop. this might then also require some tweaking of the inner loop, since the data would now be in a 2d instead of a 1d array.

"There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors"

William Barnes (Ranch Hand, joined Mar 16, 2001, posts: 986), posted Oct 29, 2003 08:53

I love this place. Please ignore post, I have no idea what I am talking about.

Joel McNary (Bartender, joined Aug 20, 2001, posts: 1824), posted Oct 29, 2003 15:53

William: Now, now, now, Be Nice!

Brandi: Fred is right in his description of the problem. You are attempting to write a for loop to calculate each set of grades. Instead, write one for loop to calculate one set and then place that loop inside another which iterates over the individual sets. It is possible to have more than one initializer in a for loop, but it is not recommended that you do so. Instead, put your grades into an array (or a List if you prefer) and then iterate over the array.

for (int index = 0; index < array.length; index++) {
    // initializer: int index = 0
    // conditional check: index < array.length
    // increment operator: index++
    double aDouble = array[index];
    // Do stuff
}

"Piscis Babelis est parvus, flavus, et hiridicus, et est probabiliter insolitissima raritas in toto mundo."

Yan Lee (Ranch Hand, joined Sep 15, 2003, posts: 94), posted Oct 29, 2003 18:12

Hi Brandi: One way you can use nested loops to solve the problem at hand is:

1. create a 2-D array of 4 rows where each row holds a set of 4 grades
2. for each row, you can sum the contents of the elements of the row
3. so you need 2 loops, one for traversing the rows, and the other for traversing the columns

public class MyClass {
    // 1. declare a 2-D array variable and initialise it with 4 values in each row
    // int[][] myArray = { .... };
    int sum = 0;

    public MyClass() {
        // 2. here comes the nested loop
        for (int i = 0; i < myArray.length; i++) {
            for (int j = 0; j < 4; j++) {
                // sum the contents here
            }
            System.out.println("Sum of the " + sum);
            System.out.println("Average of set " + (sum / 4));
            System.out.println("----------------------------------");
        }
    }

    public static void main(String[] args) {
        MyClass mc = new MyClass();
    }
}

Hope that this helps
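Putting the thread's advice together (a 2-D array, an inner loop that sums one set, and an outer loop over the sets), a complete version of the program might look like this. The grade values are the ones from the first post; the average helper method is my own addition, not part of the assignment:

```java
public class ExperimentAverages {

    // Sums one set of grades with a single loop and returns its average.
    static double average(double[] set) {
        double total = 0;
        for (int i = 0; i < set.length; i++) {
            total += set[i];
        }
        return total / set.length;
    }

    public static void main(String[] args) {
        // Each row holds one set of six grades (a "set of sets").
        double[][] grades = {
            {23.2, 31.5, 16.9, 27.5, 25.4, 28.6},
            {34.8, 45.2, 27.9, 36.8, 33.4, 39.4},
            {19.4, 16.8, 10.2, 20.8, 18.9, 13.4},
            {36.9, 39.5, 49.2, 45.1, 42.7, 50.6},
        };

        // Nested loops: the outer loop walks the sets,
        // the inner loop sums one set.
        for (int row = 0; row < grades.length; row++) {
            double total = 0;
            for (int col = 0; col < grades[row].length; col++) {
                total += grades[row][col];
            }
            System.out.println("The total is " + total
                    + ". The average is " + total / grades[row].length + ".");
        }
    }
}
```

Note that, unlike the original attempt, the totals are computed inside the loops, after the values exist, rather than before any assignment has happened.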
http://www.coderanch.com/t/394696/java/java/nested-loops-assignment
Circus provides four hooks that can be used to trigger actions when a watcher is starting or stopping. A typical use case is to control that all the conditions are met for a process to start.

Let's say you have a watcher that runs Redis and a watcher that runs a Python script that works with Redis. With Circus you can order the startup by using the priority option:

[watcher:queue-worker]
cmd = python -u worker.py
priority = 2

[watcher:redis]
cmd = redis-server
priority = 1

With this setup, Circus will start Redis then the queue worker. But Circus does not really control that Redis is up and running. It just starts the process it was asked to start. What we miss here is a way to control that Redis is started, and fully functional. A function that controls this could be:

import redis
import time

def check_redis(*args, **kw):
    time.sleep(.5)  # give it a chance to start
    r = redis.StrictRedis(host='localhost', port=6379, db=0)
    r.set('foo', 'bar')
    return r.get('foo') == 'bar'

This function can be plugged into Circus as a before_start hook:

[watcher:queue-worker]
cmd = python -u worker.py
hooks.before_start = mycoolapp.myplugins.check_redis
priority = 2

[watcher:redis]
cmd = redis-server
priority = 1

Once Circus has started the redis watcher, it will start the queue-worker watcher, since it follows the priority ordering. Just before starting the second watcher, it will run the check_redis function, and in case it returns False will abort the watcher starting process.

Available hooks are:

A hook must follow this signature:

def hook(watcher, arbiter, hook_name):
    ...

Where watcher is the Watcher class instance, arbiter the Arbiter one, and hook_name the hook name. You can ignore those, but being able to use the watcher and/or arbiter data and methods can be useful in some hooks.

Every time a hook is run, its result is notified as an event in Circus. There are two events related to hooks:
http://circus.readthedocs.org/en/0.6/hooks/
Defines a stage of a panoramic experience. More... #include <PanoramicStage.h> Defines a stage of a panoramic experience. What and how to be played and how to move from one stage to the next one. Image stage data. The media layout. The media type. Movie stage data. An optional name. Collects all stage navigations to switch stage. An optional texture used for the navigation. The texture must have a mono layout even if the MediaLayout is stereoscopic. The texture could have a different size compared to the Media but the aspect ratio should match. Be sure to disable sRGB as your masks should not be Gamma corrected. Set Compression settings to Userinterface2D (RGBA) and the Mip Gen Settings to NoMipmaps; "Never Stream" should be set to true. An additional yaw rotation (in degrees) to apply to the mesh sphere. This can be useful to align two stages together. It is ignored if FPanoramicStageNavigation::YawRotationSetFlag is true.
https://www.unamedia.com/ue4-stereo-panoramic-player/api/class_u_panoramic_stage.html
I need to pass some data over from Go to an '300 es' shader. The data consists of two uint16s packed into a uint32. Each uint16 represents a half-precision float (float16). I found some PD Java code that looks like it will do the job, but I am struggling with porting the last statement, which uses a couple of zero-extend right shifts (I think the other shifts are fine i.e. non-negative). Since Go is a bit clever with extending, the solution to the port is eluding me. I did think maybe the first one could be changed into a left shift, since it just seems to be positioning a single bit for addition? but the final shift blows my mind out the water :) btw I hope I got the bracketing right, since the operator precedence seems to be different between Go and Java regarding '-' and '>>'... I need to go the other way around next, but that is hopefully easier without right shifts... famous last words! Java code: // returns all higher 16 bits as 0 for all results public static int fromFloat( float fval ) { int fbits = Float.floatToIntBits( fval ); int sign = fbits >>> 16 & 0x8000; // sign only int val = ( fbits & 0x7fffffff ) + 0x1000; // rounded value if( val >= 0x47800000 ) // might be or become NaN/Inf { // avoid Inf due to rounding if( ( fbits & 0x7fffffff ) >= 0x47800000 ) { // is or must become NaN/Inf if( val < 0x7f800000 ) // was value but too large return sign | 0x7c00; // make it +/-Inf return sign | 0x7c00 | // remains +/-Inf or NaN ( fbits & 0x007fffff ) >>> 13; // keep NaN (and Inf) bits } return sign | 0x7bff; // unrounded not quite Inf } if( val >= 0x38800000 ) // remains normalized value return sign | val - 0x38000000 >>> 13; // exp - 127 + 15 if( val < 0x33000000 ) // too small for subnormal return sign; // becomes +/-0 val = ( fbits & 0x7fffffff ) >>> 23; // tmp exp for subnormal calc return sign | ( ( fbits & 0x7fffff | 0x800000 ) // add subnormal bit + ( 0x800000 >>> val - 102 ) // round depending on cut off >>> 126 - val ); // div by 2^(1-(exp-127+15)) 
and >> 13 | exp=0 }

My partial port:

func float32toUint16(f float32) uint16 {
	fbits := math.Float32bits(f)
	sign := uint16((fbits >> 16) & 0x00008000)
	rv := (fbits & 0x7fffffff) + 0x1000

	if rv >= 0x47800000 {
		if (fbits & 0x7fffffff) >= 0x47800000 {
			if rv < 0x7f800000 {
				return sign | 0x7c00
			}
			return sign | 0x7c00 | uint16((fbits&0x007fffff)>>13)
		}
		return sign | 0x7bff
	}
	if rv >= 0x38800000 {
		return sign | uint16((rv-0x38000000)>>13)
	}
	if rv < 0x33000000 {
		return sign
	}
	rv = (fbits & 0x7fffffff) >> 23
	return sign | uint16(((fbits&0x7fffff)|0x800000)+(0x800000>>(rv-102))>>(126-rv)) // these two shifts are my problem
}

func pack16(f1 float32, f2 float32) uint32 {
	ui161 := float32toUint16(f1)
	ui162 := float32toUint16(f2)
	return ((uint32(ui161) << 16) | uint32(ui162))
}

I found what looked like even more efficient code, with no branching, but understanding the mechanics of how that works to be able to port it is a bit ;) beyond my rusty (not the language) skills.

Cheers

[Edit] The code appears to work with the values I am currently using (it's hard to be precise since I have no experience debugging a shader). So I guess my question is about the correctness of my port, especially the final two shifts.

[Edit2] In the light of day I can see I already got the precedence wrong in one place and fixed the above example. changed:

return sign | uint16(rv-(0x38000000>>13))

to:

return sign | uint16((rv-0x38000000)>>13)
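One way to sanity-check a port like this without touching the shader is to decode the packed halves back into float32 in Go and compare against the inputs. The decoder below is my own sketch (not from the question); it handles the sign, normal, subnormal, and Inf/NaN encodings of IEEE 754 binary16:

```go
package main

import (
	"fmt"
	"math"
)

// float16toFloat32 expands an IEEE 754 binary16 bit pattern to float32.
func float16toFloat32(h uint16) float32 {
	sign := uint32(h>>15) << 31
	exp := int32((h >> 10) & 0x1f)
	mant := uint32(h) & 0x3ff

	switch {
	case exp == 0:
		if mant == 0 {
			return math.Float32frombits(sign) // signed zero
		}
		// Subnormal half: renormalize the mantissa.
		e := int32(1)
		for mant&0x400 == 0 {
			mant <<= 1
			e--
		}
		mant &= 0x3ff
		return math.Float32frombits(sign | uint32(e+112)<<23 | mant<<13)
	case exp == 0x1f:
		// Inf (mant == 0) or NaN (mant != 0).
		return math.Float32frombits(sign | 0x7f800000 | mant<<13)
	default:
		// Normal: rebias the exponent from 15 to 127 (difference 112).
		return math.Float32frombits(sign | uint32(exp+112)<<23 | mant<<13)
	}
}

func main() {
	// 0x3C00 is 1.0 in binary16; 0xC000 is -2.0.
	fmt.Println(float16toFloat32(0x3C00), float16toFloat32(0xC000)) // 1 -2
}
```

Round-tripping a handful of values through float32toUint16 and this decoder, then comparing within half-precision tolerance, catches precedence mistakes like the one fixed in [Edit2].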
https://windows-hexerror.linestarve.com/q/so63940172-converting-golang-float32-to-half-precision-float-glsl-float16-as-uint16
import "github.com/grailbio/bigslice/exec" Package exec implements compilation, evaluation, and execution of Bigslice slice operations. bigmachine.go buffer.go combiner.go command.go compile.go config.go eval.go graph.go index.go local.go session.go slicemachine.go slicestatus.go store.go task.go topn.go tracer.go DefaultMaxLoad is the default machine max load. DoShuffleReaders determines whether reader tasks should be shuffled in order to avoid potential thundering herd issues. This should only be used in testing when deterministic ordering matters. TODO(marius): make this a session option instead. ErrTaskLost indicates that a Task was in TaskLost state. ProbationTimeout is the amount of time that a machine will remain in probation without being explicitly marked healthy. Eval simultaneously evaluates a set of task graphs from the provided set of roots. Eval uses the provided executor to dispatch tasks when their dependencies have been satisfied. Eval returns on evaluation error or else when all roots are fully evaluated. TODO(marius): we can often stream across shuffle boundaries. This would complicate scheduling, but may be worth doing. type CompileEnv struct { // Writable is true if this environment is writable. It is only exported so // that it can be gob-{en,dec}oded. Writable bool // TaskCached indicates whether a task's results can be read from cache. It // is only exported so that it can be gob-{en,dec}oded. TaskCached map[TaskName]bool } CompileEnv is the environment for compilation. This environment should capture all external state that can affect compilation of an invocation. It is shared across compilations of the same invocation (e.g. on worker nodes) to guarantee consistent compilation results. This is a requirement of bigslice's computation model, as we assume that all nodes share the same view of the task graph. func (e *CompileEnv) Freeze() Freeze freezes the state, marking e no longer writable. 
func (e CompileEnv) IsCached(n TaskName) bool IsCached returns whether the task named n is cached. func (e CompileEnv) IsWritable() bool IsWritable returns whether this environment is writable. func (e CompileEnv) MarkCached(n TaskName) MarkCached marks the task named n as cached. type Executor interface { // Name returns a human-friendly name for this executor. Name() string // Start starts the executor. It is called before evaluation has started // and after all funcs have been registered. Start need not return: // for example, the Bigmachine implementation of Executor uses // Start as an entry point for worker processes. Start(*Session) (shutdown func()) // Run runs a task. The executor sets the state of the task as it // progresses. The task should enter in state TaskWaiting; by the // time Run returns the task state is >= TaskOk. Run(*Task) // Reader returns a locally accessible ReadCloser for the requested task. Reader(*Task, int) sliceio.ReadCloser // Discard discards the storage resources held by a computed task. // Discarding is best-effort, so no error is returned. Discard(context.Context, *Task) // Eventer returns the eventer used to log events relevant to this executor. Eventer() eventlog.Eventer // HandleDebug adds executor-specific debug handlers to the provided // http.ServeMux. This is used to serve diagnostic information relating // to the executor. HandleDebug(handler *http.ServeMux) } Executor defines an interface used to provide implementations of task runners. An Executor is responsible for running single tasks, partitioning their outputs, and instantiating readers to retrieve the output of any given task. An Option represents a session configuration parameter value. Local configures a session with the local in-binary executor. MachineCombiners is a session option that turns on machine-local combine buffers. 
If turned on, each combiner task that belongs to the same shard-set and runs on the same machine combines values into a single, machine-local combine buffer. This can be a big performance optimization for tasks that have low key cardinality, or a key-set with very hot keys. However, due to the way it is implemented, error recovery is currently not implemented for such tasks. func Bigmachine(system bigmachine.System, params ...bigmachine.Param) Option Bigmachine configures a session using the bigmachine executor configured with the provided system. If any params are provided, they are applied to each bigmachine allocated by Bigslice. Eventer configures the session with an Eventer that will be used to log session events (for analytics). MaxLoad configures the session with the provided max machine load. Parallelism configures the session with the provided target parallelism. Status configures the session with a status object to which run statuses are reported. TracePath configures the path to which a trace event file for the session will be written on shutdown. A Result is the output of a Slice evaluation. It is the only type implementing bigslice.Slice that is a legal argument to a bigslice.Func. Discard discards the storage resources held by the subgraph of tasks used to compute r. This should be used to discard results that are no longer needed. If the results are needed by another computation, they will be recomputed. Discarding is best-effort, so no error is returned. Scanner returns a scanner that scans the output. If the output contains multiple shards, they are scanned sequentially. You must call Close on the returned scanner when you are done scanning. You may get and scan multiple scanners concurrently from r. Scope returns the merged metrics scope for the entire task graph represented by the result r. Scope relies on the local values in the scopes of the task graph, and thus are not precise. 
TODO(marius): flow and merge scopes along with data to provide precise metrics.

Session represents a Bigslice compute session. A session shares a binary and executor, and is valid for the run of the binary. A session can run multiple bigslice functions, allowing for iterative computing.

A session is started by the Start method. Some executors may launch multiple copies of the binary: these additional binaries are called workers, and in these, Start does not return.

All functions must be created before Start is called, and must be created in a deterministic order. This is provided by default when functions are created as part of package initialization. Registering toplevel functions this way is both safe and encouraged:

var Computation = bigslice.Func(func(..) (slice Slice) {
    // Build up the computation, parameterized by the function.
    slice = ...
    slice = ...
    return slice
})

// Possibly in another package:
func main() {
    sess := exec.Start()
    if err := sess.Run(ctx, Computation, args...); err != nil {
        log.Fatal(err)
    }
    // Success!
}

Start creates and starts a new bigslice session, configuring it according to the provided options. Only one session may be created in a single binary invocation. The returned session remains valid for the lifetime of the binary. If no executor is configured, the session is configured to use the bigmachine executor.

Discard discards the storage resources held by the subgraph given by roots. This should be used to discard tasks whose results are no longer needed. If the task results are needed by another computation, they will be recomputed. Discarding is best-effort, so no error is returned.

MaxLoad returns the maximum load on each allocated machine.

func (s *Session) Must(ctx context.Context, funcv *bigslice.FuncValue, args ...interface{}) *Result

Must is a version of Run that panics if the computation fails.

Parallelism returns the desired amount of evaluation parallelism.
func (s *Session) Run(ctx context.Context, funcv *bigslice.FuncValue, args ...interface{}) (*Result, error) Run evaluates the slice returned by the bigslice func funcv applied to the provided arguments. Tasks are run by the session's executor. Run returns when the computation has completed, or else on error. It is safe to make concurrent calls to Run; the underlying computation will be performed in parallel. Shutdown tears down resources associated with this session. It should be called when the session is discarded. Status returns the session's status aggregator. type Store interface { // Create returns a writer that populates data for the given // task name and partition. The data is not be available // to Open until the returned closer has been closed. // // TODO(marius): should we allow writes to be discarded as well? Create(ctx context.Context, task TaskName, partition int) (writeCommitter, error) // Open returns a ReadCloser from which the stored contents of the named task // and partition can be read. If the task and partition are not stored, an // error with kind errors.NotExist is returned. The offset specifies the byte // position from which to read. Open(ctx context.Context, task TaskName, partition int, offset int64) (io.ReadCloser, error) // Stat returns metadata for the stored slice. Stat(ctx context.Context, task TaskName, partition int) (sliceInfo, error) // Discard discards the data stored for task and partition. Subsequent calls // to Open for the given (task, partition) will fail. ReadClosers that // already exist may start returning errors, depending on the // implementation. If no such (task, partition) is stored, returns a non-nil // error. Discard(ctx context.Context, task TaskName, partition int) error } Store is an abstraction that stores partitioned data as produced by a task. type Task struct { slicetype.Type // Invocation is the task's invocation, i.e. the Func invocation // from which this task was compiled. 
Invocation execInvocation // Name is the name of the task. Tasks are named uniquely inside each // Bigslice session. Name TaskName // Do starts computation for this task, returning a reader that // computes batches of values on demand. Do is invoked with readers // for the task's dependencies. Do func([]sliceio.Reader) sliceio.Reader // Deps are the task's dependencies. See TaskDep for details. Deps []TaskDep // Partitioner is used to partition the task's output. It will only // be called when NumPartition > 1. Partitioner bigslice.Partitioner // NumPartition is the number of partitions that are output by this task. // If NumPartition > 1, then the task must also define a partitioner. NumPartition int // Combiner specifies an (optional) combiner to use for this task's output. // If a Combiner is not Nil, CombineKey names the combine buffer used: // each combine buffer contains combiner outputs from multiple tasks. // If CombineKey is not set, then per-task buffers are used instead. Combiner slicefunc.Func CombineKey string // Pragma comprises the pragmas of all slice operations that // are pipelined into this task. bigslice.Pragma // Slices is the set of slices to which this task directly contributes. Slices []bigslice.Slice // Group stores an ordered list of peer tasks. If Group is nonempty, // it is guaranteed that these sets of tasks constitute a shuffle // dependency, and share a set of shuffle dependencies. This allows // the evaluator to perform optimizations while tracking such // dependencies. Group []*Task // Scopes is the metrics scope for this task. It is populated with the // metrics produced during execution of this task. Scope metrics.Scope sync.Mutex // Status is a status object to which task status is reported. Status *status.Task // contains filtered or unexported fields } A Task represents a concrete computational task. Tasks form graphs through dependencies; task graphs are compiled from slices. 
Tasks also maintain executor state, and are used to coordinate execution between concurrent evaluators and a single executor (which may be evaluating many tasks concurrently). Tasks thus embed a mutex for coordination and provide a context-aware conditional variable to coordinate runtime state changes. All returns all tasks reachable from t. The returned set of tasks is unique. Broadcast notifies waiters of a state change. Broadcast must only be called while the task's lock is held. Err returns an error if the task's state is >= TaskErr. When the state is > TaskErr, Err returns an error describing the task's failed state, otherwise, t.err is returned. Error sets the task's state to TaskErr and its error to the provided error. Waiters are notified. Errorf formats an error message using fmt.Errorf, sets the task's state to TaskErr and its err to the resulting error message. GraphString returns a schematic string of the task graph rooted at t. Head returns the head task of this task's phase. If the task does not belong to a phase, Head returns the task t. Phase returns the phase to which this task belongs. Set sets the task's state to the provided state and notifies any waiters. State returns the task's current state. String returns a short, human-readable string describing the task's state. func (t *Task) Subscribe(s *TaskSubscriber) Subscribe subscribes s to be notified of any changes to t's state. If s has already been subscribed, no-op. func (t *Task) Unsubscribe(s *TaskSubscriber) Unsubscribe unsubscribes previously subscribe s. s will on longer receive task state change notifications. No-op if s was never subscribed. Wait returns after the next call to Broadcast, or if the context is complete. The task's lock must be held when calling Wait. WaitState returns when the task's state is at least the provided state, or else when the context is done. WriteGraph writes a schematic string of the task graph rooted at t into w. 
type TaskDep struct { // Head holds the underlying task that represents this dependency. // For shuffle dependencies, that task is the head task of the // phase, and the evaluator must expand the phase. Head *Task Partition int // Expand indicates that the task's dependencies for a given // partition should not be merged, but rather passed individually to // the task implementation. Expand bool // CombineKey is an optional label that names the combination key to // be used by this dependency. It is used to name a single combiner // buffer from which is read a number of combined tasks. // // CombineKeys must be provided to tasks that contain combiners. CombineKey string } A TaskDep describes a single dependency for a task. A dependency comprises one or more tasks and the partition number of the task set that must be read at run time. NumTask returns the number of tasks that are comprised by this dependency. Task returns the i'th task comprised by this dependency. type TaskName struct { // InvIndex is the index of the invocation for which the task was compiled. InvIndex uint64 // Op is a unique string describing the operation that is provided // by the task. Op string // Shard and NumShard describe the shard processed by this task // and the total number of shards to be processed. Shard, NumShard int } A TaskName uniquely names a task by its constituent components. Tasks with 0 shards are taken to be combiner tasks: they are machine-local buffers of combiner outputs for some (non-overlapping) subset of shards for a task. IsCombiner returns whether the named task is a combiner task. String returns a canonical representation of the task name, formatted as: {n.Op}@{n.NumShard}:{n.Shard} {n.Op}_combiner TaskState represents the runtime state of a Task. TaskState values are defined so that their magnitudes correspond with task progression. const ( // TaskInit is the initial state of a task. Tasks in state TaskInit // have usually not yet been seen by an executor. 
	TaskInit TaskState = iota

	// TaskWaiting indicates that a task has been scheduled for
	// execution (it is runnable) but has not yet been allocated
	// resources by the executor.
	TaskWaiting

	// TaskRunning is the state of a task that's currently being run or
	// discarded. After a task is in state TaskRunning, it can only enter a
	// larger-valued state.
	TaskRunning

	// TaskOk indicates that a task has successfully completed;
	// the task's results are available to dependent tasks.
	//
	// All TaskState values greater than TaskOk indicate task
	// errors.
	TaskOk

	// TaskErr indicates that the task experienced a failure while
	// running.
	TaskErr

	// TaskLost indicates that the task was lost, usually because
	// the machine to which the task was assigned failed.
	TaskLost
)

String returns the task's state as an upper-case string.

TaskSubscriber is subscribed to a Task using Subscribe. It is then notified whenever the Task state changes. This is useful for efficiently observing the state changes of many tasks.

func NewTaskSubscriber() *TaskSubscriber

NewTaskSubscriber returns a new TaskSubscriber. It needs to be subscribed to a Task with Subscribe for it to be notified of task state changes.

func (s *TaskSubscriber) Notify(task *Task)

Notify notifies s of a task whose state has changed.

func (s *TaskSubscriber) Ready() <-chan struct{}

Ready returns a channel that is closed if a subsequent call to Tasks will return a non-nil slice.

func (s *TaskSubscriber) Tasks() []*Task

Tasks returns the tasks whose state has changed since the last call to Tasks.

Package exec imports 55 packages and is imported by 4 packages. Updated 2020-09-18.
https://godoc.org/github.com/grailbio/bigslice/exec
Hello,

we want to assign an issue to a user from a custom field (from another plugin (CRM); the field only displays the full display name) using the Behaviours/ScriptRunner plugin. See image: (Attaching images here was not possible?! -> Server Error)

I think we need to get the "userName" / "userkey" (e.g. FF) from the custom field holding the "userDisplayName" (e.g. Fred Flintstone), and then set the Assignee from it. I hope someone can help. Or is the "Issue Alternative Assignee" plugin capable of that? (We have Jira v7.1.1.)

Kind regards
Stefan

Yes, this "findUserNames" seems to be what we need - but we are too unskilled to implement this in our behaviour. Can you (or someone else) please give us an example of which classes we have to import and what the syntax should be (we are no programmers)?

This is the (not working) server-side script we have right now:

    import static com.atlassian.jira.issue.IssueFieldConstants.*
    import com.atlassian.jira.user.ApplicationUsers

    // Line below does not work
    //getFieldById(ASSIGNEE).setFormValue(getFieldById("customfield_12239").getValue())

    // Line below is for testing only
    getFieldById("comment").setFormValue(getFieldById("customfield_12239").getValue())

Thanks in advance

Could you add your supplementary questions as comments rather than as answers? I think you are looking for an example of how to look up a user by display name, so here is one (I don't know what the code above is supposed to do):

    import com.atlassian.jira.bc.user.search.UserSearchService
    import com.atlassian.jira.component.ComponentAccessor

    def userSearchService = ComponentAccessor.getComponent(UserSearchService)
    def users = userSearchService.findUsersByFullName("Mr Admin")
    if (users) {
        users.first() // an ApplicationUser (in Jira 7)
    } else {
        // no users found with that display name
    }

Oh sorry, will do next time(s).
This is exactly what we need =) An example. Thank you & kind regards
Stefan

I don't know anything about that plugin, but most likely it will give you an ApplicationUser object if you call issue.getCustomFieldValue on that CF. That would be the way to go rather than trying to look up a key by display name.

Hello Jamie, thanks for your answer. We already did that, but the field only returns a string (equal to the full display name). So we somehow need to get the user behind that string. When we write the value into a comment field, we get:

    Form field ID: customfield_12239, value: Fred Flintstone

When we try to put this value into the assignee field, it's not working.

Regards

You will need to use one of these methods.
https://community.atlassian.com/t5/Marketplace-Apps-questions/get-quot-userName-quot-quot-userkey-quot-from-custom-field-with/qaq-p/417353