text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Replacing get_absolute_url
This page is a work in progress - I'm still figuring out the extent of the problem before I start working out a solution.
The problem
It's often useful for a model to "know" it's URL. This is especially true for sites that follow RESTful principles, where any entity within the site should have one and only one canonical URL.
It's also useful to keep URL logic in the same place as much as possible. Django's {% url %} template tag and reverse() function solve a slightly different problem - they resolve URLs for view functions, not for individual model objects, and treat the URLconf as the single point of truth for URLs. {% url myapp.views.profile user.id %} isn't as pragmatic as {{ user.get_absolute_url }}, since if we change the profile-view to take a username instead of a user ID in the URL we'll have to go back and update all of our templates.
Being able to get the URL for a model is also useful outside of the template system. Django's admin, syndication and sitemaps modules all attempt to derive a URL for a model at various points, currently using the get_absolute_url method.
The current mechanism for making model's aware of their URL is the semi-standardised get_absolute_url method. If you provide this method on your model class, a number of different places in Django will use it to create URLs. You can also over-ride this using settings.ABSOLUTE_URL_OVERRIDES.
Unfortunately, get_absolute_url is mis-named. An "absolute" URL should be expected to include the protocol and domain, but in most cases get_absolute_url just returns the path. It was proposed to rename get_absolute_url to get_url_path, but this doesn't make sense either as some objects DO return a full URL from get_absolute_url (and in fact some places in Django check to see if the return value starts with http:// and behave differently as a result).
From this, we can derive that there are actually two important URL parts for a given model:
- The full URL, including protocol and domain. This is needed for the following cases:
- links in e-mails, e.g. a "click here to activate your account" link
- URLs included in syndication feeds
- links used for things like "share this page on del.icio.us" widgets
- links from the admin to "this object live on the site" where the admin is hosted on a separate domain or subdomain from the live site
- The path component of the URL. This is needed for internal links - it's a waste of bytes to jam the full URL in a regular link when a path could be used instead.
A third type of URL - URLs relative to the current page - is not being considered here because of the complexity involved in getting it right. That said, it would be possible to automatically derive a relative URL using the full path and a request-aware template tag.
So, for a given model we need a reliable way of determining its path on the site AND its full URL including domain. The path can be derived from the full URL, and sometimes vice versa depending on how the site's domain relates to the model objects in question.
Django currently uses django.contrib.sites in a number of places to attempt to derive a complete URL from just a path, but this has its own problems. The sites framework assumes the presence of a number of things: a django_site table, a SITE_ID in the settings and a record corresponding to that SITE_ID. This arrangement does not always make sense - consider the case of a site which provides a unique subdomain for every one of the site's users (simonwillison.myopenid.com for example). Additionally, making users add a record to the sites table when they start their project is Yet Another Step, and one that many people ignore. Finally, the site system doesn't really take development / staging / production environments in to account. Handling these properly requires additional custom code, which often ends up working around the sites system entirely.
Finally, it's important that places that use get_absolute_url (such as the admin, sitemaps, syndication etc) always provide an over-ridable alternative. Syndication feeds may wish to include extra hit-tracking material on URLs, admin sites may wish to link to staging or production depending on other criteria etc. At the moment some but not all of these tools provide over-riding mechanisms, but without any consistency as to what they are called or how they work.
It bares repeating that the problem of turning a path returned by get_absolute_url in to a full URL is a very real one: Django actually solves it in a number of places, each one taking a slightly different approach, none of which are really ideal. The fact that it's being solved multiple times and in multiple ways suggests a strong need for a single, reliable solution.
Current uses of get_absolute_url()
By grepping the Django source code, I've identified the following places where get_absolute_url is used:
grep -r get_absolute_url django | grep -v ".svn" | grep -v '.pyc'
- contrib/admin/options.py: Uses hasattr(obj, 'get_absolute_url') to populate 'has_absolute_url' and 'show_url' properties which are passed through to templates and used to show links to that object on the actual site.
- contrib/auth/models.py: Defines get_absolute_url on the User class to be /users/{{ username }}/ - this may be a bug since that URL is not defined by default anywhere in Django.
- contrib/comments/models.py: Defines get_absolute_url on the Comment and FreeComment classes, to be the get_absolute_url of the comment's content object + '#c' + the comment's ID.
- contrib/flatpages/models.py: Defined on FlatPage model, returns this.url (which is managed in the admin)
- contrib/sitemaps/init.py: Sitemap.location(self, obj) uses obj.get_absolute_url() by default to figure out the URL to include in the sitemap - designed to be over-ridden
- contrib/syndication/feeds.py: The default Feed.item_link(self, item) method (which is designed to be over-ridden) uses get_absolute_url, and raises an informative exception if it's not available. It also uses its own add_domain() function along with current_site.domain, which in turn uses Site.objects.get_current() and falls back on RequestSite(self.request) to figure out the full URL (both Site and RequestSite come from the django.contrib.sites package).
- db/models/base.py: Takes get_absolute_url in to account when constructing the model class - this is where settings.ABSOLUTE_URL_OVERRIDES setting has its affect.
- views/defaults.py: The thoroughly magic shorcut(request, content_type_id, object_id) view, which attempts to figure out a full URL to something based on a content_type and an object_id, makes extensive use of get_absolute_url - including behaving differently if the return value starts with http://.
- views/generic/create_update.py: Both create and update views default to redirecting the user to get_absolute_url() if and only if post_save_redirect has not been configured for that view.
Finally, in the documentation:
- docs/contributing.txt - mentioned in coding standards, model ordering section
- docs/generic_views.txt
- docs/model-api.txt - lots of places, including "It's good practice to use get_absolute_url() in templates..."
- docs/settings.txt - in docs for ABSOLUTE_URL_OVERRIDES
- docs/sitemaps.txt
- docs/sites.txt - referred to as a "convention"
- docs/syndication_feeds.txt
- docs/templates.txt: - in an example
- docs/unicode.txt - "Taking care in get_absolute_url..."
- docs/url_dispatch.txt
And in the tests:
ABSOLUTE_URL_OVERRIDES is not tested.
get_absolute_url is referenced in:
- tests/regressiontests/views/models.py
- tests/regressiontests/views/tests/defaults.py
- tests/regressiontests/views/tests/generic/create_update.py
- tests/regressiontests/views/urls.py
The solution
I'm currently leaning towards two complementary methods:
- get_url_path() - returns the URL's path component, starting at the root of the site - e.g. "/blog/2008/Aug/11/slug/"
- get_url() - returns the full URL, including the protocol and domain - e.g."
Users should be able to define either or both of these methods. If they define one but not the other, the default implementation of the undefined method can attempt to figure it out based on the method that IS defined. This should actually work pretty well - get_url_path() is trival to derive from get_url(), whereas for sites that only exist on one domain get_url() could simply glue that domain (defined in settings.py, or derived from SITE_ID and the sites framework) on to get_url_path().
I don't think this needs to be all that complicated, and in fact the above scheme could allow us to delete a whole bunch of weird special case code scattered throughout Django.
Update 11th September 2008: Here's a prototype implementation (as a mixin class):
The code for the prototype mixin is as follows:
from django.conf import settings import urlparse class UrlMixin(object): def get_url(self): if hasattr(self.get_url_path, 'dont_recurse'): raise NotImplemented try: path = self.get_url_path() except NotImplemented: raise # Should we look up a related site? #if getattr(self._meta, 'url_by_site'): prefix = getattr(settings, 'DEFAULT_URL_PREFIX', '') return prefix + path get_url.dont_recurse = True def get_url_path(self): if hasattr(self.get_url, 'dont_recurse'): raise NotImplemented try: url = self.get_url() except NotImplemented: raise bits = urlparse.urlparse(url) return urlparse.urlunparse(('', '') + bits[2:]) get_url_path.dont_recurse = True
And you use it like this:
from django.db import models from django_urls.base import UrlMixin class ArticleWithPathDefined(models.Model, UrlMixin): slug = models.SlugField() def get_url_path(self): return '/articles/%s/' % self.slug class AssetWithUrlDefined(models.Model, UrlMixin): domain = models.CharField(max_length=30) filename = models.CharField(max_length = 30) def get_url(self): return '' % (self.domain, self.filename) | https://code.djangoproject.com/wiki/ReplacingGetAbsoluteUrl?version=8 | CC-MAIN-2015-32 | refinedweb | 1,579 | 55.84 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
I used DrawLine2D() to draw the "line", but it was blocked by the other objects. I hope "line" not obscured by any objects. (it's Tool_Plugin)
this is code:()
####Add code
front_v = [c4d.Vector(mx,my,0)]
current_v = [c4d.Vector(mx,my,0)]
num = 0
bd.SetPen(c4d.Vector(0,0,1.0))
####
while result==c4d.MOUSEDRAGRESULT_CONTINUE:
mx += dx
my += dy
####Add code
current_v[num] = c4d.Vector(mx,my,0)
num += 1
####
#continue if user doesnt move the mouse anymore
if dx==0.0 and dy==0.0:
result, dx, dy, channel = win.MouseDrag()
num -= 1
continue
c4d.DrawViews(c4d.DA_ONLY_ACTIVE_VIEW|c4d.DA_NO_THREAD|c4d.DA_NO_ANIMATION)
result, dx, dy, channel = win.MouseDrag()
####Add code
for index in xrange(num):
bd.DrawLine2D(front_v[index],current_v[index])
front_v.append(c4d.Vector(mx,my,0))
current_v.append(c4d.Vector(mx,my,0))
####
c4d.DrawViews(c4d.DA_ONLY_ACTIVE_VIEW|c4d.DA_NO_THREAD|c4d.DA_NO_ANIMATION)
return True
Thanks for any help!
Hello,
You have to use the Draw function from the tooldata
You can store the points in an array and than use that array in your draw function.
try to check if the mouse have move "enough" instead of 0 to avoid creating too much points in the array.
if dx < 1.0 and dy < 1.0:
result, dx, dy, channel = win.MouseDrag()
continue
and once you have your array of points you can juste iterate it and draw lines
def Draw(self, doc, data, bd, bh, bt, flags):
if not flags:
# Sets the pen color
bd.SetPen(c4d.Vector(0, 0, 1.0))
# Iterates through the array getting columns 1_2, 2_3, ... n-1_n
for pa,pb in zip(self.data,self.data[1:]):
# Draw a line point by point
bd.DrawLine2D(pa, pb)
return c4d.TOOLDRAW_NONE
Cheers
Manuel
@m_magalhaes Thanks for your help, I tried to store and then draw, but the result of this is that the lines are visible after drawing. My goal is to draw lines in real time, so I use DrawLine2D() and DrawView() in the While,make the view continuously draw the line.
But my main problem has not been solved yet.
1 (the main problem), how to make the lines I draw are not obscured by the cube. as pic above
2 (minor problem), close Horizon in the window filter, the line will be abnormal, is this a bug or do I have to open Horizon? as pic below
ok,
1 - you can call c4d.DrawViews in your while routine in mouseInput and i've change my code in my Draw function so it check for flags to draw the line.
2 - can you share your mouseInput() and Draw() function so i have a chance to reproduce this bug ?
cheers
Manuel
@m_magalhaes Thanks ,it works well,I modified the code according to your answer.
1 draw line not obscured by any object.solved
2 before i just draw line in MouseImput() (not add Draw() funtion),as above pic/code, Now i draw line in Draw(),it works well.solved | https://plugincafe.maxon.net/topic/11530/drawline-obscured-by-objects | CC-MAIN-2021-25 | refinedweb | 541 | 68.26 |
All,
After spending a couple hours with the latest version of Opera, reading posts, not finding any answers, seeing numerous posts on this topic…. The issue is when a user (coming from one of the various mainstream browsers) tries to enter an FQDN into Omnibox and the result is a device or namespace being sent to a 3rd party. In today’s environments where concerns around security are on the rise, this can be considered a form of data exfiltration. eg: exposing sensitive namespace(s), device names, etc. to a 3rd party, in addition could be in violation of a security policy.
Unfortunately, there does not seem to be any configuration option to protect against this behavior. At least in Chrome (which Opera appears to share roots for Omnibox), you can specify a new search engine and make it the default: URL being “” which simply causes the request to fail. Opera prevents creation of a new search being configured as the default, thus removing any possibility of protecting against this condition. Furthermore, Chrome and many browsers remember visited sites and it's faster to start typing the FQDN and then hit <enter> on the first entry (as you've entered enough data for the entry to match) vs. scanning through bookmarks (time savings).
Unless there’s a solution that isn’t clearly documented, the only option appears to be using one of the default search engines and blackhole’ing the entire namespace in one’s DNS servers. eg: create a zone ( as example) and make bing the default search engine. This at least prevents Opera’s behavior of trying to submit data to a 3rd party. It’s a crude workaround, but protects against this behavior.
If there’s a configuration option somewhere to protect against this behavior, would like to know.
Thanks! | https://forums.opera.com/user/nobody2 | CC-MAIN-2020-34 | refinedweb | 303 | 60.14 |
Share code for beginners like me the Python help function
I have just been playing around with Pythons help function. While obvious for many, not all of us. I made a stupid example below just to give the idea. But if in your classes and functions you use doc strings, can help you to get an overview of what you are writing. Sometimes I write classes with so many methods, because I chop and change. This simple Python function can help put things into perspective.
I know it is not about Pythonista directly. But many learning Python and Pythonistia here.
# coding: utf-8 class SolveTheWorldsProblems(object): ''' Description: its not possible Args: the meaning of life Returns: -optimisim Raises: more questions that can be answered ''' def __init__(self, meaning_of_life): ''' the comment for the __init__ method ''' self.meaning_of_life = meaning_of_life def result(self): ''' the comment for the result method ''' return ('optimism') if __name__ == '__main__': stwp = SolveTheWorldsProblems(666) print help(stwp)
- Webmaster4o
Didn't know about that. Interesting.
I forgot to mention the obvious, is what I normally associate help() with. That is help(class.method).
@omz, I mention this but understand very low on the priority list, but 2 things I think would be nice in the editor.
A toggle switch in the popup sheet of the file class/methods/functions to show hide the prototype. It is nice and clean just to see the method/func name, but sometimes would save time if could see the prototype also.
Not as important. But when a user class selected and your quick help button is selected that you could provide the same information as help() does in the popup.
Again, I understand these are not burning issues. But ultimately anything that can help coding on small screens a bonus. | https://forum.omz-software.com/topic/2291/share-code-for-beginners-like-me-the-python-help-function | CC-MAIN-2021-17 | refinedweb | 295 | 64.91 |
//**************************************
//INCLUDE files for :Send a TCP packet to a server
//**************************************
/* The Includes */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
//**************************************
// Name: Send a TCP packet to a server
// Description:/* The purpose of this article is to help out people who know the basics of C but want to start learning TCP controls in C. This program will connect to a server and send a TCP packet containing "La la la la". */
// By: Markus Delves (from psc cd)
//
// Inputs:usage: program_name <ip address> <port>
//
// Returns:The program will tell you if it was successful or not
//
// Side Effects:None known
//**************************************
int sock;
// Start main with command line arguments
int main(int argc, char *argv[]) {
// Get ready for the TCP stuff
struct hostent *he; // Used for DNS lookup
struct sockaddr_in blah; // inet addr stuff
// Create a varible for our packet
// Remember, TCP packets max at 1024
char packet[1024];
// A varible to hold the servers' address
char *address;
// A varible for the port
int port;
// Extra vars
int i;
// Make sure two arguments were supplied
if (argc != 3) {
// Print how-to use the program then exit
fprintf(stderr, "usage: %s <ip address> <port>\n",argv[0]);
return(-1);
}
// We know there are two arguments
// so let's use them.
address = argv[1];
port = atoi(argv[2]);
// Create the unconnected socket
sock = socket (AF_INET, SOCK_STREAM, 0);
//Set some settings
blah.sin_family = AF_INET; //we're using inet
blah.sin_port = htons (port) //set the port
he = gethostbyname (address); //set the address
fprintf(stderr, "Attempting a connection with %s on port %d\n", address, port);
// Is the ip/hostname working?
if (!he)
{
if ((blah.sin_addr.s_addr = inet.addr (address)) == ADDR_NONE)
return(-1);
} else {
bcopy (he->h_addr, (struct in_addr *) &blah.sin_addr, he->h_length);
}
// Did they accept us?
if (connect (sock, (struct sockaddr *) &blah, sizeof
(blah)) < 0)
{
fprintf(stderr, "Connection refused by remote host.\n");
return(-1);
}
//Create the packet
sprintf(packet, "La la la la");
//And send it
write (sock, packet, strlen(packet));
close (sock); // Close the connection
fprintf(stderr, "Operation Completed. Exiting...");
}
<. | http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=717&lngWId=3 | CC-MAIN-2018-30 | refinedweb | 367 | 66.03 |
About python : Actual-meaning-of-shellTrue-in-subprocess
Question Detail
I am calling different processes with the subprocess module. However, I have a question.
In the following codes:
callProcess = subprocess.Popen([‘ls’, ‘-l’], shell=True)
and
callProcess = subprocess.Popen([‘ls’, ‘.
Question Answer
The benefit of not calling via the shell is that you are not invoking a ‘mystery(‘echo $HOME’)
Traceback (most recent call last):
…
OSError: [Errno 2] No such file or directory
>>>
>>> subprocess.call(‘echo
……………………………………………………
An example where things could go wrong with Shell=True is shown here
>>> from subprocess import call
>>> filename = input(“What file would you like to display?\n”)
What file would you like to display?
non_existent; rm -rf / # THIS WILL DELETE EVERYTHING IN ROOT PARTITION!!!
>>> call(“cat ” + filename, shell=True) # Uh-oh. This will end badly…
Check the doc here: subprocess.call()
……………………………………………………
Executing programs through the shell means that all user input passed to the program is interpreted according to the syntax and semantic rules of the invoked shell. At best, this only causes inconvenience to the user, because the user has to obey these rules. For instance, paths containing special shell characters like quotation marks or blanks must be escaped. At worst, it causes security leaks, because the user can execute arbitrary programs.
shell=True is sometimes convenient to make use of specific shell features like word splitting or parameter expansion. However, if such a feature is required, make use of other modules are given to you (e.g. os.path.expandvars() for parameter expansion or shlex for word splitting). This means more work, but avoids other problems.
In short: Avoid shell=True by all means.
……………………………………………………
The other answers here adequately explain the security caveats which are also mentioned in the subprocess documentation. But in addition to that, the overhead of starting a shell to start the program you want to run is often unnecessary and definitely silly for situations where you don’t actually use any of the shell’s functionality. Moreover, the additional hidden complexity should scare you, especially if you are not very familiar with the shell or the services it provides.
Where the interactions with the shell are nontrivial, you now require the reader and maintainer of the Python script (which may or may not be your future self) to understand both Python and shell script. Remember the Python motto “explicit is better than implicit”; even when the Python code is going to be somewhat more complex than the equivalent (and often very terse) shell script, you might be better off removing the shell and replacing the functionality with native Python constructs. Minimizing the work done in an external process and keeping control within your own code as far as possible is often a good idea simply because it improves visibility and reduces the risks of — wanted or unwanted — side effects.
Wildcard expansion, variable interpolation, and redirection are all simple to replace with native Python constructs. A complex shell pipeline where parts or all cannot be reasonably rewritten in Python would be the one situation where perhaps you could consider using the shell. You should still make sure you understand the performance and security implications.
In the trivial case, to avoid shell=True, simply replace
subprocess.Popen(“command -with -options ‘like this’ and\\ an\\ argument”, shell=True)
with
subprocess.Popen([‘command’, ‘-with’,’-options’, ‘like this’, ‘and an argument’])
Notice how the first argument is a list of strings to pass to execvp(), and how quoting strings and backslash-escaping shell metacharacters is generally not necessary (or useful, or correct).
Maybe see also When to wrap quotes around a shell variable?
If you don’t want to figure this out yourself, the shlex.split() function can do this for you. It’s part of the Python standard library, but of course, if your shell command string is static, you can just run it once, during development, and paste the result into your script.
As an aside, you very often want to avoid Popen if one of the simpler wrappers in the subprocess package does what you want. If you have a recent enough Python, you should probably use subprocess.run.
With check=True it will fail if the command you ran failed.
With stdout=subprocess.PIPE it will capture the command’s output.
With text=True (or somewhat obscurely, with the synonym universal_newlines=True) it will decode output into a proper Unicode string (it’s just bytes in the system encoding otherwise, on Python 3).
If not, for many tasks, you want check_output to obtain the output from a command, whilst checking that it succeeded, or check_call if there is no output to collect.
I’ll close with a quote from David Korn: “It’s easier to write a portable shell than a portable shell script.” Even subprocess.run(‘echo “$HOME”‘, shell=True) is not portable to Windows.
……………………………………………………
Anwser above explains it correctly, but not straight enough.
Let use ps command to see what happens.
import time
import subprocess
s = subprocess.Popen([“sleep 100”], shell=True)
print(“start”)
print(s.pid)
time.sleep(5)
s.kill()
print(“finish”)
Run it, and shows
start
832758
finish
You can then use ps -auxf > 1 before finish, and then ps -auxf > 2 after finish. Here is the output
1
cy 71209 0.0 0.0 9184 4580 pts/6 Ss Oct20 0:00 | \_ /bin/bash
cy 832757 0.2 0.0 13324 9600 pts/6 S+ 19:31 0:00 | | \_ python /home/cy/Desktop/test.py
cy 832758 0.0 0.0 2616 612 pts/6 S+ 19:31 0:00 | | \_ /bin/sh -c sleep 100
cy 832759 0.0 0.0 5448 532 pts/6 S+ 19:31 0:00 | | \_ sleep 100
See? Instead of directly running sleep 100. it actually runs /bin/sh. and the pid it prints out is actually the pid of /bin/sh. After if you call s.kill(), it kills /bin/sh but sleep is still there.
2
cy 69369 0.0 0.0 533764 8160 ? Ssl Oct20 0:12 \_ /usr/libexec/xdg-desktop-portal
cy 69411 0.0 0.0 491652 14856 ? Ssl Oct20 0:04 \_ /usr/libexec/xdg-desktop-portal-gtk
cy 832646 0.0 0.0 5448 596 pts/6 S 19:30 0:00 \_ sleep 100
So the next question is , what can /bin/sh do? Every linux user knows it, heard it, and uses it. But i bet there are so many people who doesn’t really understand what is shell indeed. Maybe you also hear /bin/bash, they’re similar.
One obvious function of shell is for users convenience to run linux application. because of shell programm like sh or bash, you can directly use command like ls rather than /usr/bin/ls. it will search where ls is and runs it for you.
Other function is it will interpret string after $ as environment variable. You can compare these two python script to findout yourself.
subprocess.call([“echo $PATH”], shell=True)
subprocess.call([“echo”, “$PATH”])
And the most important, it makes possible to run linux command as script. Such as if else are introduced by shell. it’s not native linux command
……………………………………………………
let’s assume you are using shell=False and providing the command as a list. And some malicious user tried injecting an ‘rm’ command.
You will see, that ‘rm’ will be interpreted as an argument and effectively ‘ls’ will try to find a file called ‘rm’
>>> subprocess.run([‘ls’,’-ld’,’/home’,’rm’,’/etc/passwd’])
ls: rm: No such file or directory
-rw-r–r– 1 root root 1172 May 28 2020 /etc/passwd
drwxr-xr-x 2 root root 4096 May 29 2020 /home
CompletedProcess(args=[‘ls’, ‘-ld’, ‘/home’, ‘rm’, ‘/etc/passwd’], returncode=1)
shell=False is not a secure by default, if you don’t control the input properly. You can still execute dangerous commands.
>>> subprocess.run([‘rm’,’-rf’,’/home’])
CompletedProcess(args=[‘rm’, ‘-rf’, ‘/home’], returncode=0)
>>> subprocess.run([‘ls’,’-ld’,’/home’])
ls: /home: No such file or directory
CompletedProcess(args=[‘ls’, ‘-ld’, ‘/home’], returncode=1)
>>>
I am writing most of my applications in container environments, I know which shell is being invoked and i am not taking any user input.
So in my use case, I see no security risk. And it is much easier creating long string of commands. Hope I am not wrong. | https://howtofusion.com/about-python-actual-meaning-of-shelltrue-in-subprocess.html | CC-MAIN-2022-40 | refinedweb | 1,394 | 65.32 |
I would see support of all argument kinds support in any proposal for a new callable: positional only args, named args, keyword-only, *args and **kwargs. The exact notation in probably less important than missing functionality.
On Sat, Nov 28, 2020, 18:50 Abdulla Al Kathiri alkathiri.abdulla@gmail.com wrote:
I don’t know if this has been discussed before.
Similar to PEP 645 idea of writing "Optional[type]" as “type?”, I propose we write "Callable[[type1, type2, ...], type3]” as “[type1, type2, … -> type3]”. Look at the two examples below and see which one looks better to the eyes:
def func1(f: typing.Callable[[str, int], str], arg1: str, arg2: int) -> str: return f(arg1, arg2)
def func2(f: [str, int-> str], arg1: str, arg2: int) -> str: return f(arg1, arg2)
There is less clutter especially if we have nested Callables.
e.g., f: Callable[[str, int], Callable[[int,…], str]] becomes f: [str, int -> [int, ... -> str]]
Callable without zero arguments.. f: Callable[[], str] would become f: [ -> str]
Equivalent to Callable alone without caring about arguments and the return value would be [… -> typing.Any] or [… -> ]
Let’s say we have a function that accepts a decorator as an argument. This might not be useful to do, but I want to show case how it would be easier to read. The old way would be:
def decorator(f: Callable[…, int]) -> Callable[…, tuple[int, str]]: def wrapper(*args, **kwargs) -> tuple[int, str]: text = “some text” res = f(*args, **kwargs) return res, text return wrapper
def function(decorator: Callable[[Callable[…, int]], Callable[…, tuple[int, str]]], decorated_on: Callable[…, int]) -> Callable[…, tuple[int, str]]: wrapper = decorator(decorated_on) return wrapper
The new way is as follows:
def decorator(f: [… -> int]) -> [… -> tuple[int, str]]: def wrapper(*args, **kwargs) -> tuple[int, str]: text = “some text” res = f(*args, **kwargs) return rest, text return wrapper
def function(decorator: [ [… -> int] -> [… -> tuple[int, str]]], decorated_on: [… -> int]) -> [… -> tuple[int, str]]: wrapper = decorator(decorated_on) return wrapper
I saw something similar in Pylance type checker (VSC extension) when you hover over an annotated function that has Callable as an argument or a return value, but they don’t seem to use brackets to mark the beginning and end of the callable, which could be hard to follow mentally (see screenshot below)
Personally, I think it would be easier if Pylance wrote the hint like the following:
(function) function: (decorator: [p0:[*args: Any, **kwargs: Any -> int ]] -> [*args: Any, **kwargs: Any -> tuple[int, str]], decorated_on: [*args: Any, **kwargs: Any -> int]) -> [*args: Any, **kwargs: Any -> tuple[int, str]]
Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-leave@python.org Message archived at... Code of Conduct: | https://mail.python.org/archives/list/python-ideas@python.org/message/GMTGXCV2WLAXQSQRFJCWSCXO4UTSLV5W/ | CC-MAIN-2021-17 | refinedweb | 445 | 57.71 |
THIS PUBLICATION IS NOT TO BE SOLD. It is a free educational service in the public interest, published by the United Church of God, an International Association.
The Book of Revelation: Is It Relevant Today?
The Book of Revelation Unveiled
Contents
The Book of Revelation:
Is It Relevant Today?
3 The Book of Revelation: Is It Relevant Today?
I.
5 Keys to Understanding Revelation 17 God’s Church in Prophecy 25 The Book of Revelation’s Divine Authority 28 The Seals of the Prophetic Scroll 39 The Day of the Lord Finally Arrives 48 Satan’s War Against the People of God 60 The Destruction of Satan’s Kingdom
Photo illustration by Shaun Venish/Corel Professional Photos
73 The Everlasting Kingdom of God
Many people believe the book of Revelation is all about bad news. Although it reveals where our actions and decisions will take us, it also shows how mankind will finally experience a world of peace.
Keys to Understanding Revelation
The Book of Revelation Unveiled
Keys to
Understanding Revelation
W
U.S. Air Force
Where is mankind’s race to develop ever-more-destructive weapons taking us? The book of Revelation describes how we will ultimately reap what we have sown, bringing on ourselves unimaginable human suffering before God intervenes..
hy, ‘My lord, what shall be the end of these things?’ And he said, ‘Go
The Book of Revelation Unveiled
Keys to Understanding Revelation
then broke those seals and opened the scroll. “And I saw in the right hand of Him who sat on the throne [God the Father] … But one of the elders said to me, ‘Do.
The Story Flow of the Book of Revelation
Chapter Outline of the Book of Revelation
Seven seals
Chapter
Story Flow)
1 2-3 4-5 6 7
Introduction Prelude—setting First six seals 144,000 and the great multitude
8-10 11 12 13 14 15-16 17-18 19 20 21-22
Seventh seal opened: The trumpet plagues The two witnesses The seven last plagues The return of Jesus Christ The Millennium The new heaven and new earth
Seven trumpets
Insets* Message to the seven churches
The true Church The two beasts The three messages The false church
*Several chapters in the book are insets. Although most of the book’s chapters flow in chronological order, these chapters describe background events and conditions that are not part of the story flow and may span centuries.
Keys to Understanding Revelation
The Book of Revelation Unveiled
Emperor Nero had already falsely branded Christians as the perpetrators of the great fire in Rome. Their future looked grim.
Religious and political setting of Revelation iStockphoto
10
Keys to Understanding Revelation
The Book of Revelation Unveiled
of the Holy Scriptures regarded his existence and power as an unquestionable reality. They reveal him as the unseen driving influence behind evil and suffering. (For clear evidence of his existence, download or request
The apostle John was imprisoned on the island of Patmos, where he wrote the book of Revelation. Christians that the book of Revelation includes both “the things which are, and the things which will take place after this” (Revelation 1:19). Its prophetic fulfillments began in the days of the apostles and extend to our day and beyond..
Scott Ashley
The Day of the Lord in prophecy
11
As The Bible Knowledge Commentary explains: ,
12
Keys to Understanding Revelation
The Book of Revelation Unveiled
just before, the return of Christ. He will oversee the final destruction of the satanic system labeled in Revelation as Babylon the Great., ‘Who.
“Then the Lord will go forth and fight against those nations, as He fights in the day of battle. And in that day His feet will stand on the Mount of Olives, which faces Jerusalem on the east.”
Pleas of God’s people answered
iStockphoto
Zechariah graphically described Christ’s return: “Behold, the day of the Lord is coming … I will gather all the nations to battle against Jeru- salem …
13, ‘
14
The Book of Revelation Unveiled
upon the golden altar which was before the throne” (Revelation 8:3). What prayer does God hear from His true servants over and over again? “And they cried with a loud voice, saying, ‘Howu- salem …” ).
Keys to Understanding Revelation
15
The temple in Jerusalem was the center of ancient Israel’s worship of God. God’s presence was manifested there.,
16
The Book of Revelation Unveiled
God’s Church in Prophecy
God’s Church.
17
F
in Prophecy
or?,” page 18.)).
Revelation was written specifically for God’s servants, the Church of God. So it should come as no surprise that the Church itself is the primary topic of discussion in the first three chapters.
18
The Book of Revelation Unveiled
What Is the Church?
M
any people have misconceptions about what the word church means. Most equate it with a building. But throughout the Scriptures, church and congregation refer to people, never to a building. In fact, we find several verses in the New Testament where the “church” (people) were meeting inside certain members’ homes (buildings) in the local area (Romans 16:3-5;). It is a spiritually transformed body of believers not limited to a particular locale, organization or denomination. The Holman Bible Dictionary, in its article “Church,” explains the background of the word), soldiers (Numbers 22:4), or the people of God (Deuteronomy 9:10).)] that was common in the Old Testament for the people of God reveals their understanding of the continuity that links the Old and New Testaments. The early Christians, Jew and Gentile, understood themselves to be people of the God who had revealed Himself in the Old Testament (Hebrews 1:1-2), as the true children of Israel (Romans 2:28-29) with Abraham as their father (Romans 4:1-25), and as the people of the New Covenant prophesied in the Old Testament (Hebrews 8:1-13). ), and also of the entire people of God, such as in the affirmation that Christ is ‘the head over all things to the church, which is his body’ (Ephesians 1:22-23)” (emphasis added). To better understand how the Bible defines and describes the Church, please request your free copy of the booklet The Church Jesus Built.
God’s Church in Prophecy
19 ‘
20
The Book of Revelation Unveiled
21
God’s Church in Prophecy
Duality in Bible Prophecy
P
rophetic.
22
God’s Church in Prophecy
The Book of Revelation Unveiled Civi- lization: Part III, Caesar and Christ, 1972, p. 594)., ‘I.
The Church’s battle with Satan
But there is an ominous side to Christ’s evaluation
The church in the city of Ephesus, in modern-day Turkey, was the first of seven churches addressed in Revelation. Long since abandoned, Ephesus was a thriving city in John’s day and is mentioned often in the New Testament.
Warnings of a false Christianity
Scott Ashley
23;
24
The Book of Revelation Unveiled request your free copy of the booklet The Church Jesus Built.) The prophecies given by Christ and His apostles concerning the development of a counterfeit Christianity came to pass just as they had predicted. This counterfeit even now dominates the world’s religious scene—but to nowhere near the extent it will in the coming years. Now let’s examine why we should have confidence in the other prophecies contained in the book of Revelation.
The Book of Revelation’s Divine Authority
25
The Book of Revelation’s
Divine Authority
O
ver is in a class of its own. Its dramatic symbolism comes from the divine author of all the other books of the Bible, not from the imaginations of John. John simply recorded what Jesus Christ revealed to him.
26
The Book of Revelation Unveiled
The Book of Revelation’s Divine Authority
27
The scroll, now in Christ’s hands, contains the answer to the continuous prayers of God’s people for justice and deliverance and the establishment of the Kingdom of God to rule on earth. [John], ‘Do: V)..
28
The Seals of the Prophetic Scroll
The Book of Revelation Unveiled
The Seals of the
R
Prophetic Scroll
evelation.
The first major trend prophesied in Revelation is the rise of false teachers claiming to represent Jesus Christ, but twisting His teachings for their own ends. “Take heed that no one deceives you,” He warns.
iStockphoto
Why God’s judgment is needed
The first five seals correspond to adversities that are to afflict vast portions of humanity, including some of God’s servants, between the first and second appearances of Christ. These hardships, having already
29
30
The Seals of the Prophetic Scroll
The Book of Revelation Unveiled
the apostle John saw in vision as each seal was opened.
first seal: false religion). When Jesus founded the Church, the Roman Empire was enjoying a brief period of peace. But this lasted only a few decades, then Rome was again at war. This pattern was to continue until the time of the end when it
The book of Revelation describes massive armies engaged in military actions that will take hundreds of millions of lives. Corbis Digital Stockity: ” ”
31
“When the Lamb opened the third seal …
32
The Seals of the Prophetic Scroll
The Book of Revelation Unveiled
6:5-6, NIV).—
Woodcut by Gustave Doré
“So I looked, and behold, a pale horse. And the name of him who sat on it was Death …” The apostle John saw, in a chilling vision, four horsemen symbolizing major trends leading up to Jesus Christ’s return.
33
After the four horsemen, Jesus opens yet more seals. John writes: ). (Revelation 13:15). The primary targets of this carnage will be those
34
The Seals of the Prophetic Scroll
The Book of Revelation Unveiled
“who keep the commandments of God and have the testimony of Jesus Christ” (Revelation 12:17). Additional prophecies explain that this time of great tribulation and persecution will also afflict the modern physical descendants of the 12 tribes of ancient Israel (see “The ‘Time of Jacob’s Trouble,’” page 56). well as the converted servants of
The sixth seal: signs in the skies
Satan lashes out
The end-time persecution and martyrdom of the saints (also directed at the physical descendants of ancient Israel) begins before the heavenly
Terrifying heavenly signs will precede Jesus Christ’s return, including the moon turning blood-red. Yet in spite of such frightening warnings, few will repent and turn to God.
Shaun Venish/Corbis Digital Stock notice, in the concluding description of the sixth seal, what is to follow the heavenly signs: ; compare Zephaniah 1:14-17). Note the order of these three separate events: First comes the tribulation, as described in the fifth seal. Next the heavenly signs, described in the sixth seal, occur. After the heavenly signs is the Day of the Lord, the day of God’s wrath. The heavenly signs occur after the time of tribulation has begun but before the Day of the Lord begins. The prophet Joel confirms this: “And I will show wonders in the heavens and in the earth: blood and fire and pillars of smoke. The sun shall be turned into darkness, and the moon into blood, before the coming of the great and awesome day of the Lord” (Joel 2:30-31). Why is this so significant?
35
Christ before the heavenly signs announce the Day of the Lord. This means that Satan’s wrath—the time when great tribulation will fall on God’s people—will have been underway for some time before the beginning of God’s wrath. Even after the time of God’s wrath—the Day of the Lord—Satan’s destructive war on God’s people apparently will not cease until he is bound at Jesus’ return (Revelation 20:1-2). Notice that the woman of chapter 12 will be “nourished for a time and times and half a time [a year, years and half a year], from the presence of the serpent” (verse 14). Even though God will nourish, strengthen and protect some of His people during that terrible time, many others, as we have already seen, will be killed. Revelation 11:2 tells us that Jerusalem is to be trampled underfoot by gentiles for 42 months (Revelation 11:2). God also promises to raise up two prophets to be His witnesses for 1,260 days (verse 3). That each of these periods equals 31⁄2 years is significant. These references indicate that a total of 31⁄2 years elapses from the beginning
36
The Book of Revelation Unveiled 31⁄2 years of Satan’s wrath. In other words, God’s punishments on the Day of the Lord would overlap Satan’s vengeance on God’s people for a period of one year—the final year of the last 31⁄2
The Seals of the Prophetic Scroll
37
multitude is made up of people from the many nationalities and ethnic groups on earth—from their tribes, clans and languages. What makes them special is that they all have “come out of the great tribulation, and washed their robes and made them white in the
38
The Book of Revelation Unveiled
blood of the Lamb” (verses 13-14). They are converted servants of God, having suffered from and—as seems to be implied—been converted during the first 21⁄2: ). Unmistakably clear in Revelation 7 is that a great harvest of true and faithful Christians will occur during the first years of the greattrib.
The Day of the Lord Finally Arrives
39
The Day of the Lord
T
Finally Arrives.
The seven trumpets provide us with a summary of what will happen during the time known as the Day of the Lord. Revelation explains and describes the type of punishment each trumpet blast represents..
40
The Day of the Lord Finally Arrives
The Book of Revelation Unveiled lifesustaining environment. Notice exactly what is affected by the first four trumpet plagues. First “a third of the trees” and “all green grass” are burned up. Next “a third of the sea”
PhotoDisc
A series of plagues will strike humanity’s life-support system— earth’s environment. Much of the planet’s vegetation will be destroyed and its water poisoned.: ‘As I live,’ says the Lord God, ‘I
41
42
The Book of Revelation Unveiled
The Day of the Lord Finally Arrives
43, ‘Seal up the things which the seven thunders uttered, and do not write them’” (verses 3-4). Notice that God revealed more prophecy to John than He allowed him to record (Revelation 10:1-4).
44
The Day of the Lord Finally Arrives
The Book of Revelation Unveiled: “‘Go,
Opposing the Beast and False Prophet will be God’s two witnesses. Jerusalem will find itself in the vortex of a great spiritual battle as prophesied events reach their climax.
iStockphoto 31⁄2 years, the same length of time Jerusalem will be occupied by the gentiles (Revelation 11:2-3). Thus it will have commenced just before the Great Tribulation.
45
of the events described in the book of Revelation: “So when you see standing in the holy place ‘the
46
The Book of Revelation Unveiled!’” (Revelation 11:14
The Day of the Lord Finally Arrives
47ated.
48
The Book of Revelation Unveiled
Satan’s War Against
the People of God
R
evelation
Satan’s War Against the People of God
49).,
These chapters explain the devil’s motivation and introduce the worldly powers he employs in his end-time battle against Christ and His servants., [31 ⁄2 years], from the presence of the serpent” (verse 14). God will intervene to help the woman survive
50
The Book of Revelation Unveiled
during this time of unbelievable affliction (verses 15), (verses 31-33). Its head represented Nebuchadnezzar’s NeoBabylonian Empire (verses 37-38), which conquered and destroyed Jerusalem in 586 B.C. The dominant powers after Babylon, represented by other parts of the image, were the Medo-Persian Empire, the
Satan’s War Against the People of God
51
The Mark and Number of the Beast
T.
52
Satan’s War Against the People of God
The Book of Revelation Unveiled
Greco-Macedonian Empire established by Alexander the Great and the Roman Empire (verses 39-40). …’” (verse 42-44, NIV). In other words the 10 toes of this image will exist at the time of the end and will be smashed by the returning Jesus Christ (verses 34, 44-45).).
feet of a bear [the Persian Empire], and his mouth like the mouth of a lion [ancient Babylon]. The dragon gave him his power, his throne, and great authority” (verse 2).). Illustrations, from left: Pablo Loayza, Winston Taylor
In Daniel 2 the heritage of this powerful end-time kingdom or empire is depicted as a statue of a human figure composed of four metals. The “beast” these horns collectively form will be a short-lived, endtime
53
This end-time alliance of nations is introduced as ”a beast rising out of the sea, having seven heads and ten horns, and on his horns ten crowns, and on his heads a blasphemous name.”
54
55
The Two Women of Revelation
Treligious
Winston Taylor
Satan’s War Against the People of God
The Book of Revelation Unveiled).
56
Satan’s War Against the People of God
The Book of Revelation Unveiled
names have not been written in the Book of Life of the Lamb slain from the foundation of the world” (Revelation 13:8).” (verses 13-15). John later describes the powerful religious leader as “the false). Who is the second beast? He is a tool of Satan who uses his position and authority to influence humankind to worship the first beast.
57
The ‘Time of Jacob’s Trouble’
Photos.com
S
hortly after the return of Christ, all of and it is the time of Jacob’s trouble, but he the descendants of ancient Israel— shall be saved out of it” (Jeremiah 30:7). including the descendants of the soGod revealed to Daniel that such a time of called lost 10 tribes—will again gather and trouble would occur at the time of the end: resettle in Palestine. Jerusalem will once “At that time Michael shall stand up, the more be the capital of the restored 12 tribes great prince who stands watch over the sons of Israel, as well as the capital of the world. of your people; and there shall be a time of This reunion of all 12 tribes is described trouble, such as never was since there was a in some detail in Ezekiel 37:15-28. God also nation, even to that time” (Daniel 12:1).. As the time of Jesus Christ’s return He calls this end-time catastrophe—especially on the descen- draws near, Satan will direct his wrath dants of ancient Israel’s northern not only toward faithful Christians, but kingdom, now known only as toward the physical descendants of all the lost 10 tribes—the time of Jacob’s trouble: “Alas! For that Israel. The Bible refers to this as the day is great, so that none is like it; “time of Jacob’s trouble.” request your free copy of The United States and Britain in Bible Prophecy.) Notice the reassurances God gives to all of the beleaguered people of Israel in the last days: “‘So then, the days are coming,’ declares the Lord, ‘when, NIV). ).
58
The Book of Revelation Unveiled
prophet who had performed the miraculous signs on his [the Beast’s] behalf” (Revelation 19:20, NIV). The False Prophet is evidently the satanically led leader of a false religious system represented by the immoral woman riding the Beast in Revelation 17 (see “The Two Women of Revelation” beginning on page 55). (verse 12). He will even influence and seduce the merchants of international commerce to such an extent “that no one may buy or sell except one who has the mark or the name of the beast, or the number of his name” (verse 17). (For additional information see “The Mark and Number of the Beast,” page 51.)
The Destruction of Satan’s Kingdom
59)., NIV; compare 14:3)., NIV). During this time the impending fall and destruction of that great city Babylon the Great is announced by another angel (verse 8).” (verse 9-10). command- ments’” (verses 12-13,” ).
60
The Destruction of Satan’s Kingdom
The Book of Revelation Unveiled
The Destruction
beast and his image” (Revelation 14:11). This includes the “worship [of] demons, and idols of gold, silver, brass, stone, and wood, which can neither see nor hear nor walk” (Revelation 9:20)..
of Satan’s Kingdom
D
The seven last plagues
“Then I heard a loud voice from the temple saying to the seven angels, ‘Go, pour out the seven bowls of God’s wrath on the earth’” (Revelation 16:1).
“The fourth angel poured out his bowl on the sun, and the sun was given power to scorch people with fire. They were seared by the intense heat and they cursed the name of God …”
iStockphoto, NRSV). Before we examine the nature of this final phase of God’s punishments on human beings who have refused to repent (Revelation 16:9, 11), pres- ence of the Lamb” (Revelation 14:9-10). These words indicate that all of the seven last plagues will occur within a brief time. As Christ descends through the clouds, “every eye will see Him” (Revelation 1:7; compare Acts 1:9-11).). One purpose for His return is to “destroy those who destroy the earth” (verse 18). pat- terns and practices of the whole world. After He completes that destruction, “all nations shall come and worship before” God (Revelation 15:4). That will be an incredible reversal. Why? Because at the beginning of the plagues those nations are engrossed in the “worship [of] the
61control, brutal, … lovers of pleasure rather than lovers of God, having a form of godliness but denying its power” (2 Timothy 3:1-5). He describes them as obsessed with knowledge but woefully lacking in understanding—“always learning and never able to come to the knowledge of the truth” (verse 7). This is a thoroughly deceived society. God shows that He will be able to reach their blinded minds only
62
The Destruction of Satan’s Kingdom
The Book of Revelation Unveiled” (verse 3,
they cursed the name of God, who had control over these plagues, but they refused to repent and glorify him” (Revelation 16:8-9, NIV). Satan’s kingdom is founded on a “form of godliness” (2 Timothy 3:5) that has consistently substituted many of the traditions that began in ancient Babylon for the commandments of God. His kingdom has been at “war with [those] … who keep the commandments of God and have the testimony of Jesus Christ” (Revelation 12:17). Therefore God, who has control of everything everywhere, will turn against that kingdom the very sun they unwittingly still worship. The modern custom of substituting Sunday download or request your free copy of Holidays or Holy Days: Does It Matter Which Days We Keep?) fifth angel poured out his bowl on the throne of the beast, and his kingdom was plunged into darkness. Men gnawed their tongues in agony and cursed the God of heaven . . .”
The nations gather to fight Christ
iStockphoto’” (verses 4-7, NIV). Remember, all of this is happening very quickly “in the presence of the holy angels and in the presence of the Lamb” (Revelation 14:10). “The fourth angel poured out his bowl on the sun, and the sun was given power to scorch people with fire. They were seared by the intense heat and
63
64
The Destruction of Satan’s Kingdom
The Book of Revelation Unveiled
wine of her fornication” (Revelation 17:1-2). “She has become a home for demons and a haunt for every evil spirit …” (Revelation 18:2). More than any other Western city, Rome, heir of ancient Babylon’s mystery cults, has a history of being “drunk with the blood of the saints and with the blood of the martyrs of Jesus” (Revelation 17:6). Influenced by a religious system that has led the way in opposing obedience to the commandments of God, Rome has allowed and often led the charge in persecuting “those who keep the commandments of God and the faith of Jesus” (Revelation 14:12)., NIV). As she has so often in the past, she will once more enjoy the fame and status of being “that great city
in Hebrew is called Armageddon” (verses 12-16,” (verse 19). This is accomplished partly by “a great earthquake, such a mighty and great earthquake as had not occurred since men were on the earth” (verse 18). Islands and mountains disappear as the earth shakes and shudders (verse 20). Notice what accompanies these vast earthly convulsions: “From the sky huge hailstones of about a hundred pounds each” batter the earth and its inhabitants (verse 21, NIV). Satan’s modern “Babylonian” kingdom is being systematically demolished.
The armies are now gathered at Armageddon, or “hill of Megiddo,” about 55 miles north of Jerusalem. The final battle, to take place at Jerusalem, is about to begin.
Corel Professional Photos
65
which reigns over the kings of the earth” (verse 18).” (verses 16-17, NIV). Chapter 18 describes the reaction of many of the world’s most prominent people to the burning of this mighty city. “When the kings of
66
The Book of Revelation Unveiled
Satan: The Great Seducer
W
hy will so many people zealously follow Satan’s deceptive ways to their death? There are two primary causes. The first arises from human nature and man’s innate hostility toward God’s ways (Romans 8:7). The second cause is Satan’s mastery in deceiving people.). Paul warned that even believers could be led to gullibly accept doctrines taught by false teachers—should they become negli-
gent). Never underestimate the skill Satan uses to deceive humanity. The book of Revelation plainly says he is “that ancient serpent called the devil, or Satan, who leads the whole world astray” (Revelation 12:9, NIV).
The Destruction of Satan’s Kingdom
67 …” (verses 9-11,” (verses 15, 17). “Your merchants were the world’s great men,” proclaims an angel. “By your magic spell all the nations were led astray” (verse 23,” (verses 4-8, NIV). All of creation is told, “Rejoice over her, O heaven, and you holy apostles and prophets, for God has avenged you on her!” (verse 20).).
68
The Book of Revelation Unveiled
Let’s not forget that Satan has gathered the armies of the nations to Jerusalem to fight Christ (verse 19)., NIV). He adds: “The Lord will be king over the whole earth. On that day there will be one Lord, and his name the only name” (verse 9,” (verses 12-14, NIV). An angel then summons scavenger birds to feast on the flesh of the armies (Revelation 19:17-18, 21).). (See “Satan: The Great Seducer,” page 66.) control over “this present evil age” (Galatians 1:4; 1 John 5:19) Destruction of Satan’s Kingdom
69
the hand of Satan and his Babylonian system. John watched as martyrs “came to life and reigned with Christ a thousand years” (Revelation 20:4, NIV; see also Revelation 22:12).; 3:21; NIV). “Blessed and holy are those who have part in the first resurrection,” writes John. “The second death has no power over them, but they will be priests of God and of Christ and will reign with him for a thousand years” (Revelation 20:6,, NIV). This is the beginning of the wonderful era often referred to by students of the Bible as the Millennium. For details about what will occur during Christ’s millennial reign, download or request; Revelation 2:10). One way He does this is to allow them to choose between good and evil (Deuteronomy 30:19).
70
The Destruction of Satan’s Kingdom
The Book of Revelation Unveiled
Punishment of the incorrigibly wicked
heaven and devoured them” (Revelation 20:7-9,” (verse 10, Modern King James Version). He will never again be allowed to deceive anyone.). Those who have turned down all opportunities to repent and be forgiven must also be resurrected for judgment at the end (Revelation 21:8). These are people who have deliberately rejected God’s way of life—even after they have been “once enlightened, and have tasted the heavenly gift, and have become partakers of the Holy Spirit” (Hebrews 6:4-6). They once were forgiven and given the Holy
A second resurrection
“‘For behold, the day is coming, burning like an oven, and all the proud, yes, all who do wickedly will be stubble. And the day which is coming shall burn them up,’ says the Lord of hosts . . .’’
iStockphoto/Shaun Venish” (verse 5). Yet a parenthetical note here states that “the rest of the dead did not live again until the thousand years were finished” (same verse). The dead who are resurrected to appear “before the throne” of God (verse 12) after “the thousand years have expired” (verse 7)” (verses 11-12).” (verse 15).
71), “there no longer remains a sacrifice for sins, but a certain fearful expectation of judgment, and fiery indignation which will devour the adversaries” (verses 26-27)., NRSV). Therefore it appears that this final resurrection, of necessity, must include the wicked who have already been condemned to perish in the lake of fire—however few they may be, comparatively speaking.
72
The Everlasting Kingdom of God
The Book of Revelation Unveiled
73
The Everlasting
Victory over death
This brings us to the time when, as Paul said, “death is swallowed up in victory” (1 Corinthians 15:54). download or request the free booklets What Happens After Death? and Heaven and Hell: What Does the Bible Really Teach?)
J
Kingdom of God
es
iStockphoto
“Then Jesus called a little child to Him, set him in the midst of them, and said, ‘Assuredly, I say to you, unless you are converted and become as little children, you will by no means enter the kingdom of heaven.’”
74
The Everlasting Kingdom of God
The Book of Revelation Unveiled
clothed with the imperishable and our mortality has been clothed with immortality, then the saying of scripture will come true: ‘Death
PhotoDisc
Jesus Christ will return to establish that Kingdom on earth at His second coming, at last bringing the peace mankind has always longed for but never achieved.. ‘The time has come,’ he said. ‘The kingdom of God is near. Repent and believe the good news!’” (Mark 1:14-15, NIV). At His first coming Jesus trained disciples who would, after His
75 com- munity
76
The Everlasting Kingdom of God
The Book of Revelation Unveiled
forerunner of the eternal family, the family “of all those who believe” (Romans 4:11). The “light” illuminating New Jerusalem comes from God (Revelation 21:24). Nothing that “defiles, or causes an abomination or a lie” will ever be allowed to enter download or request your free copies of The Ten Commandments, What Is Your Destiny? and The Road to Eternal Life. And for a broader look at biblical prophecy, be sure to download or request The United States and Britain in Bible Prophecy, The Middle East in Bible Prophecy, Are We Living in the Time of the End? and You Can Understand Bible Prophecy.
PhotoDisc
The biblical story of man begins in the Garden of Eden with his rejection of the tree of life. It closes with God’s immortal family dwelling before His throne enjoying the fruits of the tree of life.
77
78
The Book of Revelation Unveiled
79
What Should You Do Now?
T
he, God’s written revelation to mankind. The Bible claims God as its real author— that all Scripture is inspired by Him . We’re here to help. We’ve prepared a number of request our other booklets on prophecy—The Gospel of the Kingdom, The United States and Britain in Bible Prophecy, The Middle East in Bible Prophecy, these publications are free from any of our offices listed on page 80, or you can request or download them from our Web site at.
much is happening in the Sit’soworld, and so quickly, that almost impossible to sort
• Also request or download free copies of these eye-opening booklets at www. gnmagazine.org/booklets: You Can Understand Bible Prophecy, The Book it all out. of Revelation Unveiled, The Middle East Where are today’s dramatic and danger- in Bible Prophecy, The Gospel of the ous trends taking us? What does Bible Kingdom, The United States and Britain prophecy reveal about our future? Is in Bible Prophecy and Are We Living in prophecy coming to pass before our the Time of the End? You can’t afford eyes? How can you know the answers? to be without this crucial.
80
The Book of Revelation Unveiled Fax: 0046 0142 10340 Editorial reviewers: Scott Ashley, John Bald, Wilbur Berg, Jim Franks, Bruce Gore, Roy Holladay, Paul Kieffer, Tom Kirkpatrick, Graemme Marshall, Burk McNair, Darris McNeely, John Ross Schroeder, Richard Thompson, David Treybig, Leon Walker, Donald Ward, Lyle Welty Design: Shaun Venish Cover: Winston Taylor RV/0707.
RV/0707/2.0 | https://issuu.com/tatendakangwende/docs/the-book-of-revelation-unveiled | CC-MAIN-2017-47 | refinedweb | 5,563 | 67.18 |
In addition to modeling the desired features in integration scenarios, it is important to also consider non-functional aspects like resource consumption, performance, and reliability, just to name a few.
This blog is part of a series that shall assist Integration Developers to properly address these qualities when modeling Integration Flows.
This contribution focuses on Groovy scripts for XML processing, as they are often used in the Script step.
Integration Developers often use
XmlSlurper in Groovy scripts to process XML messages. There are aspects that need to be considered regarding memory consumption. Consider the following code snippet:
def body = message.getBody(java.lang.String) def xml = new XmlSlurper().parseText(body)
In the code above the
body variable is just used to provide input to the
XmlSlurper. This is fine as long as the body is of type
java.lang.String. In all other cases conversion to
java.lang.String will be applied, which requires allocation of additional memory for the
String object. Note that when the message body is large, or when many messages are processed in parallel, the additional memory footprint might even cause a
java.lang.OutOfMemoryError. Ultimately the
OutOfMemoryError would interrupt the message processing.
The additional memory allocation can be avoided if the
XmlSlurper would accept the body as it is, or if the body could be streamed. Then, a better approach is to stream the message body to the
XmlSlurper by using
message.getBody(java.io.Reader), as shown in the following snippet:
def body = message.getBody(java.io.Reader) def xml = new XmlSlurper().parse(body)
This will do the magic of streaming the message body – the
body variable is now a
java.io.Reader that is just a reference to the message payload object, thus reducing memory consumption and contributing to reliability of your integration scenario.
The next contribution in this series is Avoid Binding Variables in Groovy Scripts.
Thanks Markus for good blog and waiting for other blogs of the series… !
Regards,
Sriprasad Shivaram Bhat
I would like to award points for the best blog title I have seen in a long while. I love the names people give to new technology, when someone outside the IT space sees a sentence like “Stream the XML Slurper in the Groovy Script” it really helps dispel the idea we are all nerds.
As I recall the XML Slurper was some sort of monster from Star Wars, and when it got killed by Luke Skywalker it;s keeper burst into tears.
Hi Markus
Thanks for this blog on performance consideration. If I have a CSV to XML conversion step, the output of it would be a byte array. Would the same consideration apply there, i.e. do we need to stream it in the subsequent Groovy script, or is it even possible?
Regards
Eng Swee | https://blogs.sap.com/2017/06/20/stream-the-xmlslurper-input-in-groovy-scripts/ | CC-MAIN-2018-47 | refinedweb | 468 | 56.05 |
"The book is divided into two parts. First comes "The Core
Language", describing basic aspects of the language such as syntax,
types, and conditional statements. This part of the book is meant
to teach the reader necessary but somewhat abstract concepts like
control-of-flow and object-oriented design. Second comes "The Outer
Layers", which describes a few ways to extend Python in common
directions, like through CGI, a Tk GUI, Microsoft COM, and Java
Python (JPython). The second section of the book takes the theory
learned in the first section and puts it into practice."
"In the Core Language section, the authors give a rock-solid,
seamless explanation of the Python language. They first cover the
built-in object types (strings, lists, etc.), then go on to
describe functions, modules, classes, and exceptions. Here you
learn that Python, like most languages, concerns itself at the most
basic level with "doing things with stuff". First the things
(types) are covered, then the stuff (functions) is brought into the
picture. But that's not the end of it. The authors go on to cover
in great detail Python's namespace rules, object-oriented
capabilities, and other, more advanced facets of the language. By
the end of the section, you understand the fundamentals of every
facet of the Python language--knowledge you'll need in the second
section."
"They say that if you ever want to know something well, you
should try teaching it to someone else. Lutz and Ascher are
competent teachers (they also teach Python courses), so the first
section reads smoothly, progressing from one lesson to the next,
with each lesson building on the one before. The book looks slim,
but don't confuse that with being lightweight."
Complete Story
Related Stories:
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. | http://www.linuxtoday.com/infrastructure/1999091400104NWBR | CC-MAIN-2017-22 | refinedweb | 308 | 63.19 |
x86_64-pc-linux-gnu-gcc -c -fwrapv -fno-strict-aliasing -pipe -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -pipe -march=native -Wall -DVERSION=\"1.7.2\" -DXS_VERSION=\"1.7.2\" -fPIC "-I/usr/lib64/perl5/5.24.2/x86_64-linux/CORE" Quota.c
In file included from Quota.xs:11:0:
myconfig.h:18:21: fatal error: rpc/rpc.h: No such file or directory
#include <rpc/rpc.h>
^
compilation terminated.
-------------------------------------------------------------------
This is an unstable amd64 chroot image at a tinderbox (==build bot)
name: 13.0-desktop_20170905-225247
-------------------------------------------------------------------
gcc-config -l:
[1] x86_64-pc-linux-gnu-6.4.0 *
Available Python interpreters, in order of preference:
[1] python3.4
[2] python3.6 (fallback)
[3] python2.7 (fallback)
[4] jython2.7 (fallback)
Available Ruby profiles:
[1] ruby22 (with Rubygems) *
java-config:
The following VMs are available for generation-2:
*) IcedTea JDK 3.5.1 [icedtea-bin-8]
Available Java Virtual Machines:
[1] icedtea-bin-8 system-vm
emerge -qpv dev-perl/Quota
[ebuild N ] dev-perl/Quota-1.7.2
Created attachment 493650 [details]
emerge-info.txt
Created attachment 493652 [details]
dev-perl:Quota-1.7.2:20170910-080215.log
Created attachment 493654 [details]
emerge-history.txt
Created attachment 493656 [details]
environment
Created attachment 493658 [details]
etc.portage.tbz2
Created attachment 493660 [details]
temp.tbz2
could not assign ticket to c.affolter, because there was no bugzilla account. Please fix this.
(In reply to Jonas Stein from comment #7)
> could not assign ticket to c.affolter, because there was no bugzilla
> account. Please fix this.
The account should match again, as soon as the following PR will be merged:
What is the preferred way to proceed here? Shall the affected ebuild simply depend on "net-libs/libtirpc" or also on "sys-libs/glibc[rpc]" (as long as this is available)?
The bugtracker is not suited for discussions. Please join IRC #gentoo-proxy-maint
for discussion or ask in a forum/mailinglist.
The bug has been closed via the following commit(s):
commit 1259579a303b3397efa95e9868c02d17d601d524
Author: Andreas K. Hüttel <dilfridge@gentoo.org>
AuthorDate: 2017-11-01 21:18:30 +0000
Commit: Andreas K. Hüttel <dilfridge@gentoo.org>
CommitDate: 2017-11-01 21:18:30 +0000
dev-perl/Quota: Fix build with glibc-2.26, bug 630568
Not sure if this is complete though... it compiles, but does it run?
Closes:
Package-Manager: Portage-2.3.13, Repoman-2.3.4
dev-perl/Quota/Quota-1.7.2.ebuild | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
Changes lead to the following bug:
Also contains a IMO proper fix!
Thanks and have a nice day! | https://bugs.gentoo.org/show_bug.cgi?id=630568 | CC-MAIN-2019-43 | refinedweb | 428 | 54.59 |
Getting Started with Programming the Intel Edison
After my brief introduction to the Intel Edison it’s time to get more familar with the platform’s software aspects.
I’m going to show how you can start to develop and deploy your ideas, how you can read/write from sensors/actuators and how you can communicate with the Cloud. Giving you what you need to start tinkering and hacking IoT devices.
Installing and configuring the SDK
The first thing is to choose your preferred language for the project. To accommodate the needs of more developers, Intel made it easy to use many different programming languages and have provided several SDKs.
You can read about all the options in this article.
Intel Edison Board Installer
The latest version of the Intel Edison SDK is available through a unified installer that you can get here.
Make sure to have a recent version of Java JDK/JRE and continue the installation process.
This will install the appropriate driver for the board, updates the Yocto Linux image on the Edison and lets you choose your preferred IDE. The installer is available for Windows and Mac OS, Linux users need to install the preferred IDE separately.
Getting ready to develop
Assemble the development board, setup a serial terminal and connect the Edison to WiFi.
Make a note about the board IP address, Edison should expose itself via Zeroconf, but we all know that tech doesn’t always work.
Now we can configure our IDE.
Eclipse
If you are going to develop in C++, open Eclipse and select the IoT DevKit -> Create Target connection item.
You should see your board listed, else just enter a name and the ip address noted before.
Intel XDK
Start XDK and look at the bottom panel of the screen.
Click the IoT Device drop down menu and select re scan for device or enter the board ip address as shown below.
You should see a success message in the console.
Shell access
SSH is enabled on the board, so you can skip all the IDE fuss and do everything from the shell if you are more comfortable there.
Hello Edison
It’s time to say hello.
C++
In Eclipse click, IoT DevKit – >Create C++ project and select a blank template.
And choose the already defined target.
Add the following code:
#include <iostream> using namespace std; int main() { std::cout << "Hello, Edison!\n"; return 0; }
Run the code by clicking the green play button. Eclipse will build the project, deploy to the board and run it. On this first run, Eclipse will ask for the board password.
You an follow progress and the application output in the console at the bottom of the screen.
Javascript/Node JS
Open XDK, click on the Projects tab, and start a new project choosing the blank IoT template.
Add the following code:
console.log("Hello, Edison!")
Use the run button on the bottom toolbar. XDK will ask if you want to upload the updated project, click yes and check the output in the bottom console.
Python
In your favorite text editor, write the following code:
print "Hello, Edison!"
save as hello.py and run it with:
python hello.py
Summary
One of the great aspects to using the Edison is that there’s nothing new to learn. You can code in your current preferred language, use libraries of your choice, and do whatever you normally do on a Linux system.
The main difference is that you can run your project on a tiny device, ready to make wearable or internet
things.
But we are interested in making something more interesting, taking advantage of the platform’s I/O ability to make
things smart.
Dealing with Sensors and Actuators
One of my favorite aspects of Edison, is that even a software guy like me can deal with the hardware. Intel provides two useful libraries for this purpose,
lib rmaa and
lib upm.
The first provide an abstraction to the board, so that ports and other hardware features can be accessed through abstract classes without needing to know exact model numbers and data sheet details.
It’s time to make something exciting… blink a led! (OK, not that exciting).
Thanks to
lib mraa it’s simple:
C++
include <iostream> #include <unistd.h> #include <signal.h> #include "mraa.hpp" static int iopin = 13; int running = 0; void sig_handler(int signo) { if (signo == SIGINT) { printf("closing IO%d nicely\n", iopin); running = -1; } } int main(int argc, char** argv) { mraa::Gpio* gpio = new mraa::Gpio(iopin); // Select the pin where the led is connected if (gpio == NULL) { // Check for errors return MRAA_ERROR_UNSPECIFIED; } mraa_result_t response = gpio->dir(mraa::DIR_OUT); // Set "direction" of our operation, we use it as output here if (response != MRAA_SUCCESS) { mraa::printError(response); return 1; } while (running == 0) { // infinite loop just to test response = gpio->write(1); // set the output pin to "high" value, this will cause the led to turn on sleep(1); response = gpio->write(0); // set the output pin to "low" value, this will cause the led to turn on sleep(1); } delete gpio; // cleanups return 0; }
Javascript
var m = require('mraa'); var gpio = new m.Gpio(13); // Select the pin where the led is connected gpio.dir(m.DIR_OUT); // Set "direction" of our operation, we use it as output here var ledState = true; // Led state function blinkblink() // we define a function to call periodically { gpio.write(ledState?1:0); // if ledState is true then write a '1' (high, led on) otherwise write a '0' (low, led off) ledState = !ledState; // invert the ledState setInterval(blinkblink,1000); // call this function every second } blinkblink(); // call our blink function
Python
import mraa import time gpio = mraa.Gpio(13) # Select the pin where the led is connected gpio.dir(mraa.DIR_OUT) # Set "direction" of our operation, we use it as output here while True: gpio.write(1) # set the output pin to "high" value, this will cause the led to turn on time.sleep(0.2) gpio.write(0) # set the output pin to "low" value, this will cause the led to turn off time.sleep(0.2)
Simple, isn’t it?
Now let’s see how we read values from a sensor. In this example I’ll use a temperature sensor attached to the pin Aio 0.
Usually, to retrieve the temperature value from a sensor, you read raw values and then check the sensor data sheet, understand the meaning of the raw value and process the value before using it.
Here
Lib UPM comes to the rescue and we can use the class provided from the library to abstract all the low level details. I’ll use javascript, but as you have seen before, the same can be acheived in any language.
var groveSensor = require('jsupm_grove'); var tempSensor = null; var currentTemperature = null; var celsius = 0; function init() { setup() readRoomTemperature(); } function setup() { // Create the temperature sensor object using AIO pin 0 tempSensor = new groveSensor.GroveTemp(0); } function readRoomTemperature() { celsius = tempSensor.value(); console.log("Temperature: "+ celsius + " degrees Celsius"); } init();
Now we can combine the above examples and turn on a led only when a predefined temperature is reached.
var m = require('mraa'); var MAX_TEMP = 30; var groveSensor = require('jsupm_grove'); var tempSensor = null; var currentTemperature = null; var gpio = null; function init() { setup() setInterval(checkTemperature, 1000); } function setup() { // Create the temperature sensor object using AIO pin 0 tempSensor = new groveSensor.GroveTemp(0); gpio = new m.Gpio(13); // Select the pin where the led is connected gpio.dir(m.DIR_OUT); // Set "direction" of our operation, we use it as output here } function readRoomTemperature() { var celsius = tempSensor.value(); console.log("Temperature: "+ celsius + " degrees Celsius"); return celsius; } function checkTemperature() { var temp = readRoomTemperature(); if(temp>MAX_TEMP) gpio.write(1); else gpio.write(0); } init();
We can show a message on the LCD display with just few more lines of code, using classes provided by
Lib UPM.
// LibUpm requires var groveSensor = require('jsupm_grove'); var LCD = require('jsupm_i2clcd'); var myLcd; var currentTemperature = null; function init() { setup() setInterval(checkTemperature, 1000); } function setup() { // Create the temperature sensor object using AIO pin 0 tempSensor = new groveSensor.GroveTemp(0); myLcd = new LCD.Jhd1313m1 (6, 0x3E, 0x62); // setting up the grove lcd connected with i2c } function readRoomTemperature() { var celsius = tempSensor.value(); console.log("Temperature: "+ celsius + " degrees Celsius"); return celsius; } function checkTemperature() { var temp = readRoomTemperature(); var lcdMessage = "Room temp:" + temp + " C"; myLcd.setCursor(1,1); myLcd.write(lcdMessage); } init();
Browse the lib UPM docs to get an idea of supported sensors and actuators and you’ll understand how many things you can use in the same, simple, way.
But IoT is about the Internet, so let’s get connected.
One of the advantages of the full Linux stack on Edison is that you can use any existing standard library to access the web and all the needed tools to use REST API, xml and json etc are available in a project.
Web services
In JavaScript we can use
lib http to make API calls. I’m going to use this to query the Open Weather Map api and show the current weather on the LCD.
var myLcd; var LCD = require('jsupm_i2clcd'); var http = require('http'); // openweathermap apu uri var owmUrl = ""; // prepare the query var owmPath = "/data/2.5/weather?unit=metric&q=" // My lovely city name var yourCity = "Brescia,it"; function init() { setup() setInterval(checkWeather, 60000); } function setup() { myLcd = new LCD.Jhd1313m1 (6, 0x3E, 0x62); // setting up the grove lcd connected with i2c, the address is in the doc } function checkWeather() { // url building var url = owmUrl + owmPath + yourCity try { // api docs : // build the http request http.get(url, function(res) { var body = ''; // read the response of the query res.on('data', function(chunk) { body += chunk; }); res.on('end', function() { // now parse the json feed var weather = JSON.parse(body) // var id = weather.weather[0].id; // get the current weather code // show the message on the display lcdMessage = weather.weather[0].description; myLcd.setCursor(0,0); myLcd.write(lcdMessage); }); }).on('error', function(e) { // check for errors and eventually show a message lcdMessage = "Weather: ERROR"; myLcd.setCursor(0,0); myLcd.write(lcdMessage); }); } catch(e) { lcdMessage = "Weather: ERROR"; myLcd.setCursor(0,0); myLcd.write(lcdMessage); } }; init();
Conclusion
These brief examples could serve as a foundation to more complex applications integrating sensors, actuators and the internet.
In the next article we are going to build a complete project to show the possibility enabled by this platform and not a lot of code, giving anyone the ability to join the hype of IoT and, have fun in the process. | https://www.sitepoint.com/getting-started-with-programming-the-intel-edison/ | CC-MAIN-2018-09 | refinedweb | 1,752 | 62.58 |
I am developing a custom data augmentation method that requires access to the data at the batch level. Following advice elsewhere in this blog, I wrote a function as described below:
def aug_tfms(b:Collection[Tensor]):
xb, by = b
for i in range (xb.shape[0]):
(Augmentation code)
return [xb, yb]
In the DataBunch definition, I added dl_tfms = aug_tfms, like this:
.databunch(bs=bs, collate_fn=bb_pad_collate, dl_tfms=aug_tfms)
.normalize(imagenet_stats)
The function aug_tfms does its job creating the augmentation. However, it then applies the augmentation to both the train_dl and valid_dl dataloaders. I want it to apply only to the train_dl. How can accomplish this?
Your help is appreciated. | https://forums.fast.ai/t/help-using-dl-tfms-in-databunch/61567 | CC-MAIN-2020-05 | refinedweb | 109 | 51.85 |
numbers
Formats a number using fixed-point notation
Defaults to:
isToFixedBroken ? function(value, precision) { precision = precision || 0; var pow = math.pow(10, precision); return (math.round(value * pow) / pow).toFixed(precision); } : function(value, precision) { return value.toFixed(precision); }
value : Number
The number to format
precision : Number
The number of digits to show after the decimal point, []);
length : Number
indices : Number[]
options : Object (optional)
An object with different option flags.
count : Boolean (optional)
The second number in
indices is the
count not and an index.
Defaults to:
false
inclusive : Boolean (optional)
The second number in
indices is
"inclusive" meaning that the item should be considered in the range. Normally,
the second number is considered the first item outside the range or as an
"exclusive" bound.
Defaults to:
false.
number : Number
The number to check
min : Number
The minimum number in the range
max : Number
The maximum number in the range
The constrained value if outside the range, otherwise the current value
Corrects floating point numbers that overflow to a non-precise
value because of their floating nature, for example
0.1 + 0.2
The : Number
number
The correctly rounded number
Returns a random integer between the specified range (inclusive)
from : Number
Lowest value to return.
to : Number
Highest value to return.
A random integer within the specified range.
Returns the sign of the given number. See also MDN for Math.sign documentation for the standard method this method emulates.
x : Number
The number.
The sign of the number
x, indicating whether the number is
positive (1), negative (-1) or zero (0).
Snaps the passed number between stopping points based upon a passed increment value.
The difference between this and snapInRange is that snapInRange uses
The minimum value to which the returned value must be constrained. Overrides the increment.
maxValue : Number
The maximum value to which the returned value must be constrained. Overrides the increment.
The value of the nearest snap target.
Snaps the passed number between stopping points based upon a passed increment value.
The difference between this and snap is that snap does not use (optional)
The minimum value to which the returned value must be constrained.
Defaults to: 0
maxValue : Number (optional)
The maximum value to which the returned value must be constrained.
Defaults to: Infinity
The value of the nearest snap target. | https://docs.sencha.com/extjs/5.1.0/api/Ext.Number.html | CC-MAIN-2022-33 | refinedweb | 384 | 50.12 |
Fast forward to today's w3c announcement W3C Relaunches HTML Activity [w3.org]
HTML Working Group Charter [w3.org]
The W3C brings in an absolutely key element to the equation: Microsoft. WHAT-WG is basically a consortium of Mozilla, Opera and Apple (and Google), but Microsoft's presence, in the person of its chairman, will mean that the specifications will have an almost guaranteed chance of being implemented by all the major browser companies.
The organisation, which oversees the creation of Web standards such as HTML, CSS and XML, said that the establishment of the HTML Working Group recognises the importance of an open forum for the development of what is the predominant Web content technology. HTML (hypertext markup language) is a core element to describe how Web page content is presented and organised.
W3C sets new version of HTML in motion [pcpro.co.uk]
What a load of nonsense.
By developing HTML and xHTML in parallel if/when a business decision is made to change existing sites from one to the other it might even be practicable.
What good are standards when they don't even work in the first place except in a generic sense?
Nobody wants to see HTML stand still. However, just supplying validation tools to check compliance on web pages is only part of the problem. IMO the W3C should take on a more proactive role in actually validating the compliance and certification of the applications that implement those W3C standards.
By all means bring on the next wave of HTML but give us ways to verify that applications have met compliance so we know where the flaws are in advance and don't spin our wheels fighting cross-platform issues.
[edited by: incrediBILL at 5:47 pm (utc) on Mar. 8, 2007]
It's definitely good news despite the mess. And that mess won't be going away any time soon. We'll probably have the same complaints in ten years.
But in the meantime I'd love to see a combo dropdown box built into html. And built-in client side validation where you can specify it should be a telephone number, or an email address, or a valid domain name, or a valid url or a valid country/state or decimal or integer...etc.. (While you still need server side validation, it would be nice if the browser knew to force user to type data correctly w/o kludging in a bunch of javascript).
I wonder if others feel the same as me in wanting those tools or if there are other major "pet peeves" of "missing" html...
Yes: the ability to repeat alphanumerical strings across different pages in the same way that you can repeat image files across different pages.
Isn't that what XHTML transitional 'validation' has all been about? - nothing to do with serving the "correct" mime-type or whether or not pages validated XHTML1.1 strict or required XML - was it not simply a way to be "next step" ready while still getting on with the day job - e.g. lowercase tags and attributes, quoted attributes, closed elements etc.. all of these are important in other languages used in application building - why not HTML
Will HTML5 will be no more (apart from some new elements perhaps) than XHTML1 with the namespace and content-type doing the rest?
If they adopt the WhatWG working draft [whatwg.org] then it seems so
interesting times are ahead no doubt and Good to see ALL the majors involved :)
[edited by: SuzyUK at 11:08 am (utc) on Mar. 9, 2007]
That validates the PAGE, not the application displaying the page. If application developers can't validate that the applications (browser) display the pages properly to the w3c spec, then what chance does the web designer have of making cross-platform pages work properly?
QA Interest Group Charter (QA IG) [w3.org]
Study of a W3C Certification Activity. [w3.org] Posted and died. I suspect behind the scenes squeals from corporate sponsors as the responsible parties.
'Mr. Ballmer in the Ballroom with a Chair'?
I'm thinking that he was suggesting a way to include markup and/or text contained in a file across multiple pages without having to rely on scripting or SSI.
With images, on any page on your website you can have:
<img src="/images/mypic.jpg" style="width:100px; height:100px; border:none;" alt="My mugshot" />
Likewise with text in HTML 5, it would be useful and natural to have:
<text src="/snippets/copyright.txt" style="font-family:verdana, sans-serif; font-size:0.7em; text-align:right; margin-right:0.2em;" />
In fact, I think we should have had something like this ages ago.
I like it. And although you can do it with SSI, php, etc, it would be nice to have it in html. Not everyone using HTML understands SSI or has access to server includes, plus if you have an app that uses such a feature you may not know which platform the app is installed on. And php is more complicated than HTML. | http://www.webmasterworld.com/html/3274559.htm | CC-MAIN-2015-18 | refinedweb | 849 | 62.78 |
Generic Web Service to Event Hub scenario
Generally speaking, you will have devices which you want to connect to Azure, and the examples in the Getting Started section of Connect The Dots and in the various subdirectories show you one way to do this. The best way would be to use Microsoft's IoT Suite.
However, there are times when you do not have access to the devices or data producers, for example when you want to use public data feeds (such as the Department of Transportation's traffic information feed) as your data source. In this case you do not have the ability to put any code on the device or remote gateway to push the data to an Azure Event Hub or IoT Hub; rather, you need to set up something to pull the data from those sources and then push it to Azure. The simplest way is to run an application in Azure to do this. The code in this project is an example of how to do this. It is not a supported solution, or even a recommended one, but simply an example.
For more information about this sample, see the Pulling public data into Azure Event Hubs topic in this repository.
Prerequisites
- An Azure subscription. In order to configure and deploy the application you will need to have set up a Service Bus namespace and an Event Hub. An easy way to do this is to use AzurePrep, in the ConnectTheDots open source project, but that is not a prerequisite - set up the Event Hub manually if you like. Just make sure to configure at least one Shared Access Policy for the event hub.
- A version of Visual Studio installed on your desktop.
- Access to a source of data - either a public source not requiring credentials, or a private source and access credentials
Setup Tasks
Setting up the application once you have an Event Hub and its Connection String involves the following tasks, which will be described in greater detail below.
- Clone or copy the project to your machine
- Open the project solution in Visual Studio
- Edit App.config in the Worker Host folder to provide the relevant source and target configuration data
- Build the project
- Publish the application to your Azure subscription
- Verify that data is coming in to your Event Hub from the public data source.
- (Optional) Changing configuration of your application once it is running in the cloud
Editing app.config
There are two sections of App.config you will need to change, appSettings and XMLApiListConfig.
In the appSettings section, at a minimum enter
The connection string to your Service Bus. To find it, in the original Azure management portal, select Service Bus from the left nav menu, highlight your Namespace Name in the right pane, and click on Connection Information at the bottom of the page. In the Access connection information window that opens, highlight and copy the Connection String shown. In the new Azure management portal, select Browse from the left nav menu, pick Service bus namespaces from the list, and then your Service Bus Namespace from the right pane to get to the same management portal, starting with "Endpoint" and ending with an "=". XML to JSON conversion flag. If your data source sends you JSON formatted data, you are fine and do not need to change anything. On the other hand, if it sends you XML, and you leave sendJSON as false, the output will be XML. If you change sendJSON to true, it will send JSON regardless of input format. Find the line that says
<add key="SendJson" value="false" />and change 'false' to 'true'.
The credentials for the web service from which you will pull the data. If the site is a public site and does not need access credentials, you do not need to do this. If you do need the application to send credentials to the web service, find the section
<appSettings> <add key="UserName" value="[Api user name]" /> <add key="Password" value="[Api password]" /> </appSettings>Replace the [Api user name] with your user name, and the [Api password] with your password. It would look something like this:
<appSettings> <add key="UserName" value="Myname" /> <add key="Password" value="Mypassword" /> </appSettings>
In the XMLApiListConfig section, at a minimum enter
The URL for the web service from which you will pull the data. Find the section
<XMLApiListConfig> <add APIAddress=""/> </XMLApiListConfig>
Replace the APIAddress section with one or more URLs for web services you will access (the application will cycle through this list). It would look something like this if you have three URLs to access from the same root location:
<XMLApiListConfig> <add APIAddress=""/> <add APIAddress=""/> <add APIAddress=""/> </XMLApiListConfig>
Publishing the application
- In Visual Studio, right-click on 'DeployWorker' in Solution 'GenericWebToEH'/Azure, web sites you listed for the data they publish, and pushing it to your Event Hub. From there, you can access with Stream Analytics or any other application as you would normally.
Changing configuration once your application is running in the cloud
If you want to change the event hub you use, or the frequency the app polls the external web service, you can do that in the Azure Management Portal. To find the location, in the original Azure management portal, select Cloud Services from the left nav menu, click on your Cloud Services Name in the right pane, and select Configure from the top menu bar. In the new Azure management portal, select Browse from the left nav menu, select Cloud Services in the list, select your Cloud Service in the right pane, and click on Configuration in the Settings pane. Either of these bring you to a page where you can adjust those parameters. When you save it, the service is restarted using the new information you enter here. Any information entered here overrides anything you put in App.Config when you built and deployed your application. | https://azure.microsoft.com/pl-pl/resources/samples/event-hubs-dotnet-importfromweb/ | CC-MAIN-2017-26 | refinedweb | 984 | 56.59 |
I've made a change to the 1602 OLED library and also added a new function.
I've modified the sendString() function to now include the cursor position data and added a new function called sendFloat(), this allows you to send float data values, such as temperature, to the LCD ant it gets converted to a string before being sent to the display.
The sendFloat function takes the following parameters: float value, minimum length inc. decimal point, number of positions after the decimal point, the start column position, the start row position.
I've updated the library in my GitHub repository here
Please note that older sketches using the sendString() function will not work with the new library and will need to be altered.
/*
Demo sketch to display Strings and float values on the OLED
1602 display from Wide.HK. This uses a Lbrary that I've
put together containing some basic functions.
The I2C Address is set to 0x3C in OLedI2C.cpp
Phil Grant 2013
*/
#include "Wire.h"
#include "OLedI2C.h"
OLedI2C LCD;
float digit;
void setup()
{
Wire.begin();
LCD.init();
digit = 21.6;//This would normally be the float value returned from a temp sensor or other sensor
}
void loop()
{
LCD.sendString("Temp",0,0);// now includes the cursor position data (col, row)
LCD.sendFloat(digit,5,2,7,0);//Send the string to the display
while(1);
}
Wednesday, 6 November 2013
Hot water control using a Raspberry Pi Zero W
Following on from the first blog about the hot water heating control here's what I put together for the mounting. Whilst looking for a...
| https://gadjetsblog.blogspot.com/2013/11/ | CC-MAIN-2020-40 | refinedweb | 268 | 64.51 |
Sergey Senozhatsky wrote:> On (10/29/10 13:16), Paul E. McKenney wrote:> > Interesting...> > > > The task-list lock is read-held at this point, which should mean that> > the PID mapping cannot change. The lockdep_tasklist_lock_is_held()> > function does lockdep_is_held(&tasklist_lock), which must therefore> > only be checking for write-holding the lock. The fix would be to> > make lockdep_tasklist_lock_is_held() check for either read-holding or> > write-holding tasklist lock.> > > > Or is there some subtle reason that read-holding the tasklist lock is> > not sufficient?This was discussed in the thread at .Quoting from one of posts in that thead Usually tasklist gives enough protection, but if copy_process() fails| it calls free_pid() lockless and does call_rcu(delayed_put_pid().| This means, without rcu lock find_pid_ns() can't scan the hash table| safely.And now the patch that adds rcu_lockdep_assert(rcu_read_lock_held());was merged in accordance with that comment.Therefore, I thing below change is not good.> Should it be changed to (let's say)> > struct task_struct *find_task_by_pid_ns(pid_t nr, struct pid_namespace *ns)> {> - rcu_lockdep_assert(rcu_read_lock_held());> + rcu_lockdep_assert(rcu_read_lock_held() || lockdep_tasklist_lock_is_held());> return pid_task(find_pid_ns(nr, ns), PIDTYPE_PID);> }Regards. | https://lkml.org/lkml/2010/10/30/60 | CC-MAIN-2015-32 | refinedweb | 178 | 58.38 |
it a static variable or field.
- These variables belong with a class and can also be called class fields/variables.
- Every object of a class has a same static variable, because there are no copy of a static variable is being made, all objects share that single static field.
- A static variable must be accessed by its class name you do not need to create an object to access it.
- Static variables are being called first than non-static or instance variables because static variables load into memory at compile time while instance variables loads into memory at run-time after object initialization.
Example:
using System; namespace csharpBasic { class MarkSheet { // Class scope is started. // Static fields/variables are declared. public static string StudentName; public static string StudentAddress; public static string Asp; public static double AspMarks; } // Class scope is ended. class Program { // Static main method void type declaration. static void Main(string[] args) { // Static variables are being initialiazed through a class name (MarkSheet) itself by .(dot) operator. MarkSheet.StudentName = "abc"; MarkSheet.StudentAddress = "xyz"; MarkSheet.Asp = "Asp.net"; MarkSheet.AspMarks = 80; // Static fields/variables are being printed. Console.WriteLine("Student name: {0}", MarkSheet.StudentName); Console.WriteLine("Student address: {0}", MarkSheet.StudentAddress); Console.WriteLine("Subject 1: {0}", MarkSheet.Asp); Console.WriteLine("Asp.net Marks: {0}", MarkSheet.AspMarks); Console.ReadKey(); } /* The Output will be: Student name: abc Student address: xyz Subject 1: Asp.net Asp.net Marks: 80 */ } } | https://tutorialstown.com/csharp-static-variables/ | CC-MAIN-2018-43 | refinedweb | 234 | 51.95 |
The case against checkPermission()
A few years back, Denis Pilipchuk wrote a four-part series on Java versus .NET Security from a number of angles, including code containment, crypto, code protection, authorization and authentication, and more. The series was later assembled into a "Short Cut" PDF. Editing these articles, I have to say I was a little overwhelmed by the depth with which he delved into the topic. For the layman, the application developer who wants to be security-aware without having to become a security expert, you often just want to know if Java security is "good enough".
And that raises the question "good enough... how?" Secure enough? Configurable enough? Flexible enough? Practical enough? Performant enough? All of these factors often tend to work against each other: offer too much flexibility and you might unwittingly open a security hole. But be too stringent and application developers can't do anything interesting.
Denis re-frames the discussion in today's Feature Article,
Pitfalls of the Java Permissions Model, taking a historical look at how the call-stack based concept of permissions emerged:.
In Java Today,
Caciocavallo project co-founder Roman Kennke has apparently made the first OpenJDK commit by someone with no Sun ties. As he explains in his blog, ."
Noted in Kirill Grouchnikov's Swing Links of the Week, Maxim Zakharenkov has posted the slides (PDF) for his JavaZone presentation on debugging with SwingExplorer. The slides show a simple but buggy Swing application, and how SwingExplorer can be used to track down problems with layout, painting, event-listening, and misuse of the event-dispatch thread
Don't forget that the 2009 Mobile, Media, and eMbedded Developer Days Call for Papers closes today, September 30, for technical sessions, panel sessions, hands on talks, and lightning talks. If you want to submit any of these for consideration, visit the Call for Papers page and follow the instructions there.
Today's Weblogs begin with Eamonn McManus' announcement
JMX Namespaces now available in JDK 7. "The JMX Namespace feature has now been integrated into the JDK 7 platform. You can read about it in detail in the online documentation for javax.management.namespace. Here's my quick summary."
Kohsuke Kawaguchi follows up yesterday's announcement of an easier-to-install Hudson for Windows with winsw: Windows service wrapper in less restrictive license. "I wrote a little program that can host any executable (Java included) as a Windows service, and made it available in the BSD license."
Finally, in GlassFish Migration: WebLogic's Split Directory to Ear, Sekhar Vajjhala writes, "in my one my previous blogs, I wrote about how GlassFish verifier can be used to verify an archive when migrating J2EE/Java EE applications to GlassFish. Here I will show how to generate an Java EE ear file starting from WebLogic's Split Directory Development. "
In today's Forums, Fabian Ritzmann
explains where JAX-WS keeps its Maven POMs in the follow-up
Re: Using WSIT with maven. "JAX-WS has a number of POMs that you might be able to use. WSIT doesn't really add that many dependencies. XWSS and FastInfoset are the ones I can think of from the top of my head. You can find the POMs in CVS:
CVSROOT = :pserver:javanetuserid@cvs.dev.java.net:/cvs
Repository = jax-ws-sources/repo"
CVSROOT = :pserver:javanetuserid@cvs.dev.java.net:/
Repository = jax-ws-sources/repo
whartung explains the purpose and application of SOAP in
Re: Publishing a web API using glassfish, is there an easy way? "Believe it or not, SOAP is the big winner here for you. It pretty much does what you want to do Is it complicated? Yes, it CAN be. But if you're looking to do simple things, then SOAP is simple (well, simple enough), particularly in Glassfish. SOAP suffers from several things. It's biggest problem is simply that it has been moving SO fast in the past several years. By the time folks implement and agree on one aspect, they're adding more to it."
whartung
Finally, Shai Almog offer a tip for LWUIT customization in
Re: How to change the "background selection color" of the command list. "The menu is a list hence the component within the list is the default cell renderer. The default cell renderer doesn't have its on UIID and so it uses the Label UIID which it derives. To replace that you can replace the menu renderer with any renderer you want that carries any UIID style you desire.". | http://weblogs.java.net/blog/editors/archives/2008/09/change_the_lock.html | crawl-002 | refinedweb | 752 | 53.61 |
1 Jan 2003 14:21
CustomDrawn TreeView
Manish Gupta <mrmangu@...>
2003-01-01 13:21:21 GMT
2003-01-01 13:21:21 GMT
Hello Wtl, I'm doing a custom drawn tree view control. I use both CDDS_ITEMPREPAINT and CDDS_ITEMPOSTPAINT. During CDDS_ITEMPREPAINT, I just change the font(sometimes), textcolor and text backcolor and return ( CDRF_NOTIFYPOSTPAINT | CDRF_NEWFONT ). During CDDS_ITEMPOSTPAINT, I change the default (+) icon that windows draw and replace it with a custom icon. I also do some drawing around the item rect. So far so good. But, when I call DeleteAllItem() and repopulate the tree, the item's text does not show up. All other things appear fine including the backcolor, the icon that i paint and the drawing around the item rect. They reappear when I resize the dialog. I've tried using Invalidate() and UpdateWindow() but this does not seem to help. It seems to be related to the call to DeleteAllItem() because if I don't call this things work as expected. But I can't live with not deleting the items. Any clues anyone? Thanks for all you help Manish _________________________________________________________________ MSN 8: advanced junk mail protection and 2 months FREE*. | http://blog.gmane.org/gmane.comp.windows.wtl/month=20030101 | CC-MAIN-2013-20 | refinedweb | 196 | 76.11 |
...
If you spend any time exploring Angular 2 you'll quickly see that most of the content is geared towards developers using text editors like VS Code, Sublime, etc. Those are amazing editors, and I use them both on a regular basis, but I also write a lot of .NET code. That means I use Visual Studio as my primary IDE when writing code every day. It's a truly amazing tool, but it's not always clear how to get it to play nice with some of the web-based frameworks like Angular 2.
With Angular 2 Google made a strategic decision to use Typescript as the language, and I think this will really help Angular 2 adoption. There are tons of devs like me, who are used to awesome compilers, typed languages, and advanced features (generics, async/await, etc) that might have found the quirks of javascript a bit...ahem...frustrating at times. I'm not saying javascript isn't a useful language, or that it's too tough to understand, but I have spent many many hours hunting down simple bugs that a compiler would have caught without missing a beat. By baking Typescript right into Angular 2 the team at Google have made it so much more accessible to developers like myself.
So in this two-part series I want to walk through how a developer who is used to Visual Studio can start using Angular 2 in the IDE they know and love. In this first part we'll build a relatively simple API, and add Nancy to let us host the Angular 2 application using OWIN. In the second post of the series I'll walk through some of my experiences writing Angular 2 code within Visual Studio. For reference, I'm using Visual Studio 2015 Update 2 and ASP.NET 4.5 (i.e., not the bleeding edge ASP.NET Core stuff).
- 0-60 with Angular 2 & Visual Studio - Part 1 (this post)
- 0-60 with Angular 2 & Visual Studio - Part 2
The application
In this example I wrote a simple application that lets a user enter a zip code, and see the past high & low temperatures of the current day over the past 10 years. Visually the Angular 2 app looks like this:
Behind the scenes our API will take the provided zip code, query an external API to map that to a geo-coordinate. That coordinate is then used to query another API for the weather data. Then a response is built up and returned to the Angular app.
Create our Visual Studio web application
To start we can create our visual studio solution using the "blank solution" template:
I usually like to start with a blank solution so that things are named how I want, but you can also start with the ASP.NET web application directly too. Now to add our application we need to add a new project to the solution:
note, if you wanted to just host a website you can use the "add new website" option, but we want to build out an API so we need the web app.
Then, since we're going to be building our API using OWIN we want to start with the Empty template:
Start building out our API
To build out this API we're going to use OWIN, which is a really simple library Microsoft provides to build up a pipeline of actions to take when an HTTP request comes in. The thing I really like about OWIN applications is they are very modular, and it's easy for me to reason about what's actually happening in my application. They can also be hosted in just about anything, including windows services, so they're quite portable. Now, this means we need to plumb things up ourselves, but it's really not too difficult.
To get started we're going to need to pull in the OWIN nuget packages for WebAPI, and another package that will actually kick-start the OWIN process, so in the package manager console go ahead and run the following:
PM> install-package Microsoft.AspNet.WebApi.Owin PM> install-package Microsoft.Owin.Host.SystemWeb
That will pull in a few libraries and give us what we need to host an API using the OWIN pipeline. I also usually look for any updates that are available at this time too since, depending on your settings, Visual Studio will pull in the oldest version of a package available to satisfy any dependencies.
With the nuget packages in place we can start writing some code. With OWIN you need to provide what's called the Startup class. You can either use a convention and call the class
Startup, or use an assembly attribute to use something else, but it should look like the following:
using Owin; namespace WeatherHistory.Web { public class Startup { public void Configuration(IAppBuilder appBuilder) { } } }
Now this doesn't do anything yet, but if you were to debug this and set a breakpoint on the start of the Configuration method, you would see that it's called:
If you're debugging the project at this point and not hitting the breakpoint, stop and hit the interwebs to figure things out.
Now that we know our OWIN configuration code is being hit we can make some simple updates to the code to add in web api, and even host the API at a specific path within the site. First however, let's add in the shell of our only API controller.
Adding the first WebAPI controller
Within WebAPI there is a base class provided that makes it really simple to add endpoints. You simply derive from a class called
ApiController and start adding your functionality. We're going to be building a weather history application, so we'll add a simple endpoint at
/api/temperatures that will accept a zip code as a query parameter.
First, we need a model (or class) that we'll be returning from the API. This object will contain the location details, and then includes a list of historical dates:
using System; namespace WeatherHistory.Web.Models { public class ZipcodeWeather { public string City { get; set; } public string State { get; set; } public float Latitude { get; set; } public float Longitude { get; set; } public List<HistoricalTemperature> HistoricalTemperatures { get; set; } public ZipcodeWeather() { HistoricalTemperatures = new List<HistoricalTemperature>(); } } public class HistoricalTemperature { public DateTime Date { get; set; } public float Low { get; set; } public float High { get; set; } } }
Once we have that we can build out an initial version of our API controller that will accept the zip code as a query parameter, and return a list of historical temps:
using System; using System.Collections.Generic; using System.Web.Http; using WeatherHistory.Web.Models; namespace WeatherHistory.Web.Api { //) { // Create our dummy response just to show the API is working // var zipcodeWeather = new ZipcodeWeather { City = "St. Paul", State = "MN", Latitude = 44.9397629f, Longitude = -93.1410727f }; // Now just add a list of fake temperatures to the return object // zipcodeWeather.HistoricalTemperatures.AddRange( new List<HistoricalTemperature> { new HistoricalTemperature { Date = DateTime.Now, High = 75, Low = 50 }, new HistoricalTemperature { Date = DateTime.Now.AddYears(-1), High = 75, Low = 50 }, new HistoricalTemperature { Date = DateTime.Now.AddYears(-2), High = 75, Low = 50 }, new HistoricalTemperature { Date = DateTime.Now.AddYears(-3), High = 75, Low = 50 }, new HistoricalTemperature { Date = DateTime.Now.AddYears(-4), High = 75, Low = 50 } } ); // Use the WebAPI base method to return a 200-response with the object as // the payload // return Ok(zipcodeWeather); } } }
Now that we have the controller we need to actually wire up WebAPI into the OWIN pipeline. What I normally do is have all my API routes exist underneath a
/api/ path within my site. However, I don't want to embed that
/api/ in all of my controllers since I might change my mind some day. With OWIN this is super simple, and involves using a simple technique when configuring web api:
using Owin; using System.Web.Http; namespace WeatherHistory.Web { public class Startup { public void Configuration(IAppBuilder appBuilder) { // Host all the WebAPI components underneath a path so we can // easily deploy a traditional site at the root of the web // application // appBuilder.Map("/api", api => { // This object is what we use to configure the behavior // of WebAPI in our application // var httpConfiguration = new HttpConfiguration(); // We'll use attribute based routing instead of the // convention-based approach // httpConfiguration.MapHttpAttributeRoutes(); // Now add in web api to the OWIN pipeline // api.UseWebApi(httpConfiguration); }); } } }
With this code in place you can debug the solution, which should automatically open a new page in your default browser. You'll have to update the URL to use the route we specified, but you should see something like this:
Now don't freak out because you're seeing a blob of XML! That's simply because WebAPI supports returning data to the client in either XML or JSON, but if no specific format is requested it will default to XML. If we switch over and use Postman and request JSON we'll see data that looks more familiar perhaps:
Now we're starting to get somewhere, but you're probably noticing that the casing on the properties matches the .NET class. In javascript that isn't the convention used widely, so we need to update WebAPI to serialize our data differently. This involves a simple update the configuration we provided it to it uses camel case when serializing the .NET classes:
// We'll use attribute based routing instead of the // convention-based approach // httpConfiguration.MapHttpAttributeRoutes(); // Change the serialization so it does camelCase // var jsonFormatter = httpConfiguration.Formatters.JsonFormatter; var settings = jsonFormatter.SerializerSettings; settings.ContractResolver = new CamelCasePropertyNamesContractResolver(); // Now add in web api to the OWIN pipeline // api.UseWebApi(httpConfiguration);
By updating the JSON formatter our API is using we now get data in the format we want:
Adding real functionality to the API
Ok, now it's time to add some real functionality to our API that will take the zip code provided, map it to a latitude/longitude using zipcodeapi.com, and query The Dark Sky Forecast API to get historical temperature data. In this case the Forecast.IO API provides the historical data we need, but it requires the latitude and longitude to do so. We want to expose a service that only requires the zip code though, so we'll use the zipcodeapi.com API to do that conversion first.
Let's first update our API to convert the zip code, and then we'll worry about pulling the actual data. To query the zip code API we'll use the RestSharp nuget package, which makes it really easy to create REST queries. The docs for that API show that it wants a request like so:<api_key>/info.<format>/<zip_code>/<units>
So let's make a small update to our API controller to make this call:
// }; // Use the WebAPI base method to return a 200-response with the object as // the payload // return Ok(zipcodeWeather); } /// <summary> /// Map a zip code string into a response that contains the latitude, /// longitude & city name. /// </summary> /// <param name="zipcode"></param> /// <returns>A valid object if the zip code could be mapped, otherwise null in any error condition.</returns> private ZipCodeApiResponse RequestGeoFromZipcode(string zipcode) { // Create a RestSharp client we can use to make the API call // var client = new RestClient(""); // Now build up a request that matches what this API is expecting. We include // out authentication token and the zipcode right in the URI, and that's simple // to do with RestSharp's url segments // var request = new RestRequest("/rest/{apiKey}/info.json/{zipcode}/degrees", Method.GET); request.AddUrlSegment("apiKey", ConfigurationManager.AppSettings["zip-code-api-key"]); request.AddUrlSegment("zipcode", zipcode); // Now any HTTP call will be asynchronous, but this is trivial to handle with // the async/await functionality available in .NET // var response = client.Execute(request); // Let's make sure the request is valid before attempting to decode anything // if (response.StatusCode != HttpStatusCode.OK) { return null; } // Finally, we need to "decode" the JSON data returned so we can load it // into our internal object. // var content = JObject.Parse(response.Content); // Just populate a new object using the key's that existed in the JSON // data returned // var zipCodeResponse = new ZipCodeApiResponse { Zipcode = Convert.ToString(content["zip_code"]), Latitude = Convert.ToSingle(content["lat"]), Longitude = Convert.ToSingle(content["lng"]), City = Convert.ToString(content["city"]), State = Convert.ToString(content["state"]) }; return zipCodeResponse; } }
Here's our updated
Web.config that pulls in our API key from a "secret" file:
<configuration> <appSettings file="appSettings.secret"> </appSettings> <system.web> <compilation debug="true" targetFramework="4.5.2" /> <httpRuntime targetFramework="4.5.2" /> </system.web> <!-- rest removed for brevity -->
There are a couple changes to note in this updated code:
- We check the response from the zip code API to ensure it's valid
- We're using the
JObjectclass of JSON.net to easily parse the json returned by the API
- We're adding our API key to the app settings, but using a separate file that isn't checked in
Outside of those two this is mostly straightforward C# code. Let's move on to querying the historical data now. Unlike the zip code API, there is a nice nuget package for querying the Forcast.IO API.
For this application we want to let the user get the past X years of temperatures on the current day for the provided zip code, but default to 10 if the number of years isn't provided. So what we're going to do is update our
Get method to take an optional parameters, and add a simple loop to the method to pull out the historical information. Here's the updated method: }; // Grab the current date so we can create offsets from it // var startDate = DateTime.Now; // Now loop according to 'years' and use the index each time to make a request back // in time // foreach (var offset in Enumerable.Range(0, (int) years)) { // Calculate the date for this iteration // var pastDate = startDate.AddYears(-offset); // Make the actual forecast.io call // var request = new ForecastIORequest(ConfigurationManager.AppSettings["forecast-io-key"], zipCodeResponse.Latitude, zipCodeResponse.Longitude, pastDate, Unit.us); var response = request.Get(); // Create the temp object we need to return and add it to the list // zipcodeWeather.HistoricalTemperatures.Add(new HistoricalTemperature { Date = pastDate, High = response.daily.data[0].temperatureMax, Low = response.daily.data[0].temperatureMin }); } // Use the WebAPI base method to return a 200-response with the list as // the payload // return Ok(zipcodeWeather); }
Now we've got a nice endpoint that will convert a zipcode into a list of historical weather data. We allow the user to specify how many years of history they want, but don't force them to do so. And all it took was a controller that has just over 130 lines of code (including the comments...which are verbose)!
Add Nancy for hosting web pages
As we build our Angular code base we'll of course need something to host the actual HTML pages, and for that we'll use Nancy. It is a simple library that provides really concise ways of routing HTTP requests to code. Because we're using OWIN, we can't use ASP.NET MVC even if we wanted to. To add what we need just install the nuget package built for hosting Nancy within OWIN (which pulls in the Nancy package automatically):
PM> install-package nancy.owin
Once we have the package installed we just need to wire Nancy into the OWIN pipeline, create a module that defines our route, and add a simple view that will be rendered for us. To keep things simple I'm just using Nancy's conventions, but you can easily override where files need to be for Nancy to find them. Here's what our project looks like now:
The root module is about as simple as it can get:
using Nancy; namespace WeatherHistory.Web { public class RootModule : NancyModule { public RootModule() { Get["/"] = _ => View["index"]; } } }
...and the view is nothing special yet, but it's prepped for our Angular 2 application:
<!DOCTYPE html> <html> <head> </head> <body> <my-weather-app> Nancy is working, but this means there's no Angular 2 yet! </my-weather-app> </body> </html>
...and how do we actually wire up Nancy? It's one line of code added right after the
appBuilder.Map block we created above:
// Add nancy to the pipeline after WebAPI so the API can // handle any requests it is configured for first // appBuilder.UseNancy();
So at this point we have a working WebAPI that can provide the weather data, and now a simple website that can provide dynamic HTML content in the same Visual Studio project. This means we can easily run this code, debug into it, and leverage all the power of that Visual Studio provides.
That will conclude part 1 of this series. In the next part we'll tackle adding the Angular 2 code and explore what the experience is like when using Visual Studio.
All of the code for the example application is available in my github repository. | https://blog.sstorie.com/0-60-with-angular-2-and-visual-studio-part-1/ | CC-MAIN-2018-05 | refinedweb | 2,839 | 52.7 |
Overview
I’ve recently been looking into the different ways in which you can integrate SharePoint Online (SPO) with Windows Azure—especially around integrating client-side code and services deployed in Azure as this tends to be a more prominent pattern. If you’ve been following my recent blog-posts, then you’ve certainly seen this.
Of late, though, I’ve been spending some time with JSON and JSONP (JSONP is JSON with Padding and supports cross-domain data loading/scripting by dynamically injecting a callback script through the use of ?callback=? at the end of the service URI reference in the client code). This is because I talk a lot about WCF services (and often refer to cross domain policy files if you use Silverlight), but in reality if we’re to think about the Web in the broader sense, then we need to ensure we’re also talking about JSON. Hence, this blog discusses how you can create a REST service and then expose that service with JSON formatting to a Web part in SPO that accepts the cross-domain call using JSONP. Take the following diagram, for example, which shows a SPO (or what could be a SP on-premises site) that consumes a REST service that consumes data—both of which are deployed to Windows Azure. While this blog post only discusses the SPO client app, you could certainly use the REST service in other clients as well.
To successfully get the sample that is discussed above working, at a high level you’ll need to do the following:
There’s a lot of guidance on the above, so I’ll call out the places where we tweaked the code with references back to some original walkthroughs/docs.
The Data
In this blog post, I’ll assume that you’ve got some data in a SQL Database already created. If you’ve not done this before, you can go here for overview information and here for specific Create SQL script direction. In this example, I’ve got a small table of Speaker data in it. The below screenshot shows the entity data model used in the example.
Once you’ve created your database, you can now create a service that wraps around that SQL Azure Database.
The Service
You have a couple of ways to build out a REST service. For those of you using more mainstream techniques, you can create an ASP.NET project and add a WCF Data Service item. You then add an entity data model (similar to the above schema), and by configuring the reference in the core service class you create a REST service for your SQL Database data. More recent methods use the Web API, which is a great way to leverage the MVC model/templates to build out REST services quickly and easily. If you’re new to the ASP.NET Web API, check this out for more details. (For this sample, I used VS 11 Beta with the MVC 4 templates and Azure 1.7 SDK/Tools installed. I then created a new Cloud project, selected the MVC 4 template and then selected the Web API method. After that, I added a controller for the Speaker entity data model. If you’re new to this and Web API, there is a great TechEd session here.) Bolded are the core verbs/URI paths; both a GET that either return all Speakers or a specific speaker.
…
namespace MyFirstWebAPI.Controllers
{
public class SpeakerController : ApiController
{
private TechEdEntities db = new TechEdEntities();
// GET api/Speaker
public IEnumerable<Speaker> GetSpeakers()
{
return db.Speakers.AsEnumerable();
}
// GET api/Speaker/5
public Speaker GetSpeaker(int id)
Speaker speaker = db.Speakers.Single(s => s.ID == id);
if (speaker == null)
{
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.NotFound));
}
return speaker;
}
Regardless of whether you use WCF Data Services or Web API, you need to ensure you format the return data from your REST service you need to handle JSONP. A good primer on how to do this can be found in this excellent blog post. Directly using Alex’s model, we leveraged this pattern and then simply created a client application that ingested the service with this custom JSONP formatter. The core JsonpMediaTypeFormatter class is below.
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using System;
using System.IO;
using System.Web;
using System.Net;
public class JsonpMediaTypeFormatter : JsonMediaTypeFormatter
private string callbackQueryParameter;
public JsonpMediaTypeFormatter()
SupportedMediaTypes.Add(DefaultMediaType);
SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript"));
MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", DefaultMediaType));
}
public string CallbackQueryParameter
get { return callbackQueryParameter ?? "callback"; }
set { callbackQueryParameter = value; }
public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext)
string callback;
if (IsJsonpRequest(out callback))
return Task.Factory.StartNew(() =>
var writer = new StreamWriter(stream);
writer.Write(callback + "(");
writer.Flush();
base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait();
writer.Write(")");
});
else
return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext);
private bool IsJsonpRequest(out string callback)
callback = null;
if (HttpContext.Current.Request.HttpMethod != "GET")
return false;
callback = HttpContext.Current.Request.QueryString[CallbackQueryParameter];
return !string.IsNullOrEmpty(callback);
The Web API was wrapped in a Windows Azure cloud project, which configures the service to be deployed to Azure. Note that you can start with a Windows Azure Cloud project as I noted above or you can add it later on depending on what project template you’re using. (I would recommend starting with it if you’re planning on deploying the app to the cloud.) When you’ve deployed the REST service to Windows Azure, you can then use the REST URI to access data. For example, by inputting a URI similar to the below
you’d return a single JSON object that looks like the following (this is returned by adding a “1” at the end of the REST URI):
"$id":"1","ID":1,"Name":"Steve Fox","Title":"Director","Presentation":"Overview of Azure","Skill_Level":200,"Abstract":"Overview of Azure","Rating":3,"Comments":"Awesome Demos!","EntityKey":{"$id":"2","EntitySetName":"Speakers","EntityContainerName":"TechEdEntities","EntityKeyValues":[{"Key":"ID","Type":"System.Int32","Value":"1"}]}
Using a combination of semantic HTML and jQuery, you’re able to parse the JSON/JSONP and then make some use of it in the web part.
The Client
So, most importantly the question is does this work within SPO—which has HTTPS enabled and securing the SPO sites. Here you can create a sandboxed visual web part and format the web part control with the following markup. You’ll note that I’m referencing the jQuery libraries directly from within my SPO site collection. The heavy lifting here is done by the $(document).ready() function, which uses the getJSON method which uses the core ajax function to make the call to the REST endpoint and then handle the parsing of the JSON object. Here, I’ve used the each and append methods to add all returned elements by default to a <list> object.
<%@="SalesDataWebPart.ascx.cs" Inherits="SPSalesDemo.SalesDataWebPart.SalesDataWebPart" %>
<script src=</script> <script type="text/javascript" src=></script> <script type="text/javascript">
$(document).ready(function () {
var spList = $("#speakerList"); var i = 0; var speaker = new Object;
$.getJSON(?, function (data) { $.each(data, function (index, value) { spList.append($("<li>" + index.valueOf(i) + ": " + value.valueOf(i) + "</li>")); i++; }); }); });
</script>
<h2> Speaker Information (Cross-Domain) </h2> <br /> <li id="speakerList"></li>
The markup is pretty bare-bones, and you could use some formatting or additional logic to filter out unwanted properties from the JSON object. You can see below what the JSON returns by viewing the Locals in IE’s F12 debugging experience. You could also use some jQuery data templating called data linking—for more information on this go here.
While not exactly exciting, the above JSON object and client-side code parses and renders the data in the following way. Here you can see that I’ve just accepted everything coming back off the wire, but you’d want to either serialize into a local object or use another method to filter out the pieces of data that you wouldn’t want to expose (e.g. $id and EntityKey).
So, we’ve proved it works. Does that mean you should use JSONP? While it is is powerful and does provide you with the ability to use a different method than using Silverlight or jQuery for WCF service calls, it also comes with a risk: because JSONP injects script at runtime within your page, you should have a high degree of trust in the service code. Also, there are limitation of JSONP such as poorer error handling. You’d also want to run some tests around performance to ensure you get the best possible user experience when comparing cloud-based services.
Summary
In summary, this blog post discussed the use of JSONP/JSON to manage cross-domain service calls. It also discussed using the MVC 4 Web API to build and deploy your REST services. You could think of this as an additional pattern for integrating services and SPO using Windows Azure. Others I’ve discussed include Silverlight or JavaScript and leveraging the SP Client Object Model to integrate with SharePoint.
One thought to throw out there is where cross-origin resource sharing (CORS) fits into the above. There is a great blog post here that discusses how adding a wildcard Header amendment such as the below could enable cross-domain scripting as well.
msg.Headers.Add( “Access-Control-Allow-Origin”, “*” );
This comes at a lower custom code price (i.e. no need for custom JSON formatters) because it is supported at the browser level. The problem is that not every browser supports CORS in the same way.
More on this in the future. | http://blogs.msdn.com/b/steve_fox/archive/2012/07/01/jsonp-sharepoint-online-amp-windows-azure.aspx | CC-MAIN-2014-35 | refinedweb | 1,607 | 54.22 |
This code resizes the bar to fit the stage height.
import flash.display.Stage; import flash.events.Event; var myStage:Stage = this.stage; myStage.align = StageAlign.TOP_LEFT; myStage.scaleMode = StageScaleMode.NO_SCALE; myStage.addEventListener(Event.ENTER_FRAME,initSite); function initSite(e1:Event):void { sidemenu_mc.height = myStage.stageHeight; }
sidemenu_mc is the bar, which is 180px wide.
The problem is that the bar appears, but only a bit, something like 2 or 3px, then it stops.
I guess it stops as the resize is executed, infact if I comment the line with the resize command the animation goes on, but of course without the resize.
Is it normal this behaviour? How to solve the problem?
This post has been edited by Alhazred: 11 October 2010 - 05:40 AM | http://www.dreamincode.net/forums/topic/194454-the-resize-stops-the-animation/ | CC-MAIN-2017-13 | refinedweb | 124 | 69.38 |
unique_list
An implementation of
List that enforces all elements be unique.
Usage
import 'package:unique_list/unique_list.dart';
UniqueList is an implementation of
List that doesn't allow the same element
to occur in the
List more than once (much like a
Set.) Elements will be
considered identical if comparing them with the
== operator returns
true.
The
UniqueList class implements
List, as such it has all of the same methods
and parameters of a
List, can be used interchangeably with a
List, and be
provided to any parameter that enforces a
List.
The default constructor is identical to
Lists', accepting an optional
length
parameter. If
length is provided the
UniqueList will be a fixed-length list.
/// Create an empty [UniqueList]. final list = UniqueList(); /// Create an empty [UniqueList] of [int]s. final integers = UniqueList<int>(); /// Create a fixed-length [UniqueList] of [int]s. final fiveIntegers = UniqueList<int>(5);
By default,
UniqueList doesn't allows for multiple instances of
null to be
contained within the list, unless creating a fixed-lenght list. To create a
UniqueList that allows for multiple instances of
null to occur, the
nullable
parameter can be set to
true.
/// Create an empty [UniqueList] that allows multiple instances of `null`. final list = UniqueList.empty(nullable: true);
Strict Lists
By default,
UniqueList behaves like a
Set, when an element that already exists
in the list is added to it, the list will be left as it was. The
UniqueList.strict
constructor can be used to create a list that will throw a
DuplicateValueError
instead.
final list = UniqueList<int>(); list.addAll([0, 1, 2]); list.add(0); print(list); // [0, 1, 2] final strictList = UniqueList<int>.strict(); strictList.addAll([0, 1, 2]); strictList.add(0); // This will throw a [DuplicateValueError].
Factory Constructors
UniqueList has all of the same factory constructors as a regular
List, with
the exception of
List.filled, as the values created by
filled would not be
unique.
Each of
UniqueList's factory constructors have a
strict and a
nullable
parameter, and most have a
growable parameter like
List.
UniqueList.from
/// Create a new [UniqueList] list containing all elements from another list. final list = UniqueList<int>.from([0, 1, 2]); final strict = UniqueList<int>.from([0, 1, 2], strict: true); final nullable = UniqueList<int>.from([0, 1, 2], nullable: true);
UniqueList.of
/// Create a new [UniqueList] list from an iterable. final list = UniqueList<int>.of([0, 1, 2]); final strict = UniqueList<int>.of([0, 1, 2], strict: true); final nullable = UniqueList<int>.of([0, 1, 2], nullable: true);
UniqueList.generate
/// Generate a new [UniqueList] using a generator. final list = UniqueList<int>.generate(5, (index) => index); // [0, 1, 2, 3, 4] final strict = UniqueList<int>.generate(5, (index) => index, strict: true); final nullable = UniqueList<int>.generate(5, (index) => index, nullable: true);
UniqueList.unmodifiable
UniqueList.unmodifiable is the only standard factory constructor without a
strict
parameter, as it isn't necessary if the list can't be modified.
/// Create an unmodifiable [UniqueList] from an iterable. final list = UniqueList<int>.unmodifiable([0, 1, 2]); final nullable = UniqueList<int>.unmodifiable([0, 1, 2], nullable: true);
Constructor Errors
Attempting to construct a strict list that contains multiple instances of the same
element, will throw a
DuplicateValuesError, as opposed to the
DuplicateValueError
thrown when attempting to add a duplicate element to a list.
A
DuplicateValuesError will also be thrown if attempting to construct a fixed-length
list that contains multiple instances of the same element.
Adding and Inserting Elements
Adding and inserting values into a non-strict
UniqueList have different behavior
when a duplicate element is provided. Both will throw a
DuplicateValueError
if adding or inserting duplicate elements into a strict list.
Add and AddAll
When adding elements into a list with the
add or
addAll method, any duplicate
values will be ignored.
final list = UniqueList<int>.from([0, 1, 2]); print(list); // [0, 1, 2] list.add(3); print(list); // [0, 1, 2, 3] list.add(2); print(list); // [0, 1, 2, 3] list.addAll([0, 1, 4, 5]); print(list); // [0, 1, 2, 3, 4, 5]
Insert and InsertAll
When inserting one or more elements into the list with the
insert or
insertAll
method, any existing instances of any of the elements being inserted will be removed,
shifting the indexes of all elements occuring after the one(s) removed down.
final list = UniqueList<int>.from([0, 1, 2]); print(list); // [0, 1, 2] list.insert(0, 3); print(list); // [3, 0, 1, 2] list.insert(3, 3); print(list); // [0, 1, 2, 3] list.insertAll(3, [0, 1, 2]); print(list); // [3, 0, 1, 2]
Setting Values
When setting values with the
setAll,
setRange,
first,
last, or the
[]=
operator a
DuplicateValueError will always be thrown, regardless of whether the
list is strict or not, unless the resulting list does not contain any duplicate
values once all values have been set.
final list = UniqueList<int>.from([0, 1, 2]); print(list); // [0, 1, 2] list.setAll(0, [0, 1, 2]); // Throws a [DuplicateValueError]. list.setRange(1, 2, [3, 4]); print(list); // [0, 3, 4] list.setRange(0, 1, [2, 3]); // Throws a [DuplicateValueError].
Note: In order to comply with
List, the
fillRange method is provided, but
will always throw a
DuplicateValueError unless the value being filled is
null
in a nullable list, or if only a single element is being set.
The ToUniqueList Extension Method
As many of
List's methods return an
Iterable, they're often cast back to a
List using
Iterable's
toList method. To follow the same pattern, this package
extends
Iterable with the
toUniqueList method.
Like
toList, the
toUniqueList method contains a
growable parameter, in
addition to the
nullable and
strict parameters, which by default are
true
and
false respectively.
var list = UniqueList<int>.from([0, 1, 2, 3, 4]); final reversed = list.reversed.toUniqueList(); print(reversed); // [4, 3, 2, 1, 0] | https://pub.dev/documentation/unique_list/latest/ | CC-MAIN-2021-25 | refinedweb | 972 | 59.09 |
I'm trying to find out a way in python to redirect the script execution log to a file as well as stdout in pythonic way. Is there any easy way of acheiving this?
I came up with this [untested]
import sys class Tee(object): def __init__(self, *files): self.files = files def write(self, obj): for f in self.files: f.write(obj) f.flush() # If you want the output to be visible immediately def flush(self) : for f in self.files: f.flush() f = open('out.txt', 'w') original = sys.stdout sys.stdout = Tee(sys.stdout, f) print "test" # This will go to stdout and the file out.txt #use the original sys.stdout = original print "This won't appear on file" # Only on stdout f.close()
print>>xyz in python will expect a
write() function in
xyz. You could use your own custom object which has this. Or else, you could also have sys.stdout refer to your object, in which case it will be tee-ed even without
>>xyz. | https://codedump.io/share/GaSt9Z37VERr/1/output-on-the-console-and-file-using-python | CC-MAIN-2017-47 | refinedweb | 173 | 77.64 |
Python – Directory Listing
Python can be used to get the list of content from a directory. We can make program to list the content of directory which is in the same machine where python is running.
We can also login to the remote system and list the content from the remote directory.
Listing Local Directory
In the below example we use the listdir() method to get the content of the current directory. To also indicate the type of the content like
file or directory, we use more functions to evaluate the nature of the content.
for name in os.listdir('.'): if os.path.isfile(name): print 'file: ', name elif os.path.isdir(name): print 'dir: ', name elif os.path.islink(name): print 'link: ', name else: print 'unknown', name
When we run the above program, we get the following output −
>file: abcl.htm dir: allbooks link: ulink
Please note the content above is specific to the system where the python program was run. The result will vary depending on the system and its content.
Listing Remote Directory
We can list the content of the remote directory by using ftp to access the remote system. Once the connection is established we can use commands that will
list the directory contents in a way similar to the listing of local directories.
from ftplib import FTP def main(): ftp = FTP('')('pub/academic/biology/') # change to some other subject entries = print(len(entries), "entries:") for entry in sorted(entries): print(entry) if __name__ == '__main__': main()
When we run the above program, we get the following output −
>(6, 'entries:') INDEX README acedb dna-mutations ecology+evolution molbio | https://scanftree.com/tutorial/python/python-network-programming/python-directory-listing/ | CC-MAIN-2022-40 | refinedweb | 271 | 53.31 |
WHAT IS C++?
C++ was discovered by Bjarne Stroustrup starting in 1979 at Bell Labs.C is powerful structured language. C is reliable, simple and easy to use and has ability to extend itself. But it is observed that certain real life problems are difficult to code in C. As the programs grew larger, even the structured approach fails to show the desired result .The programs becomes difficult to maintain and reusability of programs decreases. Bjarne Stroustrup developed C++ based on C. This is the reason C++ is called as incremented version of C.
Most of the C features are also available in C++. In addition to those, Bjarne Stroustrup added features of Object Oriented Programming (OOP). OOP is an approach to program organisation and development . In OOP the emphasis is on data rather than procedure.Programm are divided into objects and it follows the bottom up approach in program design .The approach of OOP is more closer to real life problems. Suppose you want to build a house or repair your bike, or even to take admission for a course, first you think about the object and its purpose and behaviour. Then you select your tools and procedures. The solution fits the problem. In real life many times we use the same name but with different reference. As human beings we understand it.
C++ provides constructs so that we can assign same name, to similar actions. The object – oriented features such as classes, function overloading, operator are added in C++. This makes C++ an Object Oriented Programming Language.Editors, Compliers, Data bases, Communication Systems, and any real life complex system to be developed in C++. C++ programs can be easily maintained and expanded. A new feature can be easily added in C++ program. In C++, The elements of GUI (Graphical user In menus, windows etc can be developed. Now a days the demand for GUI based software is increasing.)
C++ runs on a variety of platforms that are Windows, Mac OS, ubuntu and some of UNIX. this post will help to learn from the strict to the extreme level of C++ with some funny tricks and exciting question giving you programmers a high level of control over system resources and memory.C++ is a very flexible language that was favouring software programming and also embedded resources.C++ is useful in many department like video editing gaming learning programming creating some every day app.
APPLICATION OF C++ language
1.It is very usefully in programming advance version of software and more frequent and easy to get comfortable with.Most of the new app and software are developing by this language.
2.It is fast and allow programs to have a great hand on hardware so it also very useful in gaming department.
3.It is mostly used in developing medicial and engineerings applications like software for ct scan ,MRI machine etc.
there are many more department where developer are much comfortable in using c++ and it a easy language to set hand on.
BASIC MCQ ABOUT C++:
1.C++ is a:
a. General purpose programming language
b.Client-side scripting language
c.Movie making program
2. Who discover C++?
a. Bjarne Stroustrup
b. Bjarne bell
c. Herb Sutter
3. In which year C++ was discovered ?
a. 1978
b.1979
c.1977
some theory question?
1.How C++ is different from other high-level languages?
2.Why C++ is called a superset of C?
3.What are the additional features in C++?
SOFTWARE REQUIRED TO STUDY:
- code block
- codelite
- conTEXT
- NewbieIDE
- Dev-C++
- ECLIPSE
- Notepad
- SkyIDE
- etc…
LETS START WITH SOME BASIC CONCEPT OF C++:
so lets begin with the our first program
HELLO WORLD!
before being with this a program is always a collection of commands and statements.you must be wondering what is command and statements. basically statement are the description that are given by command.
the code for “HELLO WORLD” program:
#include <iostream>
using namespace std;
int main()
{
cout << “Hello world”;
return 0;
}
OUTPUT:
Hello world
lets study the code term by term :
c++ has many headers in each of which contains information needed for the programs to work better and properly .In the program there is <iostream>
# :means the compiler’s pre-processor.
namespace std : tells the compiler to use standard namespace
{: beginning of the code
}: end of the code
<< :to insert the data
; : used to end each command in compiler
int main :the main part of there code from where the command has shared to processor that how exactly the program is going to work.
return 0: to terminate the main function.
CHARACTER SET:
A character can be a alphabet, digital symbol or special symbol used to represent information.collection of the character that are used in any language such set is called as character set of that particular langanue. in every language we can use only those character that are included in its set of character of the language.
example :
ALPHABET: A,B,C,_ _ _ _ _ Z
DIGITS : 0, 1,2,3,4,5,6,7,8,9
SPECIAL SYMBOL-!,@,#,$,%,^,&,*,”,<,>,?,. ETC
TOKENS:
smallest individual units in program are called tokens.
C++ consist of the this following tokens:
- keywords
- constant
- strings
- Operators
- Identifiers
The program are written as per the syntax rule of the language.syntax rule are grammatical rule for writing programs.
DATA TYPES:
Set of quantities which are of same kind and work similarly are Called as data type .C++ is rich in data type following are the data type in C++
- Built in type (basic or primary data type)
- User defined type
- Derived type (structured)
BUILT IN TYPE: are classified in 5 type
- Integer(int)
- Character(char)
- Float(float)
- Double(double)
- Void(void)
User. defined type: are classified in 4 type
- Structure
- Union
- Class
- Enumeration
one of the major advantages of c++is that the user defines the data type and that can be used as if build in data type.
Derived type: are classified in 3 type
- Array
- Functions
- Pointer
they are also called as secondary data type formed by using basic data type of the language.
To know more about the data types please check the next post about build in data types
Hope this post was useful and clear to understand if you guys have any doubt ragarding anything please comment below and let me know | https://betapython.com/beginning-of-c/ | CC-MAIN-2022-05 | refinedweb | 1,059 | 65.62 |
A file consists of sections that should be separated by blank lines and an optional comment identifying each section.
Files longer than 2000 lines are cumbersome and should be avoided.
For an example of a Java program properly formatted, see "Java Source File Example" on page 19.:
All source files should begin with a c-style comment that lists the class name, version information, date, and copyright notice:
/* * Classname * * Version information * * Date * * Copyright notice */
The first non-comment line of most Java source files is a
package statement. After that,
import statements can follow. For example:
package java.awt; import java.awt.peer.CanvasPeer;
The following table describes the parts of a class or interface declaration, in the order that they should appear. See "Java Source File Example" on page 19 for an example that includes comments. | http://www.oracle.com/technetwork/java/codeconventions-141855.html | CC-MAIN-2016-30 | refinedweb | 137 | 56.66 |
It was found that opencv3.1.0 has been released, and the computer just reconfigured the system. It is found that the configuration process of opencv2 is simpler than that of opencv2, and it has been adapted to vs2015.
Download and install opencv3.1.0
1. Download opencv3.1.0, enter the official website, and click opencv for windows to download.
Click Run to download the good file. In fact, the installation program of OpenCV is to unzip the files. Because there is only Disk C, a folder named opencv3.1.0 is directly created on disk C.
After selecting the path, click extract.
Opencv3.1.0 environment variable configuration
Select this computer (computer), right-click Properties > Advanced System Settings > environment variables > system variables > find path > add the corresponding path to the variable value. My path is C: opencv3.1.0 / opencv / build / x64 / vc14 / bin. Please input semicolon in English. This update found that the existing x86 folder has been deleted, that is to say, it does not support the X86 compilation of vs2015. This problem will be highlighted later. In addition, if you are vs2013, please select the vc12 folder. If you are an older vs version, it is recommended to choose other versions of OpenCV.
Create a WIN32 console project
1. Open vs2015 first
File > New > Project > Visual C + + new Win32 console project
2. Click next, click next, check the blank item, and then click finish
Vs2015 includes directory and Library Directory configuration
1. Now configure the directory
First create a. CPP source file under the source file
Named main.cpp
2. Then click view, find other windows in view, find property manager under other windows, and click open
3. Then there will be a window for the property manager. Next, click test, and there will be a folder called debug| x64 under it. Click open, and the name is Microsoft.Cpp . x64. User, right-click properties
4. Then select the VC + + directory under the general properties, and there will beInclude directoryandLibrary Directory, click the include directory and add the following three paths. In fact, these are the directories where the opencv related decompression files are located
C:\Opencv3.1.0\opencv\build\include
C:\Opencv3.1.0\opencv\build\include\opencv
C:\Opencv3.1.0\opencv\build\include\opencv2
These three paths should be modified according to the path of decompressing opencv3.1
5. Click the library directory to add the following path
C:\Opencv3.1.0\opencv\build\x64\vc14\lib
6. It is still the property page just now
Click linker, select input, and you will see additional dependencies on the right. Add the following file
opencv_world310d.lib
Note: here, we add debug mode. You will see d at the end of the file. If you want to add release mode, remove D
OpenCV_ world310.lib
display picture
1. The configuration has been completed in the above process. Let’s show a picture to verify whether the configuration is successful! First switch to Solution Explorer, and then click source main.cpp Add the following code
#include<opencv2\opencv.hpp> using namespace cv; int main() { Mat picture = imread(" wallpaper.jpg "); // pictures must be added to the project directory //That is, and test.cpp File in a folder!!! Imshow (picture); waitKey(20150901); }
Then click the local windows debugger or press F5 to run the program –
You’ll find that the report is wrong…
This should be the choice here
So you can display the picture. It’s too big… Just cut a part of it
summary
The above has completed the configuration of opencv3.1.0 in vs2015 under win10. It is found that with the change of OpenCV version, the configuration process is becoming easier and easier. I hope that we can learn image related knowledge and make progress together in the future. Next, I plan to take a look at the official tutorials in my spare time in combination with the “Introduction to opencv3 programming” by the God of maoxingyun
The above is the whole content of this article, I hope to help you in your study, and I hope you can support developeppaer more. | https://developpaper.com/detailed-process-of-opencv3-1-0-configuration-in-vs2015-under-win10/ | CC-MAIN-2021-21 | refinedweb | 691 | 67.35 |
0
Hi All. Wondering if someone can assist me with this code im trying to write. I have to make a small calculation with random variables but the problem i run into is that i have to pass the calculated variable back to the main method and display it that way. Im not very good with Java but im trying to learn. Here is what i have so far:
public class Calculator { public static void main(String[] args) { int price = 20000; int commission = 10; int discount = 15; int endPrice; endPrice = calculation(price); System.out.println("The total price after a " + commission + "% commission and a " + discount + "% discount is " + endPrice); } public static int calculation(int finalPrice) { int productPrice = 20000; int withSalesCommission = productPrice * (10/100); int withCustDiscount = productPrice * (15/100); productPrice = withSalesCommission - withCustDiscount; return productPrice; } }
When i run this it tells me that the endprice is equal to 0. I appreciate your help.
Sam | https://www.daniweb.com/programming/software-development/threads/308717/calculator | CC-MAIN-2017-47 | refinedweb | 151 | 50.77 |
Exp.
The ? is called a ternary operator because it requires three operands and can be used to replace if-else statements, which have the following form −
if(condition) { var = X; } else { var = Y; }
For example, consider the following code −
if(y < 10) { var = 30; } else { var = 40; }
Above code can be rewritten like this −
var = (y < 10) ? 30 : 40;
Here, x is assigned the value of 30 if y is less than 10 and 40 if it is not. You can the try following example −
#include <iostream> using namespace std; int main () { // Local variable declaration: int x, y = 10; x = (y < 10) ? 30 : 40; cout << "value of x: " << x << endl; return 0; }
When the above code is compiled and executed, it produces the following result −
value of x: 40 | http://www.tutorialspoint.com/cplusplus/cpp_conditional_operator.htm | CC-MAIN-2021-17 | refinedweb | 129 | 56.42 |
Overview of ADMIT¶s).
The primary operations supported by ADMIT are focused on analysis of images commonly produced by ALMA and similar radio telescopes. ADMIT has three primary functionalities:
- An automatic pipeline flow which produces a fixed set of science data products which are ingested into the ALMA Archive and available to the requester from the ALMA Archive. These ADMIT products are created from the ALMA image cubes after they are accepted as valid by the observatory. ADMIT products are not currently available from the ALMA Archive – they should become available later in 2016.
- A desktop environment where the astronomer can quickly create flows to produce and inspect ADMIT data products via a web or graphic file browser and access information in the form of XML table, PNG files, and FITS files. Because ADMIT is a Python environment, the user can choose to examine the data using the Python capabilities.
- A capability for astronomers to rerun pipeline flows with hand-tuned parameters and to modify flows based on existing ADMIT Tasks. Additionally, the advanced user may create new ADMIT tasks to suit the needs of their scientific goals.
Typically an astronomer will interact with ADMIT in one of two ways:
- By retrieving the ADMIT products from the ALMA Archive. These products will be in a gzipped tar file of size a few to a few tens of megabytes. Once untar’ed on the disk, these products can be viewed with a browser utilizing the index.html file provided or inspected directly with Unix or xml tools. Archive products available later in 2016.
- By creating a local ADMIT flow to do the user requested analysis, inspect the results, and then modify and improve the flow parameters to achieve the desired final data products..
ADMIT requires that you have the image cubes local to the execution machine to create new data products. You do not need to have the image cubes local to view pre-existing ADMIT products. The tarball from the ALMA archive will contain ADMIT products without the image cubes. If you wish to modify parameters and make new ADMIT products, you need to download the appropriate data cubes from the ALMA archive and then re-run the ADMIT flow.
A Short Summary of the Technical Side of ADMIT¶
ADMIT is Python-based software which utilizes CASA routines where possible and reads CASA image format or FITS. ADMIT is an add-on package to CASA and designed to be compatable with the CASA Python environment. Currently, compatability with CASA forces incompatability with some commonly used Python packages (e.g., APLpy, astropy). CASA must be installed on your computer and instantiated in your working shell in order to use ADMIT. ADMIT software must be installed on your system (see Install Guide).
The basic components of ADMIT are:
- ADMIT Task (AT): An ADMIT task is a Python script that accomplishes a specfic job. Each task has a specific set of keywords and a set of outputs. Most tasks follow a load-and-execute model so that they can be run automatically based on set inputs.
- Basic Data Product (BDP): A Basic Data Product is an output from an AT and often the input to another AT. The content of the BDP is defined by the AT that produces it. It can consist of XML, PNG images, and image data in CASA or FITS format. BDPs are written to disk.
- Flow: ADMIT Tasks are generally run sequentially to create a set of BDPs. Tasks are created and added to the Flow, and once the flow is set up, the user runs the flow which runs each task in turn. Tasks are connected to each other via BDPs – the output BDP of an AT is used as the input BDP to another AT downstream. ADMIT has a Flow Manager which maintains information about the sequence of ATs and the input and output BDPs. The Flow Manager allows the astronomer to re-run any sequence and only execute tasks whose input parameters have changed. Furthermore, if an input BDP or keyword value for one AT in a flow is altered, the Flow Manager knows to execute not only that AT, but any ATs further along in the flow that depend on the output of altered AT.
ADMIT operates on individual ALMA data cubes. The output BDPs are written to a directory named input-cube-name.admit, where the originating FITS file would be input-cube-name.fits. (As ALMA FITS file names can be rather long, the the user has the option to give an alias for the basename). Each data cube is associated with its own input-cube-name.admit directory. Within that directory, the admit.xml file contains metadata about the BDPs and the summary information that that is used to create the index.html file for displaying the BDPs in the browser. Each BDP created by an AT has a xml file (file extension .bdp) in this directory which contains information about the BDP and pointers to any PNG or image files associated with the BDP. Most users will never examine admit.xml or a BDP file directly, rather they will use the web browser interface or Python methods.
ADMIT expects to be in control the contents of its BDPs so users should not delete files in the input-cube-name.admit directory. Furthermore, if the input-cube-name.admit directory is deleted at the Unix level, all information about the flow and all data products are deleted.
Getting Started with ADMIT (for Linux users)¶
You should have already installed ADMIT on your local machine (see Install Guide).
In the shell that you want to work, there must be a path to CASA.
which casa
on your Unix command line and you should see the path to CASA. If not, you need to either install CASA or invoke your local script that defines your path to CASA. Similarly, admit should be in your path
which admit or echo $ADMIT
on your Unix command line and you should see the path to ADMIT. If you do not, go to the directory where ADMIT was installed and source the admit start-up script:
source admit_start.csh
You can type “echo $ADMIT” again and now you should see the path. archive, you should proceed to the next section to create simple ADMIT data products from an ALMA image in FITS or CASA format.
Getting Started with ADMIT (for OS X users)¶
You should have already installed ADMIT on your local machine (see Install Guide).
In the shell that you want to work, there must be a path to CASA.
which casa
on your OS X command line and you should see the path to CASA. If not, you need to either install CASA or invoke your local script that defines your path to CASA. Similarly, admit should be in your path
which admit or echo $ADMIT
on your OS X command line and you should see the path to ADMIT. If you do not, go to the directory where ADMIT was installed and source the admit start-up script:
source admit_start.csh
You can type “echo $ADMIT” again and now you should see the path.
There are now two more steps. First, CASA must be able to “see” where ADMIT is. The mac executable ‘casa’ or ‘casapy’ overwrites the system supplied path. To fix this, edit (in your home directory) the ~/.casa/init.py file to reflect both the ADMIT path and the ADMIT/bin path.
import os import sys try: admit_path = os.environ['ADMIT'] sys.path.append(admit_path) os.environ["PATH"] += os.pathsep + admit_path + '/bin/' + os.pathsep + '/usr/local/bin/' except KeyError: print("ADMIT path not defined. If you wish to use ADMIT, source the admit_start.[c]sh file.")
(you can find a template of this script in $ADMIT/scripts/casa.init.py) The second thing that must be done is that calls to the ADMIT-supplied script ‘casarun’ must be replaced with calls to the CASA-supplied command ‘casa-config.’ As an explicit example, one test that is run to establish the Python-path as seen by CASA is performed by running
make python1
This command in the Makefile reads ‘casarun bin/python-env’, and it will hang on OS X. Instead, this should be edited to read ‘casa-config –exec bin/python-env’. ALMA archive, you should proceed to the next section to create simple ADMIT data products from an ALMA image FITS file or CASA image.
Prepared ADMIT Recipes¶
ADMIT will provide standard recipe scripts for common flows. These can be invoked in CASA or at the shell command line. For example, to invoke the recipe Line_Moment in CASA:
CASA<1>: import admit CASA<2>: admit.recipe("Line_Moment","myimage.fits")
and at the shell command line:
admit_recipe Line_Moment myimage.fits
To see the list of available recipes, type admit_recipe with no arguments. There are some advanced Unix scripts to run ADMIT flows, but these are discussed below. See runa1 and admit1.py.
Making an ADMIT Data Product¶
ADMIT Tasks – which do the work – can be run directly within CASA from the command line, or from scripts in either the Unix or CASA environment. The goal of ADMIT is to produce, reproduce and simplify the production of data products of scientific interest to you, so ADMIT must internally keep track of what you are doing. To do this, ADMIT will create a “your-name-choice”.admit directory and store information there. This tracking capability also means that simple ADMIT usage will involve a couple of administrative steps.
Let’s start in the CASA environment. At the CASA prompt, type:
CASA <1>: import admit CASA <2>: p = admit.Project('your-name-choice.admit',dataserver=True) CASA <3>: t0 = p.addtask(admit.Ingest_AT(file='your-image-cube-name.fits', vlsr=10.0))
The admit.Project command initiates the project, opens the directory with the name that you gave and creates a Python ‘Admit object’ in memory named “p”. The “p” can be anything that you choose; as it will become the first piece of every project command you type, a short name is recommended. The dataserver=True flag causes ADMIT to start up a webpage for showing the results; more on that later (in ADMIT in Your Web Browser). The webpage will be blank until you actually perform calculations.
The
addtask() method (see Admit Project) puts an ADMIT task into your
flow—in this case, Ingest_AT—and returns a handle to the task (the
task’s ID number). The Ingest_AT brings an image cube
into ADMIT. If it is a FITS file, the image cube will be read into a CASA
image on disk. If it is a CASA image, Ingest_AT will just create an ADMIT
information file.
Note
Since CASA images generally do not have information about your source Vlsr, Ingest_AT is typically a good place to input it (in km/sec).
The “t0” (or whatever name you choose) is the ADMIT task number, which provides a handle to the Basic Data Product (BDP)—in some cases, multiple BDPs—produced by the task. BDP outputs from a task are numbered from zero and referred to with Python tuples such as (t0,0), which represents the first BDP output from task t0. (Since many tasks produce only one BDP, for convenience tuples such as (t0,0) can be abbreviated simply to t0, as shown in the following example.)
To make a moment map, such as zero, first and second moment maps, from the image cube, you would then type:
CASA <4>: t1 = p.addtask(admit.CubeStats_AT(ppp=True), [t0]) CASA <5>: t2 = p.addtask(admit.Moment_AT(mom0clip=2.0, numsigma=[3.0]), [t0, t1])
The CubeStats_AT will produce a series of statistics about its input data [t0]—shorthand for [(t0,0)]—which will be output in BDP (t1,0), the first (and only) BDP generated by the task, t1. The Moment_AT produces the requested moment map(s)—by default, just moment-0—for the image cube t0 that you digested. In this case, for the entire cube (all spectral channels) with a S/N cutoff of 3 times the RMS noise determined by CubeStats (the t1 input), and with the higher moment maps (1,2,3...) clipped to be valid only where the moment zero map is greater than 2 times the RMS. (In this example, no higher-order moments are produced.)
Note
The moments=[...] argument to Moment_AT specifies the list of moments to produce, each in its own BDP. For example, adding moments=[0,1,2] to the preceding call will direct Moment_AT to produce moment-0, moment-1 and moment-2 maps, which can be input to other tasks using the BDP handles (t2,0), (t2,1) and (t2,2), respectively.
Up to this point, you have just been creating a flow; the data products have not actually been calculated yet. You should have seen an “INFO” message as you entered each of the above lines. To execute your flow and create the BDPs, type:
CASA <6>: p.run()
p.run() causes ADMIT to calculate your data products. The data products can
be viewed in your local browser window—there should be one now created by
ADMIT. If not, you can start up the data browser by typing
CASA <7>: p.startDataServer()
If you already have a data server running, the above command, will inform you:
A data server for this Admit object is already running on localhost:NNNNN
where NNNNN is a port number. If so, look through the webpages in your browser to see if it is hiding among your tabs, or copy and paste the localhost:NNNNN to a new tab. You should now have a browser page with bars for Ingest, CubeStats and Moment, as well as a flow diagram. Click on the bars to see the products. In this case, the most interesting one is probably the moment-0 map, which is the emission in your cube integrated over frequency. Examining the flow diagram is a good way to visually explore how the tasks in your flow relate to each other.
Great. Now let’s say that you want a spectrum at the highest peak in your moment map. ADMIT can do that automatically given the Moment_AT output. To make the spectrum, you use the CubeSpectrum_AT:
CASA <8>: t3 = p.addtask(admit.CubeSpectrum_AT(), [t0, t2]) CASA <9>: p.run()
The p.run() command is needed again—the addtask() puts the task into the flow and p.run() executes it. Your browser page should now have a new line at the bottom which is labeled CubeSpectrum. Click on the bar and you will see your spectrum.
The ADMIT tasks, as they execute, create a python structure in memory containing all of the task and flow information, and they write out information, images, and files to the “your-name-choice”.admit directory. As long as you remain in your CASA session, you have access to the contents of the structure—you can add tasks to and re-execute the flow and your browser page will continue to update accordingly.
Note
To minimize execution time, ADMIT re-runs projects intelligently. Each time you add a task and re-run the flow, only the task(s) which have not yet been run (or are otherwise out-of-date; e.g., due to changing the task arguments) are executed. Unchanged tasks are skipped.
Using ADMIT Scripts¶
ADMIT can also be run from script files using either the Unix command line or the CASA command line. The direct connection to the browser page and the ability to dynamically add to flows from the command line is only available from within CASA because the CASA session keeps your python structures in active memory. When a script is run from the Unix command line, all memory-based products disappear when the script ends; however, ADMIT writes all of the products to persistent disk files so you can view your ADMIT products using the browser, as described in the next section, or modify and re-run the flow using a script file.
An ADMIT script looks very much like what you would type on the CASA command line. For example, the script below will create all of the same products in the CASA session of the previous section.
#!/usr/bin/env casarun # set up admit in the casa environment import admit # define project name p = admit.Project('your,0)]) p.run()
The script can be run in CASA using the “execfile” command, or
from the Unix command line by making the script file executable
(
chmod +x) and then executing it. The file containing your
script can be named ‘anything-you-want.py’.
The ‘your-name-choice.admit’ directory includes a file, admit0.py, containing a transcript (an ADMIT script) of the flow that created ‘your-name-choice.admit’. Comparing this script to the graphical representation of the flow (shown in the “Flow Diagram” tab at the top of the data browser window) can be instructive when learning how to create your own ADMIT scripts.
Warning
Flow transcripts are not intended to be used directly as script templates (although this will work in simple cases). In particular, flows containing tasks producing a variable number of BDP outputs, such as LineCube_AT, require special care—the transcript includes all literal outputs of such variadic tasks, whereas user scripts should assume only a single, placeholder output is present (see the following section for an example).
Molecular Line Identification¶
ADMIT is very useful for finding spectral lines in your data, identifying the molecular species and transition of the line, and cutting out a sub-cube which contains only the channels with line emission. The primary tasks for this purpose are LineID_AT and LineCube_AT. LineID_AT find the channel intervals with emission above a user-selected noise level and then tries to identify the lines in the Splatalog database. LineCube_AT cuts out sub-cubes for each identified line emission region and writes out a separate CASA image file for each.
Information about the Vlsr of your object is not passed down the ALMA imaging pipeline to your ALMA image cubes. Hence, ADMIT does not have access to the Vlsr or spectral line information that you input in your observing set up and correlator setting in the ALMA OT. The proper identification of lines is greatly aided by having the approximately correct Vlsr of your target source. You are allowed to put this value into ADMIT when you ingest your image cube, and/or when you run LineID_AT. If you use the Vlsr keyword in LineID_AT it overrides the value used in Ingest_AT.
A typical use of LineID_AT would look like this in]) t4 = p.addtask(admit.LineID_AT(csub=[0, 0], minchan=4, maxgap=6, numsigma=5.0), [t1, t3]) t5 = p.addtask(admit.LineCube_AT(pad=40), [t0, t4]) t6 = p.addtask(admit.Moment_AT(mom0clip=2.0, moments=[0, 1, 2]), [t5, t1]) t7 = p.addtask(admit.CubeSpectrum_AT(), [t5, (t6,0)]) p.run()
The CubeStats_AT is done to get the RMS noise in the cube and to generate two spectra: one consisting of the maximum flux density in each channel and the other the minimum. The CubeSpectrum_AT is run to get the spectrum at the position of the peak total integrated emission. Both of these BDPs are input to LineID_AT to estimate the emission segments and do the line identification. LineCube_AT produces one data cube for each segment found. Moment_AT and CubeSpectrum_AT are then repeated for each emission segment identified. (ADMIT automatically replicates the latter two tasks in the flow for each LineCube_AT output it finds—do not do this manually!)
At the present time, some (perhaps many) ALMA total power line cubes have baselines that are not “average” zero in the non-line channels. There are infrequently cases where the 7-m or 12-m interferometric maps have incorrect continuum subtractions but you are best off to correct that by remaking the maps in CASA based on a new continuum subtracted u,v dataset. For the total power data, the sequence would be similar to the above with the insertion of two new tasks: LineSegment_AT and ContinuumSub_AT. LineSegment_AT finds the channel segments with emission above your set noise level; ContinuumSub_AT does a spatial pixel by spatial pixel baseline removal in the spectral direction with the emission segments ignored in determining the baseline fit. The output of ContinuumSub_AT is a new image cube with the baseline removed – and that is then fed forward to the rest of.CubeSum_AT(numsigma=5.0, sigma=99.0), [t0, t1]) t3 = p.addtask(admit.CubeSpectrum_AT(), [t0, t2]) t4 = p.addtask(admit.LineSegment_AT(csub=[0, 0], minchan=4, maxgap=6, numsigma=5.0), [t1, t3]) t5 = p.addtask(admit.ContinuumSub_AT(fitorder=1, pad=60),[t0, t4]) t6 = p.addtask(admit.CubeStats_AT(ppp=True), [t5]) t7 = p.addtask(admit.CubeSpectrum_AT(), [t5, t6]) t8 = p.addtask(admit.Moment_AT(mom0clip=2.0, numsigma=[3.0]), [t5, t6]) t9 = p.addtask(admit.LineID_AT(csub=[0, 0], minchan=4, maxgap=6, numsigma=5.0), [t6, t7]) t10 = p.addtask(admit.LineCube_AT(pad=40), [t5, t9]) t11 = p.addtask(admit.Moment_AT(mom0clip=2.0, moments=[0, 1, 2]), [t10, t6]) t11 = p.addtask(admit.CubeSpectrum_AT(), [t10, t11]) p.run()
Interacting with Line ID¶
The identification of emission/absorption from specific molecular species and transitions is important to the scientific analysis of ALMA data. The general case of species/transition identification is a difficult problem due to the possibilities of complex line shapes and line blending, and the high density of potential matching lines in the Splatalog database. Add to this the range of physical conditions giving rise to molecular emission in the Universe (cold cores, hot cores, evolved stars, galaxies diffuse ISM) and the perfect identification of species/transition is not practical without significant a priori information, which is not available from the ALMA archive data at present.
LineID_AT attempts to identify lines based first on the most commonly observed species and transitions. CO, CS, HCN, CN, H2CO, and other such common species are given preference in a first search for indentification. The Tier 1 Database contains a list of these molecules along with their transitions. See the following section for a more detailed description of the database. After that a deeper search is done with either the CASA slsearch task or the online splatalogue database. There are several keywords in LineID_AT for controlling aspects of the search and identification. The following are the keywords that may be of the most use to the user.
The output of LineID_AT in the browser page includes a table of emission segments found, and identification for each segment if found. The LineId Editor mode in the browser (see tabs along the second line from the top of the ADMIT page for your data prducts). Click on that button and you initiate the capability to edit the results from LineID_AT. You can: change the frequency, id, and channel range of an emission region. You can reject an emission segment; then you can write out your estimate of the best line identification as a replacement for the original LineID_AT BDP, which can be fed into LineCube_AT to cut line cubes. You can also use the force and reject buttons as input advise to a second run of LineID_AT.
The interaction mode with LineID Editor can only be used if your ADMIT file is created or opened from within your current CASA session. The editing mode requires that your ADMIT flow be present as an active python memory structure. The interactive edits that you make within LineID Editor are not saved to the flow so, at present, you cannot automatically recreated your final data products by re-running the flow from the scratch.
Tier 1 Database¶
The Tier 1 Database (DB) contains the transitions of molecules that if present, are expected to be a dominant emission peak in the spectrum. The allowed frequency/velocity ranges for these transitions are relaxed compared to those of others. In gneneral any peak detected within 30 km/s (galactic source) and 200 km/s (extragalactic source) of a Tier 1 rest frequency will be assigned the identification of that transition. Additionally, the identified peak is traced down to the cutoff level and any additional peaks found along the way are also labeled the Tier 1 transition. Tier 1 molecules are:
HFL indicates hyperfine lines, these transitions are treated specially in that only the strongest hyperfine line is searched for initially. If that line is found then the rest of the hyperfine components are searched for. There are currently 962 transitions in the DB (542 primary transitions and 420 hyperfine transitions).
You can query the DB directly through python as follows:
from admit.util.Tier1DB import Tier1DB # connect to the DB t1db = Tier1DB() # query for all primary transitions between 90.0 and 90.1 GHz t1db.searchtransitions(freq=[90.0, 90.1]) # get the results as LineData objects results = t1db.getall() # look for any with hyperfine transitions (hfnum > 0) and get them for line in results: if line.getkey("hfnum") > 0: t1db.searchhfs(line.getkey("hfnum")) hfsresults = t1db.getall()
ADMIT in Your Web Browser¶
ADMIT Data Products are most easily viewed from your favorite web browser utilizing the index.html file that is present within the input-cube-name.admit (default, or your-alias-name.admit) directory.
You can do this To do so, start up CASA and instantiate an ADMIT object of the output data:
CASA <1>: import admit CASA <2>: a = admit.Project('/path/to/input-cube-name.admit',dataserver=True)
This will open a new page in your default browser (or new browser window if none was open) and load the ADMIT products web page view of the specified directory. The page is divided into 4 separate tabs: Flow View, Form View, ADMIT Log, LineID Editor, and ADMIT Documentation.
- Flow View
- This view shows the tasks in the order in which they were executed. At the top is the directed acyclic graph of the entire ADMIT Flow. Each task has a bar giving the task name and arguments. If you click on the bar, that section will expand to show all the output from that task. Clicking on a thumbnail of an image will display a larger version of the image.
- Form View
- This view allows you to edit task parameters and re-run the ADMIT flow. Similar to Flow View, the tasks are show in execution order and clicking on each bar will expand to give an editable form of the task keywords. Once you are done editting keywords, click on Re-run ADMIT Flow” button at the bottom of the page. This will communicate your changes back to your CASA session and re-run the tasks that you changed (and any that depended on them).
- ADMIT Log
- This is the full log file of the ADMIT process/script that created your ADMIT data.
- LineID Editor
- This allows you do modify the results of LineID_AT. Currently in prototype stage.
- ADMIT Documentation
- Link to the on-line ADMIT documentation webpages.
ADMIT output for multiple projects can be loaded one at a time into separate browser pages. The browser pages do not interact.
For simply viewing the products without a CASA session, you
can enter the full directory path into an open browser
( as the url) or by using the ADMIT aopen.
aopen index.html or aopen sub-directory-name/index.html
However, note you do not get the Form or LineID Editor functionality with this mode. | http://admit.astro.umd.edu/userguideintro.html | CC-MAIN-2021-31 | refinedweb | 4,593 | 63.09 |
Google's "Dart" on the JVM
Google's "Dart" on the JVM
Join the DZone community and get the full member experience.Join For Free
Google's newest programming language can now be run on the JVM, thanks to the JDart project hosted on Google Code. Unveiled at the goto conference last week, the Dart language is seen by some to be suitable for Java developers who can't get into Javascript. The language is supposed to make it easy to create quick prototypes using structured code. Visual Basic for the web? We'll see.
The JDart project is in it's early stages, with only a few instructions translated. The JDart compiler generates jar files to run on any Java 7 VM. The author has provided a few examples so you can see what the compiler actually generates. Here's the simple Hello World output. First the Dart code:
main() { print("hello world"); }
Which gets compiled to the following:
public class test { public static void main(java.lang.String[]); Code: 0: invokedynamic #18, 0 // InvokeDynamic #0:__main__:()V 5: return public static java.lang.Object __main__(); Code: 0: ldc #21 // String hello world 2: invokedynamic #27, 0 // InvokeDynamic #1:print:(Ljava/lang/String;)V 7: aconst_null 8: areturn }
It's very early days for Dart, and it has a long way to go if it has any chance of toppling Javascript. But if you've been messing around with the languages, perhaps this project gives you another reason to consider moving to it for your web applications.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/run-your-dart-code-jvm | CC-MAIN-2019-43 | refinedweb | 278 | 61.97 |
public Pet(int argPetId, String argPetType) { this.petType = argPetType;......(1) petId = argPetId;...............(2) }
But would like to know why is it mention at (1) and not at (2)
Originally posted by Gavin Tranter: Its use at one is an attempt to remove ambiguaty, by saying that you are assigning argPetType to the field petType of this (the current object). I dont think it is really need in that example. In the example below, both field name and paremeter name are the same, so we need to tell Java that we wish to assign someValue to eh instance variable otherwise Java will just try to assign someValue to itself. public class X{
private int someValue;
public X(int someValue){
this.someValue = someValue;
}
public void setSomeValue(int someValue){
this.someValue = someValue;
}
} You might, in future, wish to put code tags around your code snippets, as this will perserve the codes formatting and make it easier to read. [ May 29, 2008: Message edited by: Gavin Tranter ] [ May 29, 2008: Message edited by: Gavin Tranter ] | http://www.coderanch.com/t/410560/java/java/keyword | CC-MAIN-2015-14 | refinedweb | 171 | 70.02 |
tightrope alternatives and similar packages
Based on the "Web" category.
Alternatively, view tightrope alternatives based on common mentions on social networks and blogs.
swagger-petstore10.0 8.4 tightrope VS swagger-petstoreswagger-codegen contains a template-driven engine to generate documentation, API clients and server stubs in different languages by parsing your OpenAPI / Swagger definition.
hakyll10.0 7.2 tightrope VS hakyllA static website compiler library in Haskell
servant10.0 8.6 tightrope VS servantMain repository for the servant libraries — DSL for describing, serving, querying, mocking, documenting web applications and more!
scotty10.0 3.3 tightrope VS scottyHaskell web framework inspired by Ruby's Sinatra, using WAI and Warp (Official Repository)
yesod-persistent10.0 8.3 tightrope VS yesod-persistentA RESTful Haskell web framework built on WAI.
haskell-bitmex-rest10.0 8.4 tightrope VS haskell-bitmex-restswagger-codegen contains a template-driven engine to generate documentation, API clients and server stubs in different languages by parsing your OpenAPI / Swagger definition.
postgrest10.0 9.5 tightrope VS postgrestREST API for any Postgres database
neuron9.9 4.8 tightrope VS neuronFuture-proof note-taking and publishing based on Zettelkasten (superseded by Emanote:)
markdown9.9 1.1 tightrope VS markdownConvert Markdown to HTML, with XSS protection
reroute9.9 4.3 tightrope VS rerouteAnother Haskell web framework for rapid development
aeson9.9 8.0 L1 tightrope VS aesonA fast Haskell JSON library
webify9.8 1.8 tightrope VS webifywebfont generator - converts ttf to woff, eot and svg
wreq-patchable9.8 0.0 tightrope VS wreq-patchableAn easy-to-use HTTP client library.
wreq9.8 0.0 tightrope VS wreqAn easy-to-use HTTP client library.
espial9.8 6.7 tightrope VS espialEspial is an open-source, web-based bookmarking server.
graphql-api9.8 0.0 tightrope VS graphql-apiWrite type-safe GraphQL services in Haskell
keter9.7 8.1 tightrope VS keterWeb app deployment manager
scalpel9.7 0.0 tightrope VS scalpelA high level web scraping library for Haskell.
airship9.7 6.1 tightrope VS airshipHelium + Webmachine = Airship. A toolkit for building declarative, RESTful web apps.
apecs-gloss9.7 3.6 tightrope VS apecs-glossa fast, extensible, type driven Haskell ECS framework for games
req9.7 4.9 tightrope VS reqAn HTTP client library
concur-core9.7 0.0 tightrope VS concur-coreAn unusual Web UI Framework for Haskell
stripe-haskell9.7 0.0 tightrope VS stripe-haskell:moneybag: Stripe API
servant-auth9.7 5.3 tightrope VS servant-authAuthentication combinators for servant
nixpkgs-update9.7 7.6 tightrope VS nixpkgs-updateUpdating nixpkgs packages since 2018
postgres-websockets9.7 4.7 tightrope VS postgres-websocketsPostgreSQL + Websockets
react-haskell9.7 0.0 tightrope VS react-haskellReact bindings for Haskell
scalpel-core9.7 0.0 tightrope VS scalpel-coreA high level web scraping library for Haskell.
stripe-http-streams9.7 0.0 tightrope VS stripe-http-streams:moneybag: Stripe API
stripe-core9.7 0.0 tightrope VS stripe-core:moneybag: Stripe API
haskell-kubernetes9.7 0.0 tightrope VS haskell-kubernetesHaskell bindings to the Kubernetes API (via swagger-codegen)
hamlet9.6 7.4 tightrope VS hamletHaml-like template files that are compile-time checked
lucid9.6 6.2 tightrope VS lucidClear to write, read and edit DSL for writing HTML
digestive-functors9.6 0.0 tightrope VS digestive-functorsA general way to consume input using applicative functors
HaTeX9.6 2.7 tightrope VS HaTeXThe Haskell LaTeX library.
tagsoup9.6 2.4 tightrope VS tagsoupHaskell library for parsing and extracting information from (possibly malformed) HTML/XML documents
yaml9.6 4.3 L2 tightrope VS yamlSupport for serialising Haskell to and from Yaml.
telegram-api9.6 0.0 tightrope VS telegram-apiTelegram Bot API for Haskell
hbro9.6 0.0 tightrope VS hbro[Unmaintained] A minimal web-browser written and configured in Haskell.
servant-elm9.6 0.0 tightrope VS servant-elmAutomatically derive Elm functions to query servant webservices
yesod-auth-oauth29.6 5.6 tightrope VS yesod-auth-oauth2OAuth2 authentication for yesod
jsaddle9.5 4.9 tightrope VS jsaddleJavaScript interface that works with GHCJS or GHC
kubernetes-client-coreHaskell client for the kubernetes API. A work in progress.
engine-io9.5 0.0 tightrope VS engine-ioA Haskell server implementation of the Engine.IO and Socket.IO (1.0) protocols
backprop9.5 0.0 tightrope VS backpropHeterogeneous automatic differentiation ("backpropagation") in Haskell
graphql9.5 0.0 tightrope VS graphqlHaskell GraphQL implementation
jsc9.5 4.9 tightrope VS jscJavaScript interface that works with GHCJS or GHC
keera-hails-reactive-htmldomKeera Hails: Haskell on Rails - Reactive Programming Framework for Interactive Haskell applications
ghcjs-dom9.4 0.0 tightrope VS ghcjs-domMake Document Object Model (DOM) apps that run in any browser and natively using WebKitGtk
android-lint-summaryPrettier display of Android Lint issues
Automate your Pull Request with Mergify
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of tightrope or a related project?
README
Tightrope is a library that makes writing Slash commands easier.
You give it a function that turns a
Command into a
Text response, and it gives you a WAI application that you can hand to Warp (or any other WAI-compatible server).
You need to give it an incoming-webhook token in order to use the
say command. You can pass in the empty string if you're writing a bot that only needs to communicate with the user who initiated the slash command.
Usage
Here's a simple echo bot:
import Network.Tightrope import Network.Wai.Handler.Warp (run) main :: IO () main = do -- we don't want to keep our token in source control token <- readFile "token" -- change this, of course let url = "" port = 4000 echobot = bot (Account token url) echo putStrLn $ "Running on port " ++ show port Warp.run port echobot echo :: Command -> Slack Text echo command = do -- `say` will broadcast a message to everyone in the specified room, -- or to one person (if you you `say` to a private room). -- You can `say` as many times as you want, to whatever room you like. -- But here we're just gonna post a message to the room the request -- came in on. let msg = message (Icon "ghost") "Echobot" (command ^. text) say msg (command ^. source) return "echoing, be patient..." -- Only the user who typed "/echo" will see the return value, and -- it isn't persistent. It'll disappear the next time they refresh -- slack. Use `say` if you want something persistent -- when given -- a `Private` Room, `say` will send messages from slackbot to the -- specified user.
Slack is a
MonadIO, so you can use
liftIO to do any IO operation in the process of handling the request.
What? A monad??
Yes. Here there be monads. Don't be afraid. Monads can smell fear. We can get through this together. I'm there for you.
But what if I don't know Haskell
That's okay! I don't either. But I've been having a lot of fun making bots anyway, and I've picked up a little bit of Haskell along the way. | https://haskell.libhunt.com/tightrope-alternatives | CC-MAIN-2022-33 | refinedweb | 1,182 | 58.99 |
Cosmos is an operating system "construction kit", built from the ground up around the IL2CPU compiler in C# and our home-brewed language called X#.
editms-dos program
occurred System.IO.FileLoadException: Could not load file or assembly 'Microsoft.CSharp, Version=4.0.4.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.
Cosmos.Compiler.Assembler.X86namespace somewhere that lets you do just that. is it removed now? what's the alternative (besides x#)? using latest devkit
XSharpnamespace in it, but i can't access it (example:). what do i need to do to use it? do i need to include xsharp's project in references somehow? | https://gitter.im/CosmosOS/Cosmos?source=orgpage | CC-MAIN-2019-51 | refinedweb | 103 | 54.29 |
In the previous 6 articles we have illustrated the usage of Google and AWS NLP APIs. We have also experimented the spacy library to extract entities and nouns from different documents. We have shown how to improve the model using pattern matching function from spaCy ( ) . Finally we have also trained the model with new entities. We have demonstrated how to match CV to job profiles.
Let us now dig a bit deeper into some linguistic features of Spacy and how this can used in improving virtual conversations. The same can be used for mail processing, more advanced chatbots or virtual assistants. It can also be used as underlying technique for voice assistants.
Let us say we are an online shop for personal computers and we would like to allow our customers to send us requests to order computers. This can come through a site chatbot or by mail.
Let us assume we receive following input from a potential client:
Hello
I would like to order a notebook with 16GB and 256 GB disk, I would like to spend less than 1000 Francs, what would be the options
Thanks a lot
Patrick
As we have shown in earlier articles, let us import required Python libraries and process the text through the Spacy pipeline. Nothing new, but good to repeat:
# import required libraries
import spacy from spacy.pipeline import EntityRuler from spacy.matcher import Matcher,PhraseMatcher from spacy.symbols import nsubj, VERB, dobj, NOUN, root, xcomp from spacy import displacy from spacy.matcher import Matcher # install the large pre-trained English model driectly from spacy !python -m spacy download en_core_web_lg # process the input text through the standard spacy model docMail=nlp(text)
Once this is all done, let us start with named entity recognition as usual.
# print text entities detected for ent in docMail.ents :
print(ent.text, ent.label_,) 16GB QUANTITY 256 GB QUANTITY less than 1000 Francs MONEY Patrick PERSON
We can also visualize the result directly in the text with highlighted entities.
The default model does not seem to detect notebook and disk as entities, but identifies the sender as a person and identifies the RAM and disk size as quantities. This is a good start, but still far away from a practical solution. So, let us add some domain specific entities that will help us later on.
# add domain specific entities and add to the pipelinepatterns = [{"label": "COMPUTER", "pattern": [{"lower": "notebook"}]},{"label": "CURRENCY", "pattern": [{"lower": "francs"}]},
{"label": "PART", "pattern": [{"lower": "disk"}]}]ruler = EntityRuler(nlp, patterns=patterns,overwrite_ents=True)nlp.add_pipe(ruler)
Now the results look a bit better
#process the mail again with added entities
docMail=nlp(text) for ents in docMail.ents:
# Print the entity text and its label
print(ents.text, ents.label_,) notebook COMPUTER
16GB QUANTITY
256 GB QUANTITY
disk PART
Francs CURRENCY
Sometimes it is not enough to match only entities, for example we have defined the RAM size as 16 GB. So let us see how to detect the memory size automatically
matcher = PhraseMatcher(nlp.vocab) terms = ["16 GB","256 GB"] # Only run nlp.make_doc to speed things up
patterns = [nlp.make_doc(t) for t in terms] matcher.add("MEMORY", None, *patterns) doc = nlp(text) matches = matcher(doc) for match_id, start, end in matches: span = doc[start:end] print(span.text) 16GB
256 GB
Quite cool, it detected the patterns and matched the text related to memory size. Unfortunately, the issue is that we do not know to what it refers to, so we need to start a different kind of analysis.
One of the key features of Spacy is its linguistic and predictive features. Indeed, Spacy is able to make a p rediction of which tag or label most likely applies in a specific context.
Let us start with displaying the result of part of speech tagging and dependency analysis. As we can see below, the code is pretty simple
displacy.render(docMail, style="dep", minify=True)
The result is quite impressive, it shows all predicted tags for each word and the dependency tree with the associated dependency labels. For example ‘I’ is a pronoun and is subject to the verb ‘like’.
Let us detect the numerical modifiers, as we will need them to identify the memory size required
for token in docMail:
if token.dep_ == 'nummod':
print(f"Numerical modifier: {token.text} --> object: {token.head}") Numerical modifier: 16 --> object: GB
Numerical modifier: 256 --> object: disk
Numerical modifier: 1000 --> object: Francs
This is again quite cool, we can associate quantities to different words in the text.
Spacy provides all the required tagging to find the action verbs, we want to know if the customer wants to order something or is just interested by some information for example. Let us iterate through all tokens in the text and search for an open clausal complement ( refer to for all possible dependency tags )
verbs = set()
for possible_verbs in docMail:
if possible_verbs.dep == xcomp and possible_verbs.head.pos == VERB :
verbs.add(possible_verbs) print(verbs) {spend, order}
We have now identified ‘spend’ and ‘order’ as possible actions in the text. We can also do the same to find objects or items in the text that are the referred to by the client.
Let us find possible items in the text using the dependency tag ‘dobj’ for direct objects of a verb.
items = set()
for possible_item in docMail:
if possible_item.dep == dobj and possible_item.head.pos == VERB:
items.add(possible_item) print(items) {Francs, notebook}
‘Francs’ and ‘notebook’ have been found. Now we can think of using word similarities to find what kind of item the client is referring to. We could also use other techniques, but let us try a simple way for now. We will compare similarities between identified obejcts and the word ‘laptop’. The word ‘notebook’ is much closer to ‘laptop’ than Francs.
orderobject=nlp("laptop")
for sub in items:
print(sub.similarity(orderobject)) 0.0015887124852857469
0.8021939809276627
Finally putting it together, we can think of automatically detecting the required action verb using a heuristic. Let us assume that if the similarity is more than 80%, then we have found the right verb. We then search for the direct object of the similar verb. That could look like this
orderword=nlp("order")
for verb in verbs:
if (verb.similarity(orderword)) >=0.8:
for v in verb.children:
if v.dep==dobj:
print(v.text) notebook
For this experiment we have used the following
- Google collab executing a Python 3 notebook
- Python 3.6.9
- Spacy 2.2.4
(Visited 2 times, 2 visits today) | http://technewsdestination.com/2020/03/24/natural-language-processing-smarter-conversations-using-spacy/ | CC-MAIN-2021-31 | refinedweb | 1,088 | 56.45 |
IPython magics to render cells as templates in a variety of different templating languages.
Project description
Template IPython magics 🎩
This package provides simple IPython magics to render cells as templates in a variety of different templating languages. It currently supports Mako and Jinja2.
To use it, first install the package from PyPI, along with at least one of the supported templating languages. E.g. using
pipenv (everyone should use
pipenv):
pipenv install template-ipython-magic jinja2 mako
In your notebook, load the
template_magic module:
%load_ext template_magic
Note that the available templating languages are detected at the point of loading the extension, and each magic only enabled if the appropriate package is found. If neither Jinja2 or Mako are installed, there will be no magics!
Now you can use
%jinja as a line magic within any code block, with access to all variables in scope. The result is formatted as Markdown:
import sys %jinja Hello from **Jinja** on Python {{sys.version_info.major}}.{{sys.version_info.minor}}! 🐍
Hello from Jinja on Python 3.8! 🐍
If you prefer,
%mako is also available:
import datetime now = datetime.datetime.now() %mako Hello from *Mako* at ${now.strftime('%I:%M %p')}... ⏰
Hello from Mako at 08:39 PM... ⏰
Cell magics are also available for each language, which lets you render the entire cell as a template for convenient report generation:
%%jinja {%- for x in ['spam'] * 7 + ['eggs', 'spam'] %} - {% if loop.last %}and {% endif %}{{x}}{%if not loop.last %},{% endif %} {%- endfor %}
- spam,
- spam,
- spam,
- spam,
- spam,
- spam,
- spam,
- eggs,
- and spam
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/template-ipython-magic/ | CC-MAIN-2020-05 | refinedweb | 280 | 63.39 |
vsnprintf()
Write formatted output to a character array, up to a maximum number of characters (varargs)
Synopsis:
#include <stdarg.h> #include <stdio.h> int vsnprintf( char* buf, size_t count, const char* format, va_list arg ); vsnprintf() function formats data under control of the format control string and stores the result in buf. The maximum number of characters to store, including a terminating null character, is specified by count.
The vsnprintf() function is a "varargs" version of snprintf().
Returns:
The number of characters that would have been written into the array, not counting the terminating null character, had count been large enough. It does this even if count is zero; in this case buf can be NULL.
If an error occurred, vsnprintf() returns a negative value and sets errno .
Examples:
Use vsnprintf() in a general error message routine:
#include <stdio.h> #include <stdarg.h> #include <string.h> char msgbuf[80]; char *fmtmsg( char *format, ... ) { va_list arglist; va_start( arglist, format ); strcpy( msgbuf, "Error: " ); vsnprintf( &msgbuf[7], 80-7, format, arglist ); va_end( arglist ); return( msgbuf ); } int main( void ) { char *msg; msg = fmtmsg( "%s %d %s", "Failed", 100, "times" ); printf( "%s\n", msg ); return 0; }
Classification:
Caveats:
It's safe to call vsnprintf() in a signal handler if the data isn't floating point.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/v/vsnprintf.html | CC-MAIN-2020-05 | refinedweb | 230 | 57.77 |
Prabhu Ramachandran <prabhu at aero.iitm.ernet.in> wrote: > Hi, > > Thanks for the vector_indexing_suite! It is really nice and seems to > work wonderfully! However, I just ran into one problem with it. I > tried wrapping a std::vector<bool> and things don't compile because > the vector_indexing_suite::get_item(...) returns a reference to the > bool which the compiler has problems with. Here is the trivial test > program and error message that I get with g++-2.95.4 and Boost from > CVS. > > // -------------------------------------------------- > #include <boost/python.hpp> > #include <boost/python/suite/indexing/vector_indexing_suite.hpp> > > using namespace boost::python; > > BOOST_PYTHON_MODULE(vec_index) > { > class_< std::vector<bool> > ("VectorBool") > .def(vector_indexing_suite< std::vector<bool>, true> ()); > } > // -------------------------------------------------- [snip errors] > I made a local copy of the vector_indexing_suite and changed this: > > static data_type& > get_item(Container& container, index_type i) > to: > > static data_type > get_item(Container& container, index_type i) > > and my test program compiles cleanly. This is obviously not a fix but > just to let you know that this is the only problem. Thanks for trying it out, Prabhu. I'll go and find a solution to this problem. I think, off the top of my head, that get_item should be: static typename mpl::if_< is_class<data_type> , data_type& , data_type >::type get_item(Container& container, index_type i) Regards, -- Joel de Guzman | https://mail.python.org/pipermail/cplusplus-sig/2003-August/004877.html | CC-MAIN-2016-30 | refinedweb | 207 | 51.04 |
findChessboardCornersSB irregular behavior
I am testing findChessboardCornersSB as an alternative to **findChessboardCorners', mainly to be able to use a calibration target which may overlap the image boundaries. While it works like expected in many cases, I always encounter strange detection misses and spurious detections:
and:
even if I ignore the missed points in between, I would have no idea how to associate the detected points with their world coordinates.
Am I doing something wrong, or is findChessboardCornersSB unstable?
Here's the code:
import cv2 import numpy as np import os import glob # Defining the dimensions of checkerboard CHECKERBOARD = (5, 5) criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 500, 0.0001) # Creating vector to store vectors of 3D points for each checkerboard image objpoints = [] # Creating vector to store vectors of 2D points for each checkerboard image imgpoints = [] colortable = [(255, 100, 100), (100, 255, 100), (100, 100, 255)] #('./square/*.png') for fname in images: img = cv2.imread(fname) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ret, corners, meta = cv2.findChessboardCornersSBWithMeta(img, CHECKERBOARD, cv2.CALIB_CB_LARGER ) if ret == True: objpoints.append(objp) imgpoints.append(corners) # Draw and display the corners cv2.convertScaleAbs(img, img, 0.4, 0.0) for corner, m in zip(corners, meta.ravel()): color = colortable[m] cv2.drawMarker( img, (corner[0][0], corner[0][1]), color, cv2.MARKER_CROSS, 30, 3) else: print("not found") cv2.imshow('img',cv2.resize(img, (0, 0), None, 0.5, 0.5)) if (cv2.waitKey(0) == 27): break; cv2.destroyAllWindows() | https://answers.opencv.org/question/235415/findchessboardcornerssb-irregular-behavior/ | CC-MAIN-2020-45 | refinedweb | 241 | 52.87 |
Core in inserted into a document
or has its attributes set. [XML] [XMLNS] [DOMCORE]
[DOMEVENTS]
Certain algorithms in this specification spoon-feed the parser characters one string at a time. In such cases, the XML parser must act as it would have if faced with a single string consisting of the concatenation of all those characters.
When an XML parser creates a
script
element, it must be marked as being "parser-inserted".
If the parser was originally created for the XML fragment
parsing algorithm, then the element must be marked as
"already executed" also. When the element's end tag is
parsed, the user agent must run the
script element. If this causes
there to be a pending external script, then the user
agent must pause until that script has completed
Since the
document.write() API is not
available for XML documents, much of the complexity in
the HTML parser is not needed in the XML
parser.
When an XML parser reaches the end of its input, it must stop parsing, following the same rules as the HTML parser.
The XML fragment serialization algorithm for a
Document or
Element node either returns a
fragment of XML that represents that node or raisesization of all of that node's child nodes, in tree order. User agents may adjust prefixes and namespace declarations in the serialization (and indeed might be forced to do so in some cases to obtain namespace-well-formed XML).
For
Elements, if any of the elements in the
serialization are in no namespace, the default namespace in scope
for those elements must be explicitly declared as the empty
string. (This doesn't
apply in the
Document case.) [XML] [XMLNS]
If any of the following error cases are found in the DOM subtree
being serialized, then the algorithm raises an
INVALID_STATE_ERR ("'").
Attrnode,
Textnode,
CDATASectionnode,
Commentnode, or
ProcessingInstructionnode whose data contains characters that are not matched by the XML
Charproduction. [XML]
CDATASectionnode whose data contains the string "
]]>".izable. The DOM enforces all the other XML constraints; for
example, trying to set an attribute with a name that contains an
equals sign (=) will raised an
INVALID_CHARACTER_ERR
exception.
The XML fragment parsing algorithm for either returns
a
Document or raises a
SYNTAX_ERR
exception. Given a string input and an optional
context element context, the algorithm is as
follows:
Create a new XML parser.
If there is a context element,.
A namespace prefix is in scope if the DOM Core
lookupNamespaceURI() method on the element would
return a non-null value for that prefix.
The default namespace is the namespace for which the DOM Core
isDefaultNamespace() method on the element
would return true.. | http://www.w3.org/TR/2009/WD-html5-20090212/the-xhtml-syntax.html | CC-MAIN-2016-18 | refinedweb | 440 | 51.58 |
In last month’s column, I showed how to create a web site testing tool based on Perl’s own testing framework and the WWW::Mechanize module. For reference, I’ve reproduced the code developed in last month’s article in Listing One. The test code verifies the proper operation of a web site, in this case,.
Listing One: A testing program to diagnose site health
01 #!/usr/bin/perl
02 use Test::More tests => 9;
03 use WWW::Mechanize;
04 isa_ok(my $a = WWW::Mechanize->new, "WWW::Mechanize");
05
06 $a->timeout(1);
07 $a->get("");
08 is($a->status, 200, "fetched /");
09 like($a->title, qr/The CPAN Search Site/, "/ title matches");
10
11 SKIP: {
12 ok($a->follow_link( text => 'FAQ' ), "follow FAQ link")
13 or skip "missing FAQ link", 2;
14
15 SKIP: {
16 is($a->status, 200, "fetched FAQ page")
17 or skip "bad FAQ fetch", 1;
18 like($a->content, qr/Frequently Asked Questions/, "FAQ content matches");
19 $a->back;
20 }
21 }
22
23 SKIP: {
24 ok($a->form_number(1), "select query form")
25 or skip "cannot select query form", 2;
26 $a->set_fields(query => "PETDANCE", mode => 'author');
27 $a->click();
28
29 SKIP: {
30 is($a->status, 200, "query returned good for 'author'")
31 or skip "missing author page", 1;
32 like($a->content, qr/Andy Lester/, "found Andy Lester");
33 $a->back;
34 }
35 }
While invoking this program directly certainly gives an immediate status, it’d be more useful to run this program automatically and frequently. For example, I might invoke this program every five minutes from cron, and mail the results to my cell phone or pager. However, because there’s a lot of output even when everything is OK, I’d get a lot of useless interruptions just to hear that “Everything’s OK.”
Running the program under the standard Test::Harness module helps a bit. Test::Harness interprets the ok and not ok values appropriately, providing a nice summary at the end, resulting in output something like:
01-search-cpan….ok
All tests successful.
Files=1, Tests=9
However, it’s still hard to reduce the text to pinpoint the errors, or know whether things were successful or were a partial or total failure. Also, I hate being told the same thing over and over again when a failure occurs, and also hate not being told when something has cleared up. And, the text doesn’t squish well into a nice SMS message for my phone or pager.
So, let’s take this one step further. The Test::Harness module inherits its core functionality from Test::Harness: :Straps, which is still in development. We can use a Test::Harness::Straps object to invoke the test script and interpret its output to determine which tests failed programmatically. If we have that, we can tailor the output.
One strategy is to test every five minutes (from cron), but send a page only when things are broken, and then only once every thirty minutes. This message can be cut down to precisely show the failing tests and perhaps the associated error output and exit status of the test program. Once the error clears up, the program can page on the next round with a single “All clear,” letting you turn back around and head home instead of finishing your midnight trek into the office to fix the problem.
Of course, there’s more going on than just “All OK” and “Something’s broken.” Each different combination of “Something’s broken” may be worthy of paging. Let’s ensure that only one page per unique combination of events gets sent, and that only one all clear signal is sent when everything clears up.
Difficult? Not at all, especially when you use Cache: :FileCache as a lightweight time-oriented database. The resulting cron-job program is shown in Listing Two.
LISTING TWO: Send failure and recovery reports to an administrator
01 #!/usr/bin/perl -w
02 use strict; #turn on warnings
03 $|++; #disable buffering of stdout
04
05 ## CONFIG
06
07 my $ALL_CLEAR_INTERVAL = "never"; # how often to repeat "all clear" signal
08 my $TEST_FAIL_INTERVAL = "30 minutes"; # how often to repeat test failed
09
10 sub SEND_REPORT { # what do I do with a report?
11 ## open STDOUT, "|sendmail 5035551212\@vtext.com" or die "sendmail: $!";
12 @_ = "ALL CLEAR\n" unless @_;
13 print @_;
14 }
15
16 ## END CONFIG
17
18 use File::Temp qw(tempfile); # core
19 use File::Basename qw(dirname); # core
20 use Test::Harness::Straps (); # core
21 use Cache::FileCache (); # CPAN
22
23 my $errors = tempfile();
24 open SAVE_STDERR, ">&STDERR" or warn "dup 2 to SAVE_STDERR: $!";
25
26 my $cache = Cache::FileCache->new({namespace => 'healthcheck_reporter'});
27
28 chdir dirname($0) or warn "Cannot chdir to dirname of $0: $!";
29
30 my $strap = Test::Harness::Straps->new;
31
32 my @failed;
33
34 for my $test_file (glob "*.t t/*.t") {
35 my %results;
36 {
37 open STDERR, ">&", $errors or print "dup $errors to STDERR: $!";
38 %results = $strap->analyze_file($test_file);
39 open STDERR, ">&", \*SAVE_STDERR or print "dup SAVE_STDERR TO STDERR: $!";
40 };
41 push @failed, map {
42 $results{details}[$_]{ok} ? () :
43 ["$test_file:".($_+1) => $results{details}[$_]{name}]
44 } 0..$#{$results{details}};
45 push @failed, ["$test_file:wait" => $results{wait}] if $results{wait};
46 }
47
48 if (-s $errors) {
49 seek $errors, 0, 0;
50 local $/;
51 push @failed, ["errors" => <$errors>];
52 }
53
54 my $key = join " ", map $_->[0], @failed;
55
56 if ($key) { # bad report
57 $cache->remove(""); # blow away good report stamp
58 if ($cache->get($key)) { # seen this recently?
59 ## print "ignoring duplicate report for $key\n";
60 } else {
61 $cache->set($key, 1, $TEST_FAIL_INTERVAL);
62
63 my @report;
64
65 for (@failed) {
66 my ($key, $value) = @$_;
67 my @values = split /\n/, $value;
68 @values = ("") unless @values; # ensure at least one line
69 push @report, "$key = $_\n" for @values;
70 }
71
72 SEND_REPORT(@report);
73 }
74 } else { # good report
75 if ($cache->get("")) { # already said good?
76 ## print "ignoring good report\n";
77 } else {
78 $cache->clear(); # all is forgiven
79 $cache->set("", 1, $ALL_CLEAR_INTERVAL);
80
81 SEND_REPORT(); # empty means good report
82 }
83 }
Lines 5 to 16 define the configuration section — those things I’m likely to change and therefore want to locate quickly. The two time constants are defined in Cache: :Cache-compatible units, which understands things like 15 seconds or 4 hours. (See the documentation for details.) $ALL_CLEAR_INTERVAL defines how often a repeat page saying “Everything’s OK” gets sent. If you set this to 1 day, you’ll get a single page a day saying everything is OK as a nice meta-check that your health-check is OK. By setting it to never, you get one page when the monitoring starts the very first time, but never again unless a failure’s been fixed. Similarly, $TEST_ FAIL_INTERVAL defines how often a page is sent for an identical combination of failures.
Lines 10 to 14 define the callback subroutine of what to do when an event of significance occurs. If this subroutine is called with no parameters, then it’s the “All clear” signal. Otherwise, it’s the current error text. For debugging, I simply display this text to stdout, but by reopening standard output to a pipe to sendmail, I could just as easily send this to my cell phone or pager, presuming there’s an email gateway.
Lines 18 to 21 pull in the four modules needed by the rest of the program. Three of the four modules come with all recent Perl distributions, although you might have to upgrade your Test::Harness from the CPAN if you get an error because of Test::Harness::Straps. The fourth is Cache::File Cache, part of the Cache::Cache distribution.
Lines 23 and 24 create a temporary file and associated filehandle to let me capture the stderr output from the various test programs being run (such as the one in Listing One). The error output is usually diagnostic explanations, and often elaborates on the reasons for failure. The code also saves the current stderr so that it can be reset it after every child process, so that die and warn messages end up in the right place.
Line 26 sets up the Cache::FileCache object, creating persistence between invocations.
Line 28 ensures that the current directory is the same as the running script. This permits the paging program to be called as an absolute path in cron without having to manage the location of the test scripts, as long as they’re all nearby.
Line 30 creates the Test::Harness::Straps object to interpret the result of one or more testing program invocations.
Line 32 collects the failure information that eventually decides what to report. The array will contain arrayrefs, where each referenced array has two elements. The first element is an identifying (hopefully unique) key of a failing condition, and the second is its associated text. This is explained more later.
Lines 34 to 46 loop over every test file in either the current directory or a subdirectory named t, similar to the normal Test: :Harness module-installation operation. Each of these tests is run separately, although the results are gathered for a single page. Instead of limiting the location of these *.t files to the current directory and one subdirectory, I might also want to use File: :Find or a configuration file to define which tests are run. Line 35 defines %results, having the same meaning as the Test: :Harness::Straps documentation gives to the variable.
Lines 36 to 40 run a specific test file, using the Test: :Harness::Straps object. Because you want to capture the STDERR output from each invocation, you must first redirect stderr into the temporary file, then call the testing harness, then restore it back. It’s a bit messy, but necessary. Perhaps future versions of Test::Harness::Straps will provide a hook to do this directly.
Once you have the results, there are two things to ask: first, did any of the tests fail? And second, did the test child exit in some unusual manner?
To see if any of the tests failed, look at $results{details}, which references an array of individual tests and their results. Within each element of the array, the ok element of the hashref is true if the test succeeded. If ok is true, simply ignore the test. If it’s false, add a new arrayref that contains an identification name for the test (offset by one because element zero of the array is test one) and the name the test gives itself (usually the text after the comment mark) to @failed. This is all handled nicely by the map in lines 41 to 44.
If the child exited badly, add another element with the wait status to @failed in line 45.
Lines 48 to 52 look at the standard error output that’s accumulated from running all of the tests. If any output exists, it’s gathered into another @failed entry, keyed by errors.
When the program hits line 54, all of the invidivual tests have run, possibly from many different test programs, and the results are in @failed. Next, the code creates a “current problems key” in $key, resulting from joining all of the error tags into a space separated list. If this string is empty, everything went OK, but otherwise you end up with a list like “health.t:4 health.t:wait errors” showing that test 4 failed, the wait status was bad, and that there was some text on stderr from one or more of the children.
Based on this error key string, you can now decide whether to page or not.
Line 56 distinguishes between “Everything’s OK,” and “Something’s wrong.”
If something is wrong, the code in lines 57 to 73 runs. First, remove any marker for “Everything’s OK” in the cache. This ensures that the next time everything is OK, a page is sent that says so.
Line 58 determines if a page’s been sent recently for this particular combination of errors. If so, the value returns true from the cache, and nothing further happens this invocation. (I’ve left the commented-out debugging print at line 59 so you can see where this happens.) Otherwise, it’s time to send a page. First, in line 61, ensure that this particular page isn’t duplicated within the $TEST_FAIL_INTERVAL time window.
Line 63 defines a holder for the report. Lines 65 to 70 process the @failed array, extracting out each key/value pair, and then prepending each line of the value with the key for careful labeling. Even if the value is empty, at least one line containing the key is generated.
Line 72 passes this report list into the SEND_REPORT callback, defined at the top of the program. This sends the appropriate report with just the broken pieces of the collective tests.
Lines 75 to 82 deal with an “Everything’s OK” run. First, if there’s already been an “All OK” signal sent recently enough, there’s nothing to do (again, noted in a commented-out debugging print in line 76). Otherwise, throw away all of the recently seen failure tags in line 78 by clearing out the entire cache, and set a flag to prevent an additional “Everything’s OK” message until the $ALL_CLEAR_INTERVAL has passed in line 79.
Line 81 passes an empty list to SEND_REPORT, a signal to the program that it’s time to send the all-clear message.
Although this simple test reporting tool doesn’t have a lot of fancy features, it illustrates how the basics are accomplished and how reporting can be kept to the essentials. If you want more, there’s larger, more complex, and even commercial solutions to notify you when things go wrong. Until next time, enjoy! | http://www.linux-mag.com/id/1534/ | CC-MAIN-2018-30 | refinedweb | 2,317 | 68.2 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
Nice discussion of the problems associated with the RPC model, which abstracts the network making remote calls look like local calls, even though they exhibit different types of failure.
Web services, JAX, and Cw are also mentioned.
Related links: here.
A lot of the article discusses the difficulty of translating a language's native structures to and from XML. He calls this Object/XML mapping, reminding us of the problems with Object/Relational mapping.
However, I'm more interested in this quote from the article:
What seems like a good, simple idea on the surface — hiding networks and messages behind a more familiar application development idiom — often causes far more harm than good. Worse still is that it’s harm that, even 30 years later, we’re still learning about — usually, the hard way.
Does anyone have any more information about the harm of making remote procedure calls look like local ones? The abstraction seems useful. I guess the question is how it should deal with distribution problems.
Are people moving away from RPCs and exposing more of the network layer? Are there any alternative abstractions that are worth looking at? e.g. message passing.
The XMLHTTPRequest/ActiveX.XMLHTTP model in browsers (what they call Ajax now) is actually pretty good: you have to deal with asynchrony, idempotence (GET vs POST) and response codes, and it's _still_ very simple to program against.
However, it's rather coarsely grained (chunks of text and XML are the basic unit) and one-way (browser always initiates).
Check out A Note on Distributed Computing. I first read this ten years ago and it changed the way I look at things.
When I first heard of RPC (back when I was a kid), the first reaction was: "wow, so my program counter goes over to that other machine, while my machine sits idle? how stupid!"
When I first saw the idea of passing serialized objects over HTTP (SOAP and the like), my first reaction was: "i've seen this before, it was called CORBA, and it failed".
This is just common sense vs. the love of abstraction, no?
Now it uses XML! ;-)
;-)
But like some annoying phoenix, bad but simple ideas will always return.
I agree about the common sense, but not about the love of abstraction (naturally, since I love abstraction).
If you agree with the criticism regarding RPC, the conclusion to draw is that the RPC abstraction is the wrong abstraction. It is way too simple an abstraction for what you need. So you need a better abstraction (e.g., async queues or buffers, blackborads, tuple spaces or whatever). The failure of one specific abstraction, won't make us abandon our love for abstractions, just like the fact that we love abstraction doesn't compel us to like any specific abstraction (such as RPC).
Eric Raymond on RPC (scroll down to about the middle)
He takes an "ecological" rather than "mathematical" view:
one of the functions of interfaces is as choke points that prevent the implementation details of modules from leaking into each other
...
the RPC model tends to encourage programmers to treat network transactions as cost-free
RPC seems to encourage the production of large, baroque, over-engineered systems with obfuscated interfaces, high global complexity, and serious version-skew and reliability problems
Of course, this is all rather unverifiable...
What successful systems are based on RPC?
From the top of my head, I only see NFS and SMB, with NIS and it's derivative barely registering...
There's got to be others, no? If not, that's rather few successes for RPC.
RPC doesn't scale particularly well. OTOH, RPC has long been used for many simple (and app-specific) client-server applications.
wow, so my program counter goes over to that other machine, while my machine sits idle? how stupid!
Your problem is you are thinking too simple. That is you are assuming your Foobar1000 is running a function on someone else's Foobar1000 while your's sits idle. In that case, RPC is stupid. Consider the following situations though:
The remote computer is some super computer with much better abilities than your desktop, and the function is something that needs a super computer. Eg. Multiply this 100x100 matrix by this, something that would slow your computer down greatly (particularly when you are doing this in a loop), but is trivial to the other machine.
You program has forked just before calling this RPC, the other half of the program is continuing on, chewing up 100% of the local CPU, and the remote function does the other half of the problem, chewing up 100% of the remote CPU. Basically this is a variation of MPI.
The remote computer has some resource that the local doesn't. RPC is an abstraction not for CPU, but for some other hardware. NFS is designed like this (though implementations universally write their own RPC to save the overhead, the spec assumes this abstraction), and the X windows system is a form of this, but using the X client libraries as the abstraction to the X server elsewhere.
If your local computer can do everything the remote can, then you are right, RPC is stupid. If your local computer cannot, either because it is overloaded, slow, or lacks some file/device, then you are wrong.
In the scenario you outlined, all you've really done is implement asynchronous messenging over RPC. What I think Vladimir was trying to get across is that RPC is a poor abstraction. Even if you fork or spawn another thread to call the remote procedure, you're still left with some thread/process sitting there doing nothing but waiting and wasting resources. From this perspective, RPC really is stupid.
Not really, because RPC is just an abstraction for messaging. (which could be synchronous or asynchronous. I gave examples of both)
When working on a white board RPC is great, because on a white board it is okay to say "Some magic happens here", while ignoring all the errors. On real computers though there are far too many errors that you need to handle, so you cannot safely use RPC for anything that you wish to set and forget.
Vladimir seemed to be saying that RPC as a whole was bad. This is not the case, so long as you look as RPC as a simple abstraction for messaging. For trivial problems (that is something where you die on error, and restart by hand) RPC is an easy way to get your program running quickly, without having to write a protocol. SOAP/XML is much the same, an easy way around writing your own protocol. (Though writing the XML may be harder than writing your own protocol)
...with XML-RPC, SOAP, JAX and all of these technologies. Fascinated in the sense that watching train wrecks in slow motion is fun.
They treat what is essentially building an arbitrarily complex network protocal as a data representation problem. "Hm," they seem to say, "I need to get something done that involves moving a blob of data from where it is to where it needs to be, without deadlocking and ensuring that I'll actually make forward progress. I know, I'll pack the data in XML! That'll fix everything!" "impedence mismatch" problems in the world only expands the capacity to 12.5 pounds.
A fun experiment you can do at home: go ask your JAX guru what the phrase "byzantine failure" means.
This is worth keeping in mind. I think RPC is a useful abstraction in these cases, since it allows you to solve the simple problems without knowing what the phrase "byzantine failure" means.
Obviously, if you need to design robust distributed systems, you better know what it means, but then it's really not the responsibility of a programming construct to make sure you know this sort of thing...
If you want to look at it as an abstraction, it is not a good one in the sense that it is fragile. It simplifies solutions to some problems, sure, but not others. Also, there are no big warnings signs when you go off into the deep end. ("idl2c foo.idl -> "Warning: this looks like you have more than one server. Do you know what you are doing?")
As an abstraction, RPC abstracts the wrong things; it just covers the simplest part of network communications and hides the other issues.
I guess you haven't read my comment above about abstraction.
and it doesn't matter if you're talking about RPC or CORBA or RMI or SOAP or XMLRPC or what have you, is that you are in effect creating a wire protocol, not unlike SMTP. The fact that a lot effort is being put into making the wire protocol look like just another function call doesn't change this fundamental truth.
The problem with wire protocols is that the software version of one side can be signifigantly different from the software version on the other side. Which means either you need a carefully defined protocol where new software can still talk to old software and vice versa, or you have a major configuration issue, and have to make sure that when one half is upgraded, the other half is upgraded as well.
The easiest way to make a wire protocol that isn't vulnerable to change is to a) design the protocol seperate from the implementation (see SMTP), and b) abstract as much of the implementation away as possible. But the whole point of RPC etc. is to encourage you to not view the wire protocol as seperate from the code, and to reveal more of the implementation than strictly necessary.
This isn't to say they are totally worthless- especially not if they're used wisely. And there are situations where they are the cat's pajamas. It's just that they aren't the magic elixir they keep being sold as (on about a seven year cycle, it appears).
RPC is fine, under certain conditions: (1) you have a tolerable RPC library, (2) you're willing to adapt to the realities of the network, and (3) your clients and servers are loosely coupled.
These days, the typical RPC user is hacking on an in-house script that needs to talk to a simple server process. The server may be local or across the backbone.
And modern RPC code is certainly a lot less offensive than CORBA:
import xmlrpclib
server = xmlrpclib.Server("")
server.submit_record({name: "Joe", phone: "555-1212"})
Usually, somewhere around version 1.5 of the server, the author discovers several things: To reduce latency, batch requests. To achieve decent reliability, make server functions idempotent, and periodically retry failed requests. To support future expansion, use keyword arguments or Perl-style hash tables whenever possible.
For many applications, the alernative to RPC is a line-oriented TCP protocol (a la SMTP) that takes too long to write, introduces buffer overflows in your server, and doesn't handle Unicode correctly.
Is using synchronous communication a good idea in a loosely coupled environment?
Mind you, there's a world of difference between synchronous messaging (think HTTP) and RPC.
Synchronous messaging does not mean RPC, but RPC means synchronous messaging at least to some extent. In a loosely coupled environment, I would choose message oriented middleware over RPC any time.
Of course terms like loosely coupled mean practically nothing. Your loose could be very different from mine.
There are also different aspects into coupling. Components can be coupled because of same data structures, communication protocols, assumptions about available services etc. Because RPC is quite strongly connected to underlying PL and paradigm, those things tend to leak out. That's why I have hard time understanding how one could use it in a loosely coupled environment.
Modern RPC protocols like XML-RPC and (to a certain extent) SOAP rely on single cross-language data model. This data model is roughly equivlant to that used in most dynamic scripting languages: strings, integers, floating-point numbers, booleans, binary globs, lists, and sets of key/value pairs.
A straightforward data model has sharply reduced leakage of underlying language crud onto the wire.
An example of a loosely-coupled system might be the original NFS file operations: you could read and write files, list directories, and so on, but the server maintained no per-client state. The server never knew whether a file was open, or whether if any locks existed. Up to a point, this worked fine.
The trouble appeared when the NFS architects tried to implement Unix deletion and locking semantics, which require per-client state. This required adding all sorts of buggy daemons and distributed protocols that still don't work quite right. As usual, tight coupling causes headaches.
I don't think these issues have much to do with synchronicity versus asynchronicity. All things being equal, I slightly prefer synchronous systems (unless scalability is required), because most programmers can get it kinda-sorta right--as opposed to asynchronous RPC, which is generally screwed up by all but the best programmers.
You're advocating HTTP without realizing it :-)
make server functions idempotent
the whole GET/POST thing
use keyword arguments
GET query parameters (name=value), POST form fields (including binary attachments and such)
Unicode
response encodings
introduces buffer overflows in your server
use a standard HTTP server - there's lots of them, including embedded ones like Jetty
REST (passing RPC messages as CGI parameters) is basically OK as long as your parameters fit a simple key-value model and your result is a file. This happens often enough in the real world.
However, as soon as you need to pass complex data structures across the wire, REST gets pretty hackish. If you find yourself doing funny encoding tricks to stuff your arguments into CGI parameters, or doing a lot of XML parsing on the returned document, it's easier to just use an off-the-shelf RPC library.
There's other problems with REST, too. For example, there's no standard, well-implemented way to package up multiple RPC calls into a single request (similar to XML-RPC multicall, for example). You'd think that HTTP pipelining would let you do this, but I've yet to test a standalone HTTP library that could avoid a gratuitous round-trip per request. Maybe that's improved in the last year or two?
RPC is fine, under certain conditions:...your clients and servers are loosely coupled.
But RPC pretty much ensures that your clients and servers are tightly coupled, due to versioning issues, proceedure call semantics, and a whole host of requirements put on both sides in order to make it look like a proceedure call.
The RPC problem is a perfect example of a more general issue in both software and engineering that I can’t resist commenting on. As engineers and scientists we are trained to be analytic. Analysis is certainly powerful and beautiful, and we often try to extend analysis and abstraction into the environment in order to avoid the ugly consequences of dealing with reality. RPC is the perfect example since it tries to extend the analytic environment of the computer into the network and the world. Getting this strategy to work is always a matter of further and further extensions. Taken to its logical conclusion it amounts to specifying and thus controlling the entire world! Is this really what we want to do? Why not just accept reality and deal with it? In the case of communication it is only a matter of adaptation. Provide information on the capabilities of the various parts and configure as needed. This might be more work but it is “realisticâ€, and it doesn’t necessarily prevent abstraction
REST is not a substitute for RPC, but a different model altogether.
It's no problem to send an XML document (or any other binary content) via POST, if the transfer of more complex data structures is required.
The semantics of GET are such that the URL by itself should be sufficient to identify the requested resource, so very complex search queries should probably be performed by POSTing the search to some query handler that uses it to create a resource representing the search and gives the URL of that resource in return, then GETting the query results from that URL. If the query will take time, then the URL could alternatively be used to provide a query status indicator. That way the client can get on with something else while the search is being performed, polling the result URL periodically to see if it's been updated.
Because REST transfers aren't RPC calls, there isn't any sense in trying to package up multiple RPC calls into them. But, again, there's nothing to stop you sending a lengthy document representing a batch of work to be done, and getting a document representing a schedule (with a URL to query the status of each work item requested) in return. For multiple tasks, it's probably better to do things asynchronously anyway.
It all depends on the nature of the application. If the application wants to drive other applications or be driven by other applications, then the RPC abstraction works. Examples of such applications are defense applications, where every module sends and receives commands in order for the whole system to be at a specific state at any moment in time.
If the application only wants to submit data into a server, then PRC is not good; mainly it is slow, and not very dynamic. There are other protocols that are much better, including plain SQL. | http://lambda-the-ultimate.org/node/982 | CC-MAIN-2018-05 | refinedweb | 2,975 | 60.95 |
Keith Brown
Pluralsight
January 2006
Applies to:
Microsoft Visual Studio .NET
Code Access Security (CAS)
Summary: The .NET deployment model is based on clients pulling the latest version of an app from a Web server. While this eliminates a lot of headaches, how is a client to know the code is secure? Keith Brown explains. (11 printed pages)
Introduction Trust Levels The Big Picture Evidence Permissions Policy Inside .NET Security Policy Conclusion
The Microsoft .NET Framework is a great platform for developing and deploying smart clients. The .NET deployment model is based on clients pulling the latest version of an app from a web server, which eliminates a lot of headaches. However, this introduces the potential for a client to download malicious code. How is a client to know the difference?
To deal with this, the .NET Framework introduced a security system called Code Access Security (CAS). CAS helps centralize trust decisions and introduces the notion of partially trusted code, which can be run with reduced permissions. I'll start by introducing some concepts and painting the overall picture of what CAS is intended to do as well as how it hangs together, and finally I will drill down a bit to dispel a little of the magic.
Before the .NET Framework existed, Windows had two levels of trust for downloaded code. When browsing the web, you probably remember seeing dialogs like the one shown here:
Figure 1
There are two choices here: Yes and No. They represent levels of trust in the code you're about to install and run. If you choose No, the code won't run. If you choose Yes, the code will run with all the permissions you currently have based on your user login. Like most people, you're probably a member of the Administrators group, which means the downloaded code can do anything it wants to your machine.
This old model was a binary trust model. You only had two choices: Full Trust, and No Trust. The code could either do anything you could do, or it wouldn't run at all.
In the managed world, you still have these two options, but the CLR opens up a third option: Partial Trust. When you use partially trusted code, it will be allowed to execute, but it will be constrained by the .NET Framework and won't necessarily be able to do all the things you can do. In fact, there are a whole raft of permissions that control exactly what the code is allowed to do, and as you'll see shortly, these permissions can be granted and revoked using .NET security policy.
The most important concept to understand at this point is that partial trust grants a set of permissions that will always fall somewhere between no trust and full trust, where no trust means the code cannot run at all, and full trust means the code can do anything the user running it would normally be allowed to do. Fully trusted code run by an administrator can do administrative tasks..
Figure 2
So when should you choose to run your code with partial versus full trust? Well, as a developer this decision is not in your hands. Code Access Security was not introduced to protect applications from users. Its main goal is to protect users from potentially malicious applications. Remember, I'm talking about code that's downloaded over the network here. Locally installed applications are fully trusted by default.
System administrators ultimately control security policy. If you want users to be able to run your smart client application from the network without having to install it on their machines, you'll either need to convince the system administrator to tweak security policy to allow your code to run with full trust, or you'll need to write your code carefully so that runs properly with partial trust. Doing the latter requires a little bit of learning and a lot of patience, which is why I'm dedicating another upcoming article to the topic of writing partially trusted code.
Before diving into the details of how Code Access Security works, let's stand back and look at the big picture. When an assembly is loaded, the CLR gathers evidence about that assembly. This includes the download location and might also include information about who authored the assembly.
The CLR feeds this evidence into its security policy engine, which decides which permissions to grant to the assembly based on the evidence. Security policy is controlled by administrators and users on the machine on which the code will run. The result is a permission set, which the CLR attaches to the assembly. These permissions will help decide what the code in the assembly can or cannot do.
Now, how are these permissions enforced? Stop and think for a moment about the managed code that you write. If you want to send an e-mail, write to a file, call a Web service or a stored procedure in a database, or even do something simple like reading an environment variable, what do you do? You use a class that's provided by the .NET Framework. It's these classes that gate access to sensitive resources like the file system, network, databases, and so on. If security policy didn't grant your assembly the permission to write to a particular file, the FileStream class will throw a SecurityException if you try to open that file for writing.
So what's to stop you from simply going around the .NET Framework classes and calling directly to a Win32 function; say, CreateFile? Well, in order to do this, you must use the .NET Framework's interop layer, and that layer will throw a SecurityException if you haven't been granted permission to use interop. This is the type of permission that's only granted to fully trusted code. You won't be able to use P/Invoke or COM interop in a partially trusted assembly.
Oh, and since partially trusted code is also required to be typesafe (it must be verifiable or it won't run), you can't use pointers to get around the .NET Framework classes and call to native code directly. In short, partially trusted code is sandboxed and restricted by .NET security policy. Fully trusted code has none of these restrictions.
Now let's drill down a little deeper and look at the three components of Code Access Security: evidence, permissions, and policy.
As with just about all aspects of Code Access Security, there are classes that represent each form of evidence, and you can write your own if you have specialized needs. Here are the most common types of evidence you'll encounter in the .NET Framework version 1.1:
Figure 3
These are real classes that you can find in the System.Security.Policy namespace, and I've categorized them to help you understand where they come from. Before the CLR even downloads an assembly, it's got to have an URL to find the assembly in the first place. This will typically be a file:// URL or an http:// URL, depending on whether the assembly is installed on the local machine, or being loaded from the network.
The CLR computes zone and site evidence from the URL, as well. The former is simply the Internet Explorer zone that the URL belongs to. For example,:\temp\myapp.exe is in the MyComputer zone, while will most likely fall into the Internet zone. I say "most likely" because zones may be customized. For example, if happens to be a website that you trust, you may have added it to your list of trusted sites via the Internet Options control panel, in which case the zone evidence would be Trusted.
Figure 4
For http:// style URLs, site evidence is also computed. In my example, the site would be.
After downloading the assembly, the CLR examines its contents to determine hash, publisher, and strong name evidence. The hash value of an assembly is the SHA1 or MD5 hash of the assembly manifest, which contains hashes of each module making up the assembly. For all practical purposes, if the assembly changes (even if it is simply recompiled), its hash value will change.
Some authors sign their assemblies using Authenticode. In this case the CLR will produce publisher evidence that contains the code-signing X.509 certificate. Because of the public key infrastructure behind these certificates, publisher evidence is a reasonably secure way of making security policy decisions. If a publisher's private key is compromised, she can report it to her certificate authority who will publish a new Certificate Revocation List (CRL). This means that over time, as users download assemblies signed with the compromised key, the CLR will recognize that the publisher's certificate has been revoked and won't run any assemblies signed by the revoked key.
Most assemblies also have strong names, which will be packaged into evidence as well. Beware using strong name evidence in security policy however, as there is no key revocation infrastructure as there is with certificates.
The .NET Framework version 1.1 defines a whole host of permissions, protecting everything from file and database access to thread suspension and resumption. Just like evidence, each permission is represented as a class, and you can define custom permission classes if the need arises. To give you a taste, here are the permission classes defined in System.Security.Permissions:
Figure 5
The identity permissions should look familiar, as they map directly onto the corresponding evidence. These are typically used to restrict which code can use your classes or methods, which only really works in partially trusted scenarios. This is a more advanced topic that I won't drill into any further here.
The resource permissions are the ones you'll most likely be interested in. These permissions control which classes in the .NET Framework you can use, and sometimes even which methods or properties you can use on those classes. Ultimately this controls which resources your code has access to.
Most of these permissions have parameters. The UIPermission class has parameters that control what type of windows you can draw in, as well as whether you'll be allowed to read the clipboard or not. The SecurityPermission class has a load of flags that control things as varied as whether your code can suspend threads to whether it can call through interop and get to native code directly.
Here's an example. Say your assembly is granted FileIOPermission to read the path "c:\temp\*". This means your code can read any file under c:\temp, including files in subdirectories (or more specifically, any file that the user running your code is allowed to read based on her login). And you can do this silently, without prompting the user, because your calls to File.Open() or the FileStream constructor will succeed.
But if your assembly is partially trusted, it's much more likely that it won't be granted FileIOPermission. If this is the case, trying to open the file directly, using File.Open() or by creating a new FileStream object, will generate a SecurityException, because those operations demand that you have FileIOPermission.
Your partially trusted assembly may only be granted FileDialogPermission for reading files. In this case, you may use the OpenFileDialog class to prompt the user to pick a file. You may then use the OpenFile method on OpenFileDialog to get your hands on a read-only FileStream. This way, lesser-trusted assemblies may open files if they are willing to get the user involved.
It's interesting to note that the FileName property on the OpenFileDialog class demands that you have FileIOPermission, because it discloses the location and name of the file the user picked. If you only have FileDialogPermission, you're only allowed to read the contents of the file, not to discover where it came from! These little gotchas can be a bit frustrating, which is yet another reason to follow up with the article on writing partially trusted code.
Earlier I mentioned how the CLR's security policy engine examines evidence and constructs a permission set for an assembly at load time. No one policy will fit all organizations, so the policy engine reads a set of XML files that contain permission grants. These files make up what is called .NET security policy.
Fortunately you don't have to edit these files by hand. The .NET Framework Configuration administrative tool makes editing policy pretty straightforward, and has several wizards to make the process easier for people who are new to .NET. But before I start drilling down into policy, let's talk about the default policy that ships with the .NET Framework, as you're guaranteed to run into it at some point.
The default policy works on the premise that code installed on your machine is more trusted than code that is loaded from the network. And, of course, there is a provision ensuring assemblies in the .NET Framework itself are always fully trusted. After all, some piece of code has to be trusted to implement this whole security infrastructure.
To keep things simple, Microsoft's default policy grants permissions based on zone evidence. The MyComputer zone is granted full trust. This represents locally installed code such as applications you've installed from a CD-ROM or DVD-ROM, or programs that you have manually downloaded and saved to your disk before running.
Next is the LocalIntranet zone. If you don't explicitly list which Web sites are part of this zone, it's simply defined as any domain name without a "." in the name (in other words, it's a NETBIOS hostname as opposed to a DNS name). For example, would be considered part of the LocalIntranet zone by default, while would normally drop into the Internet bucket because of the dots in the name, as would. Note that the decimal equivalent will not be accepted—you can read more about the "Dotless-IP Address" bug in Michael Howard's book, "Writing Secure Code, 2cnd ed."
Default security policy assigns what you might call a "medium trust" permission set to assemblies in the LocalIntranet zone. This means code from your local network will normally run with partial trust. If you've ever tried running a managed executable from a shared drive, even one on your local machine (like z:\MySmartClient.exe), you may have run into a SecurityException or two, because the zone you're running from is no longer the MyComputer zone.
Internet Explorer defines a couple of zones designed for customization by system administrators: the Trusted Sites and Restricted Sites zones. The former is granted a "low trust" permission set, and the latter is not trusted at all, which means managed code from a restricted site will not run by default.
And, finally, the Internet zone is the bucket into which all URLs fall if they can't be sorted into any of the other zones. The default assignment for this zone has had a history of change, oscillating between low and no trust, but as of version 1.1 of the .NET Framework, it has stabilized and is mapped to the low trust permission set by default.
To begin to understand how policy works, you should spend some time experimenting with the .NET Configuration tool, which you'll find under the Administrative Tools folder on any computer that has the .NET Framework installed. Let's start by looking at the default machine policy, which I've shown below:
Figure 6
Because there are so many different permissions, and each permission has so many parameters, there is a folder called Permission Sets where you can construct sets of permissions that you'll commonly use. Default policy is already organized this way, as you can see. There are four built-in permission sets that represent the four default levels of privilege granted to code that I mentioned earlier: Full trust, medium trust, low trust, and no trust. Here are how they map onto the actual names in the policy:
By the way, don't let these names confuse you—it's an unfortunate historical artifact that the middle two are called LocalIntranet and Internet. It would be less confusing if they were named something more generic like MediumTrust and LowTrust.
You can see where these four levels come into play if you run some of the wizards provided by the .NET Configuration tool. For example, if you click on the Runtime Security Policy node (this is not shown in my previous example, but you'll see it if you run the tool), you'll see a list of tasks you can perform. One of them is called Adjust Zone Security, and I've shown the interesting part of this wizard below (note the slider bar with four levels):
Figure 7
In this screenshot, I've selected the Internet zone, and you can see that the slider bar is in the unlabeled second position, or what I call the "low trust" position. If you click around on the different zones, you'll see the slider go up and down depending on the zone; for example, the "My Computer" zone should be FullTrust by default. Pay attention to the description of the trust levels as the slider moves. This will give you a bit of a feel for what sorts of restrictions are in place at the various trust levels. If you're wondering why the phrase "might not be able to" is used a lot in these descriptions, bear with me and I'll show you soon!
Here's an educational experiment. Change the permissions assigned to the My Computer zone by dropping the slider bar all the way down "No Trust." Don't worry, your machine won't melt if you do this! After you make this change, try compiling and running a simple application like the following:
class DoesNothing {
static void Main() {}
}
Running this code should cause you to see a PolicyException that indicates the code is not allowed to execute. However, if you close the .NET Configuration tool (which is a managed application) and rerun it, you'll find that it runs just fine even with your new setting, and you can go back to the wizard and set your policy back to normal. You can also right-click the Runtime Security Policy folder and choose "Reset All" to get back to the default policies after you're done experimenting.
So what exactly is going on here? To explain what's happening, I need to talk a bit about the Code Groups folder in security policy, which is where permissions are actually granted. Each code group is a conditional permission grant. For example, right-click on the code group named My_Computer_Zone, and bring up its property sheet:
Figure 8
You can see that each code group has a membership condition and a permission set. The condition in this case is based on zone evidence. This code group consists of all assemblies in the My Computer zone (in other words, assemblies that have Zone evidence with MyComputer as the zone).
If you click on the permission set tab, you'll see that any code matching this group will be granted the full trust permission set. Each code group is simply a conditional permission grant. If your assembly matches the membership condition, you'll get all the permissions listed in the corresponding permission set. In this case, it's FullTrust.
Now you should be able to figure out what the wizard did earlier: as you moved the slider up and down and made your changes, you were really just changing the permission set for the My_Computer_Zone code group between one of the four I listed earlier: FullTrust, LocalIntranet, Internet, and Nothing.
If you spent some time experimenting, you may have noticed that even with the My_Computer_Zone code group set to grant Nothing, you are still able to run programs like the .NET Configuration tool and Visual Studio .NET. While these programs are not entirely managed, they do load managed code from the local machine that arguably should not run if you've cranked the My Computer zone down to no trust.
The answer to this riddle can be found by drilling a bit deeper into the code group folder (note that I've expanded My_Computer_Zone):
Figure 9
You see, code groups are arranged in a tree. All we were doing with the wizard was changing the permission set associated with the My_Computer_Zone code group, but there are a couple of code groups under that one that still grant permissions, no matter what My_Computer_Zone grants! These are the grants that allowed the .NET Framework and associated tools to function even in the face of radical changes such as the experiment I proposed earlier. We evaluate code groups from the top of the tree (All_Code) down, and as long as any parent node matches (All_Code matches all assemblies), the children will be evaluated. So the Microsoft_Strong_Name, which is on all CLR assemblies and tools, allowed the .NET Configuration tool to run. Because you can edit these code groups directly, the wizard we used earlier that simply changes the zone-based code groups uses the rather vague wording "might not be able to" when it comes to describing restricted actions.
While I don't have time in this article to discuss all the intricacies of policy, you can see that it's really flexible. You can use the code group tree to grant permissions based on different combinations of evidence, and you can use the Permission Sets folder to organize permissions so you're not duplicating effort.
Deploying code over a network is dangerous without a comprehensive security system to verify and constrain that code, and Code Access Security is Microsoft's solution to the problem. It's a flexible beast, if somewhat complex, and as a developer working on smart clients, you should learn all you can about it, as it will play a big role in your life!
Keith Brown is a co-founder of Pluralsight, a premier developer training company, where he focuses on security for developers. Besides writing the Security Briefs column for MSDN Magazine, he authored The .NET Developer's Guide to Windows Security (Addison Wesley, 2004) and Programming Windows Security (Addison Wesley, 2000). Keith also speaks at many conferences, including TechEd and WinDev. Check out his blog at. | http://msdn.microsoft.com/en-us/library/aa719096(VS.71).aspx | crawl-002 | refinedweb | 3,749 | 61.67 |
Computer Science Archive: Questions from October 02, 2008
- Anonymous askedWrite a program to computethe interest due, total amount due, and the minimum payment for arevolving... Show more
Write a program to computethe interest due, total amount due, and the minimum payment for arevolving credit account. The program accepts the account balanceas input, then adds on the interest to get the total amount due.The rate schedules are the following: the interest is 1.5% on thefirst $1,000 and on any amount over that. The minimum payment isthe total amount due if that is $10 or less; otherwise, it is $10or 10% of the total amount owed, whichever is larger. Your programshould include a loop that lets the user repeat this calculationuntil the user says she or he is done.• Show less1 answer
- Anonymous askedWrite a program that will read a line of text and output a list of all the letters that occur in th... More »0 answers
- Anonymous askedDefine and test a class for a type called CounterType. An object of this type is used to count thin... More »1 answer
- Anonymous askedReq. for th... Show moreFind open circuit voltage =>Voc; short circuit current =>Isc;and equivalent resistance =>Req. for the
one port circuit.
Vsource = 8V, R1=10kΩ, R2=30kΩ, R3=2.5kΩ,R4=2kΩ, R5=3kΩ, and Isource = 0.3 mA
(Voltage source = 8V = Vs, and Current source = 0.3mA =Is)
• Show less1 answer
- Anonymous asked2 answers
- Anonymous askedUsing figure 7.1 of book (Introduction toAlgorithms (2nd) by Thomas H. Cormen, Ronald L. Rivest, Cha... Show moreUsing figure 7.1 of book (Introduction toAlgorithms (2nd) by Thomas H. Cormen, Ronald L. Rivest, Charles E.Leiserson, Clifford Stein) as a model,illustrate the operation of PARTITION on the array :
A=<6,8,-4,4,3,5,2,4,9,4,5>.
• Show less1 answer
- InfamousTea6628 askedWrite a class Cirle.java that calculates volumeand area of a circle. There can be two type of constr... Show moreWrite a class Cirle.java that calculates volumeand area of a circle. There can be two type of constructors in theclass. One is the empty constructor, which does not take anyparameter input. And the other is a constructor which takes fourinputs, that is shown next:public void Circle(double ox1,double ox2, double px1, double px2)Where, O(ox1, ox2) isthe origin of the circle and P(px1, px2) isa point on the circleAn outline is provided:
public class Circle
{
//private variables should be declared here
public void Circle()
{
}
public void setOrigin(double o1, double o2)
{
}
public void setPoint(double p1, double p2)
{
}
public double getRadius()
{
}
public double getVolume()
{
}
public double getArea()
{
}
public static void main( String args[])
{
/**instantiate two Circle objects here with different origin anddifferent point on the circle.
*Then, comapre whether their area and volume are equal ornot
*/
}}• Show less***********WILLRATE************1 answer
- Anonymous asked1.) Suppose the population of the world is 6... Show moreCould you please send solution for following question.1.) Suppose the population of the world is 6 billion, and thatthere is an average of 1000 communicating devices per person. Howmany bits are required to assign a unique host address to eachcommunicating device? suppose that each device attaches to a singlenetwork and that each network on average has 10000 devices. Howmany bits are required to provide unique network ids to eachnetwork?Please send solution as soon as possibleThank youHari Putcha• Show less1 answer
- Bunnieee asked1 answer
- RottenPanther7864 askedvariable, and the input is: 276. Choose... Show morea) Suppose that x is an int variable, ch is a char variable, and the input is: 276. Choose the values after thefollowing statement executes: cin>>ch>>x;.• Show less
Suppose that alpha isan int variable andch is a char variableand the input is: 17 A.
What are the values after the above statementsexecute?
____ 3. What is the output of the code fragmentabove if the input value is 4?
____ 4. What is the output of the C++ codeabove?
____ 5. A(n) _____ statement causes animmediate exit from the switchstructure.1 answer
- RottenPanther7864 asked1 answer
-contains several sets of data in the order given below: height(meters), velocity (mph), and angle (degrees - 90 is straight up and 0results (time and distance) are printed to the screen in a wellformatted table using appropriate iomanip functions and having anappropriatethe cliff (in the x direction) was greater than the height of the cliff.1 answer
- Anonymous askedWrite an assembly program to copy data from table 1 (at location1100) to table2 (at location 1114).... Show moreWrite an assembly program to copy data from table 1 (at location1100) to table2 (at location 1114). They both have twenty values.Start program at $4000.
• Show less0 answers
- Anonymous askedHey, our homework requires us to come up with a code where it wouldcalculate the total resistance fo... Show moreHey, our homework requires us to come up with a code where it wouldcalculate the total resistance for a given number of resistors,however, the teacher wants us to write a code where it can takeinfinite values for the resistors (i.e., the teacher wants aprogram that will accept as many as possible inputs of theresistance per given resistor). Hence, if the teacher wanted to runthe code and check the total resistance of a circuit that has 50resistors, then it would do so along with it he wanted to run theprogram to test the resistance of over 100 resistors, it wouldstill work.
My question is what coding in "C+" would allow and infinite amountof data to be put in because I know there is not a code forinfinity? I already know that this program will require a "whileloop," however, I just need to know how to set-up the "infinite"values portion for the resistors - everything else I can handle.Thanks.
• Show less1 answer
- Anonymous asked1. What are the disadvantages of having too many featuresin a language? 2. What are the three fundam2 answers
- Anonymous askedI have problem with my homework I reallystuck on this. Can you please help me ?Given a set of the fo... Show moreI have problem with my homework I reallystuck on this. Can you please help me ?Given a set of the followingdiscrete numbers/grades: 78, 80, 60, 60, 95, 89
1. The "average" or "arithmetic mean" of the grades isthe sum of all grades
divided by the number of thegrades.
(78 + 80 + 60 + 60 + 95 + 89)
average = ------------------------------- = 77
6
2. The "median" is the value located in middle of thesequence. If the number
of data is odd, the middle value isthe median, but if the number of data
is even, the average of the twomiddle values is the median. Since the
number of data here is even, themedian is
(60 + 60)
median = ------------- = 60
2
3. The "mode" is the value or grade that has a higherfrequency which is 60
since it occurs twice more than anyother grade.
4. The "range" is the difference between the highestand lowest values. The
highest grade in our example is 89and the lowest is 60.
Thank you so much. • Show less2 answers
- Anonymous asked1. if you were to design your own programming language,what features y... Show moreLanguage Evaluation Criteria:1. if you were to design your own programming language,what features you would include and why so that itbecomes widely adopted. what implementation method would yourecommend for it and why?• Show less1 answer
- Anonymous askedThe greatest comm... Show moreIn this assignment you will practice implementing repetitionstatement.
Assignment:
The greatest common divisor problem:
The greatest common divisor (gdc) of two nonzero integers p and q(p > q) is the largest positive
integer that will evenly divide both p and q with no remainder. The“brute force” method of
finding the gcd involves checking each of the divisors q, q-1 ,q-2, …, 4, 3, 2, 1 until you find
one that will divide both p and q.*
Write a program that finds the gdc of any 2 numbers p and q. Yourprogram should ask the user
to input p and q and validate that p is greater than q and thatboth are nonzero. If the user enters a
value for q that is less than p your program should prompt him toretry until he inputs a valid
value.
Requirements:
• Your program should use at least 1 while and 1 do whileloop.
• Your program should use break command (will cover onTuesday Sept 30th lecture ).
For a head start check the book on using break in loops. • Show less1 answer
- Anonymous askedYou are able to service shuffles, nanos,... Show moreSuppose that you operate a small firm that services iPods.
You are able to service shuffles, nanos, classics and iTouchversions of the iPod.
You have four standard kinds of services that you do:
Diagnose a problem, replace a battery, replace memory, and generalrepair.
A customer can arrange for a diagnosis over the phone or viathe web.
Repairs and replacements must be pre-approved by the customer.
Customers should be able to track the progress of theirservice.
There are some special kinds of pricing that areavailable.
Corporations can receive a bulk discount.
Individuals can purchase a well iPod plan that covers allrepairs for a year.
1) Identify the system concepts.
2) Identify the system associations.
3) Produce a conceptual model.
• Show less1 answer
- Anonymous askedf a... Show moreWrite a program select of type 'a list * ('a -> bool) -> 'alist that takes a list and a function f as parameters.Yourfunction should apply f to each element of the list andshould return a new list containing only those elements of theoriginal list for which f returned true.(the elements ofthe new list may be in any given order) ([1,2,3,4,5,6,7,8,9,10]isPrime) should result in a list like [7,5,3,2].This is an exampleof an higher order-function.
• Show less0 answers
- Anonymous asked'a list whose output list is the same as... Show moreexercise 9. write a function del3 of type ' alist -> list ->'a list whose output list is the same as the input list, but withthe third element deleted.your function need not behave well onlistswith length less than 3.
• Show less0 answers
- Anonymous askedintthat returns the largest element of a list o... Show moreexercise 13:Write a function max of type int list -> intthat returns the largest element of a list of integers.Yourfunction need not behave well if the list isempty.hint:write a helper function maxhelper that takes asecond parameter the largest element seen so far.then you cancomplete the exercise by defining
fun max x=maxhelper (t1 x, hd x)
• Show less0 answers
- Anonymous asked'a list that takes a list and a intege... Show moreexercise 11:write a function cycle of type 'a list *int -> 'a list that takes a list and a integer n as input andreturns the same list, but with the first elementcycled to the endof the list n times.(make use of your cycle 1 function from thelist [3,4,5,6,1,2])
• Show less0 answers
- Anonymous asked'a listwhose output list is the same as the in... Show moreexercise 7.write a function cycle of type ' a list -> 'a listwhose output list is the same as the input list, but with the firstelement of the list moved to the end.For example, cycle1 [1,2,3,4]should return [2,3,4,1].
• Show less1 answer
- Anonymous askedHi! I haven't really worked with the C language (mainly knowJava). This is also the first time I'm u... Show moreHi! I haven't really worked with the C language (mainly knowJava). This is also the first time I'm using forks to createchild processes. I have a lot of thoughts about this program,but can't seem to put them onto paper. Here's the assignment,I hope someone can help. Basically I need a master.c thattakes in a set of single-digit positive integers and uses a slave.cfork process to add these together. The tricky part is that theprofessor wants it done in such a way that if it was "master 1 2 34 5" it would calculate it as 1+2, 3+4, 5+0 >> then3+7, 5+0 >> then 10+5 >> then the final sum of15. The following is the professor's assignment. [note:if you can help me with it just summing the numbers, I'd be happywith partial credit.]
You will write a program in C that uses multiple processes tocompute the sum of a set of (small) positive integers.
There are two types of processes for this homework:
I). A set of "slaves" processes: Each slave processgets two small integers from its argv, computes itssum and returns the result using the exit system call. So, a slaveprocess is created for every sum.
II) A "master" process: This process is responsible forcreating the slave processes, and coordinating the computation.Note that all the computation is done by the "slave"processes. All the numbers are provided in the command lineargv. The master process also set a timer at thestart of computation to 3 seconds. If the computation has not beenfinished by this time, the master process kills all theslaves and then exits. You should print an appropriate message(s)in this case. Note that the master process may have tocreate multiple sets of slave processes. For example, ifthere are 8 numbers to be added, then the master processwill first create 4 slaves and get the result from them.At this point there are 4 numbers, and it creates 2.
slaves. Finally one slave is created to compute theoverall sum. To make it simpler, if the number of integers to addis odd, the master adds a 0 to the list of numbers. Thismay happen at any step during the computation. The code formaster process should be compiled separately and itsexecutable code should be called master. The executablecode for the slave process should be calledslave. So, to compute the sum of the numbers 1 through 7,the command line will look like
master 1 2 3 4 5 6 7
Since the results are passed around by exit systemcall, keep the numbers small (single digit). Each slaveprocess prints its process id, its operands, and their sum. Eachtime the master gets a result from a slave, it prints thepid of the slave and the partial sum.
Test your programs for the following cases:
master 1
master 1 2 3 4 5
master 1 2 3 4 5 6 7 8
Thanks in advance for any assistance! :)
• Show less1 answer
- Anonymous askedPlease add notes or comments so I... Show moreI need help with this program in Microsoft Visual Studios 2008C++Please add notes or comments so I can learn from you,thax.Overloaded Hospitalwrite a program that computes and displays the charges for apatuent's hospital stay. First, the program should ask if thepatient was admitted as an in-patient or an out-patient. If thepatient was an in-patient, the following data should beentered:- The number of days spent in the hospital- The daily rate- Hospital medication charges- Charges for hospital services (lab tests, etc.)The program should ask for the following data if the patientwas an out-patient:- Charges for hospital services (labs, tests, ect.)- Hospital medication chargesThe program should use two overloaded functions to calculatethe total charges. One of the functions should accept arguments forthe in-patient datam while the other function accepts arguments forout-patient information. Both fuctions should return the totalcharges.Input Validation: Do not accept negative numbers for anydata.• Show less1 answer
- Anonymous asked1 answer
- Anonymous askedThe server has upload rate of 50 Kbps. The... Show moreConsider a system consisting of one server and 3clients.
The server has upload rate of 50 Kbps. The clients each
has upload rate of 50 Kbps and download rate of 20Kbps.
Assume a fluid model.
Is it possible to distributed a 300 Kb file in 10 secs?Explain.
Give a distribution scheme in which a file of 300 Kb is
distributed in 15 secs.
Assuming the upload rates of the clients are 30 Kbps each.
Give a distribution scheme in which a file of 300 Kb is
distributed in 15 secs.
-----------------------------------------------------------------
Consider a systems in which thenetwork layer can
completely corrupt a packet, but such that no more
than 2 packets in a row are corrupted. Also, all
packets are delivered in order, so there is no
re-ordering of packets. Design a reliable transport
layer protocol using this unreliable network layer.
• Show less0 answers
- airelemental135 asked0 answers
- MathNerdForever askedThe purpose of this homework is to extend input and output beyondthe keyboard and monitor. The inpu... Show more
The purpose of this homework is to extend input and output beyondthe keyboard and monitor. The input and the output in thisproblem will use files.
Write a program to compute the grades for a course. Thecourse records are in the “class data.in” file.
Each line in the input file is guaranteed to have the followingformat: a student’s last name, then a space, thestudent’s first name, a space, then 10 quiz scores. Each of the integer scores is separated by a single space. Your program will take its input from this file and put its outputinto a second file.
Your program should work for an input file with at least one lineof data.
Thedata in the output file will be the same as the data in the inputfile with one addition. At the end of each line there will bea type double number that is the average of the ten preceding quizscores.
Here is an example of an input file line and its correspondingoutput file line:
input
Skilling Tom 43 56 23 65 55 75 75 75 75 32
output
Skilling Tom 43 56 23 65 55 75 75 75 75 32 57.40
Specifications:
1. Put the input file into the same folder as your *.exe.
2. Create the output file in the same folder as your *.exe and nameit “class data.out”. (You can view this fileusing any text editor, like Notepad.)
3. Validate that the input file and output file are associated witha stream using the ifstream andofstreamfunction is_open(). Give the user error messages if either stream cannot beestablished.
4. Assume that all the values in the input file are valid for bothvalue and type.
5. Output the quiz averages with two decimal places.
6. Each time you run the program you should overwrite any existingversions of the output file.
I would suggest that you develop your code in the followingorder. I wouldn’t go to the next step until theprevious step is working perfectly.
1. Start with reading the first line of the input file intovariables.
2. Then write the first line from the input file to the outputfile.
3. Repeat steps 1 and 2 for the entire input file.
4. Then include the calculation of the quiz average, and writingthis value to the output file.
5. Then include code to verify that both the input and output filestreams have been established. Test this code by deleting theinput file from your *.exe folder and then running the program.2 answers
- contains several sets of data in the order given below: height (meters), velocity (mph), and angle (degrees - 90 is straight up and 0 results (time and distance) are printed to the screen in a well formatted table using appropriate iomanip functions and having an appropriate the cliff (in the x direction) was greater than the height of the cliff.1 answer
- Anonymous askedNumerologists claim to be able to determine a person's... Show moreCan someone please help me with this program?
Numerologists claim to be able to determine a person's charactedtraits based on the "numeric value" of a name. The value of a nameis determined by summing up the values of the letters of the namewhere 'a' is 1, 'b' is 2, 'c' is 3, etc., upto 'z' being 26. Forexample the name "Zelle" would have the value 26+5+12+12+5=60.Write a program that calculates the numeric value of a singlename(without spaces) provided as input. Note that the programshould be case-insensitive: both 'Z' and 'z' should count as26.
cannot use "if" clauses so i must use string library functions • Show less1 answer
- Anonymous askedI am making a word processor with java and I am stuck on writing a code to search for words and retu... More »0 answers
- Anonymous askedDefine a function less of type int * int list-> int list so that less(e,L) returns a list... Show moreDefine a function less of type int * int list-> int list so that less(e,L) returns a list ofall the integers in L that are less than e. • Show less0 answers
- Anonymous askedWrite an unsigned 16-bit software implementation of both themultiplier and the divider architectures... Show moreWrite an unsigned 16-bit software implementation of both themultiplier and the divider architectures found in the book.
You are required to read two positive itegers from the console.Your program will mulitply and divide them and provide the product,quotient, and remainder.
Assume all values are positive.
Here's an example run:
Enter an integer : 10
Enter an integer : 4
Product is 40
Quotient is 2
Remainder is 2
You may not use any MIPS multiply or divide instructionsfor this project!
• Show less1 answer
- airelemental135 askedoutput = polar(theta,r,'-'... Show morefunction output = polygon(n)
theta = 0:2*pi/n:2*pi;
r = ones(1, n+1)*100;
output = polar(theta,r,'-');
^The above is the code for the m-file. In the command window, typein: polygon(5). It returns a value as an output "ans =" along withthe polar graph. I want to know what is triggering the output,because I don't want ANY output in the command window. When youcreate a regular polar graph in the command window, you don't getany output. Why does my function generate an output? Can someonefigure out what is happening and how to fix this problem? I willrate LS for help.
• Show less1 answer
- Anonymous askedGiven a square matrix,write a function (and test script) to use nested loops to... Show more
function forMATLAB:
Given a square matrix,write a function (and test script) to use nested loops to sum theelements of
the diagonal (top leftto bottom right.) Now write a second function that willsum the other diagonal.
• Show less1 answer
- Anonymous askedDESIGN AN IA-32 ASSEMBLY LANGUAGEPROGRAM TO CONVERT EACH INTEGER IN X TO A... Show more
GIVEN THE DATA BELOW,
DESIGN AN IA-32 ASSEMBLY LANGUAGEPROGRAM TO CONVERT EACH INTEGER IN X TO A 32-BYTE BINARYSTRING AND A 4-BYTE HEX STRING AND SAVE THEM IN IN Y AND ZRESPECTIVELY..DATAX SDWORD 500, -500, 1000, -1000Y BYTE 4 DUP (32 DUP ('0'))Z BYTE 4 DUP (4 DUP ('0'))• Show less0 answers
- Anonymous askedwrite a program in some langugage that has both static anddynamic variables in subprograms. Create s... Show morewrite a program in some langugage that has both static anddynamic variables in subprograms. Create six large at least (100 *100 ) matrices in the subprogram - three static and threedynamic.Fill two of the static matrices and two of the dynamicmatrices with random numbers in the range of 1 to 100. The code inthe subprogram must perform a large number of matrix multiplicationoperations on the static matrices and time the process. Then itmust repeat this with the stack-dynamic matrices.Hello,Please help me. I am really in need of your help. I have asubmission tomorrow.Thanking you in advance.• Show less0 answers
- Anonymous askedWrite a generic C++ function that takes an array of genericelements and a scalar of the same type as... Show moreWrite a generic C++ function that takes an array of genericelements and a scalar of the same type as the array elements. Thetype of the array elements and the scalar is the generic parameter.The function must search the given array for the given scalar andreturn the subscript of the scalar in the array. If the scalar isnot in the array, the function must return -1. Test the functionfor int and float types.
Requirements:
· Write a main() function to make it acomplete program.· Initialize the array with random numbers.Please help me. I have a deadline tomorrow.Thanking you in advance.• Show less2 answers
- Anonymous askedA companyhas asked you to write a program that encrypts the data so that itcan be transmitted more s... Show moreA companyhas asked you to write a program that encrypts the data so that itcan be transmitted more securely. Your program should read afour-digit integer and encrypt it as follows: Replace each digit by(the sum of that digit plus 7) modulus 10. Then, swap the firstdigit with the third, swap the second digit with the fourth andprint the encrypted integer. Create an Encrypt class to do the encrypting, and write amain function to ask for input.
Example input is 1234 and the encrypted number is 124
I'm not going to lie, I have no clue how todo classes. I'm thinking I should make an array for the user'sinput then pass the array to the class (If possible?) and separatethe numbers so I can do the +7%10 to each one.
So basically...
How do I create a class that will encrypt numbers from anarray?
• Show less0 answers
- eman55 asked1 answer
- eman55 asked1 answer
- eman55 asked1 answer
- Anonymous asked1) a 2 Expr; for everyx andy inExpr, the threestrings x+y, x y,... Show more
Suppose Expr is defined asfollows:
1) a 2 Expr; for everyx andy inExpr, the threestrings x+y, x y, and (x) are in Expr.
If by an operator we mean an occurrenceof the symbol + or the symbol , show thatfor
every string x 2 Expr, the number ofa’s inx is 1 plus thenumber of operators in x.0 answers
- Anonymous askedt start or endwit... Show more
If Expr is as in the previousproblem, show that for every
string xin Expr, x doesn’t start or endwith +.• Show less0 answers
- Anonymous asked1.{b}*{ab}*... Show more
In each casefind a string in {a, b}* of minimum length
that is notin the given language.
1.{b}*{ab}*{a}*
2.({a}* [ {b}*)({a}* [ {b}*)({a}* [ {b}*)
3.{a}*({ba}{a}*)*{b}*
4. {b}*{a, ba}*{b}*• Show less1 answer
- RottenPanther7864 asked1 answer
- Farree askedA palindrome is a number or a text phrase that reads the same backwards as forwards. For example, ea... More »1 answer
- RottenPanther7864 asked3 answers
- anubis89 askedThe decimal value -85 is to be stored ina byte using 2's complement representation. What hexa... Show more
The decimal value -85 is to be stored ina byte using 2's complement representation. What hexadecimalconstant should be used?
(a) $56 (b) $AA (c) $AB (d) $D5 (e) None of these.• Show less1 answer
- RottenPanther7864 asked1. Given the declaration above, which of the followingfor loops sets the index ofgamma out of bounds2 answers
- Anonymous asked0 answers
- Anonymous askedC1. Alter the program such that only the correct outputis... Show moreAdd the following features to the program:
C1. Alter the program such that only the correct outputis sent to the
standard output stream (stdout), while error and help messages aresent
to the standard error stream (stderr). (Hint: usefprintf.)
See the expected output listed in the comment at the top of main.cfor
an example of what should go to stdout.
C2. Implement an optional command-line switch '-fFILENAME'that sends
program output to a file named FILENAME (i.e., filename specifiedas a
command line argument).
C3. Add support for matching arbitrary numbers of words, notjust 5.
(Hint: use malloc, and don't worry too much about memoryefficiency).
C4. Safeguard the program from buffer overflow attacks.(Hint: 'gets'
is BAD. Use fgets instead, which specifies the maximum numberof
characters to be read in.)
C5. Allow multiple words to be specified per line. (Hint1:Understand
'strtok'. Hint2: Be careful about the newline character '\n'at the end
of the line.)
• Show less0 answers
- Anonymous asked1 answer
- Anonymous askedIn this assignment, you will create an Agent class to represent the user. You will also create a com... More »0 answers | http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2008-october-02 | CC-MAIN-2015-06 | refinedweb | 4,686 | 65.12 |
README
swiss-ephemeris
Haskell bindings for the Swiss Ephemeris library.
See the tests in the
spec folder for thorough example usage, but here's a simple "main" that demonstrates the current abilities, inspired by the sample program in the official library:
NOTE: this library is under very active development, as such, most releases in v1.x will probably show a fastly evolving API, which is reflected by the fact that new versions have been increasing the major version numbers (in PVP, unlike semver, the first two components of the version correspond to the major version.)
import SwissEphemeris main :: IO main = do -- location of your ephemeris directory. We bundle a sample one in `swedist`. withEphemerides "./swedist/sweph_18" $ do let time = julianDay 1989 1 6 0.0 place = GeographicPosition {geoLat = 14.0839053, geoLng = -87.2750137} -- locate all bodies between the Sun and Chiron forM_ [Sun .. Chiron] $ \planet -> do -- if no ephemerides data is available for the given planetary body, a `Left` value -- will be returned. coord <- calculateEclipticPosition time planet putStrLn $ show planet <> ": " <> show coord -- Calculate cusps for the given time and place, preferring the `Placidus` system. -- note that the underlying library may decide to use the `Porphyrius` system if it can't -- calculate cusps (happens for the Placidus and Koch systems in locations near the poles.) cusps <- calculateCusps Placidus time place putStrLn $ "Cusps: " <> show cusps
The above should print the ecliptic latitude and longitude (plus some velocities) for all planets, and then the cusps and other major angles (ascendant, mc, ARMC, alternative angles.)
There's
withEphemerides to run calculations using a particular ephemerides directory and then close any used
system resources, and
withoutEphemerides to use the default ephemerides ("Moshier.")
To see actual results and more advanced usage, check out the tests. For some more advanced examples, see
swetest.c and
swemini.c in the
csrc directory: they're the test/example programs provided by the original authors. You can also play around with the C library via the authors' test page.
Notes
All the code in the
csrc folder comes directly from the latest official tarball, v2.09.03.
The
swedist folder includes the original documentation from the tarball in PDF (see the
doc) folder, and a copy of ephemeris data files.
For other formats of the original documentation, see:
The authors also host HTML versions of the manuals. Two are provided, a general reference and a programming reference. Both are very useful to get acquainted with the functionality and implementation details.
Ephemerides files
As noted in the original documentation you can omit the
setEphemerides call (or use
setNoEphemerides, or the
withoutEphemerides bracket function) and calculations will use a built-in analytical
ephemeris ("Moshier") which:
provides "only" a precision of 0.1 arc seconds for the planets and 3" for the Moon. No asteroids will be available, and no barycentric option can be used.
Note that if you're interested in the asteroid
Chiron (which is common in astrological practice these days,) you'll have to procure Ephemerides files and shouldn't use the default ephemerides.
For convenience, we bundle a few ephemerides files in this repository (see
swedist) for the time range
1800 AD – 2399 AD. If you were born before that, or plan to code e.g. transits for after that (!) or
you'd prefer even more precision, you can download more ephemerides files from the astro.com downloads page
I chose the bundled files due to this comment in the official docs:
If the [JPL] file is too big, then you can download the files sepl_18.se1 and semo_18.se1 from here:
For a full explanation of the files available, see the Description of the Ephemerides section of the original manual, also of interest is the comparison between the Swiss Ephemeris and the raw NASA JPL data.
Contributing
I've only made available the types and functions that are useful for my own, traditional, horoscope calculations. Feel free to add more! See the astro.com documentation for ideas. | https://haskell.libhunt.com/swiss-ephemeris-alternatives | CC-MAIN-2021-39 | refinedweb | 659 | 63.49 |
In this section, you will learn how to write the even numbers to file.
By using the PrintWriter class, you can write any type of data to the file. It has wide variety of methods for different primitive data types. It has a wide selection of constructors that enable you to connect it to a File, an Output Stream, or a Writer.
In the given example, we have created an instance of PrintWriter and used FileWriter as an argument. Then we have started a loop for numbers 1 to 50. If the number is totally divided by 2 leaving 0 remainder, then it should be an even number and stored in the file. We have used print() method to store the even numbers to the file.
println(): This method of PrintWriter class terminates the current line by writing the line separator string.
Here is the code:
import java.io.*; public class WriteEvenNumbersToFile { public static void main(String[] args) throws Exception { PrintWriter pw = new PrintWriter(new FileWriter("C:/evens.txt")); for (int i = 1; i <= 50; i++) { if (i % 2 == 0) { pw.print(i); pw.println(); } } pw.close(); } } | http://www.roseindia.net/tutorial/java/core/files/writeevennumbersToFile.html | CC-MAIN-2014-42 | refinedweb | 187 | 73.37 |
-
Well, we've identified a few things that are merely
'irritations' at this point, but will become serious
trouble for our use of SQLObject in the future.
First problem: The inability of classRegistry/needSet to
import classes that aren't explicitly imported. For
example, if we have three classes "ProductSaved", "Product"
and "ProductType", and we're only dealing with
"ProductSaved" instances in a given class, accessing
"savedProductInstance.product.type" throws an exception (no
attribute _SO_class_ProductType), because ProductType is
not imported by either ProductSaved or Product (just
referenced by name as a foreignKey in Product).
Is the best solution to this just to do an import of all
classes referred to in a class (even if just by name) in
the class file? ie, Product would import ProductType? This
seems to be working for us, but should probably be
explicitly mentioned in the docs if it's the best way.
(Yes, I know such access breaks Law of Demeter or whatever,
but I believe the above example is a fairly reasonable one.
:)
Second problem, and one I don't have a solution for right
now: the interpreter-global uniqueness required of class
names for classRegistry. We don't run many instances of
Webware/Webkit for multiple sites, just run them as
multiple contexts. This makes two sites who have separate
"Product" objects conflict, but it seems to be a very
realistic problem...
Possible high-level solutions I can think of right now:
making class registry be keyed by the class modules'
__file__ attribute or similar (not fully thought through, I
realize this won't work because of needSet's now-ambiguous
names....)
Or having the ability to create specific classRegistrys,
sort of like having multiple object Stores. Haven't fully
thought this through, but my first thought is to simply add
another class-level or module-level attribute called
"_registry". The SQLObject classRegistry/needset would now
become dictionaries of dicationaries/lists, with a default
registry key called perhaps "__global" or some other
unlikely name. So needSet/setNeedSet would check the class
for the _registry variable, and search within the specified
sub-keyed areas only.
Not sure I explained that too well. classRegistry would now
look like (for my multiple Product example):
{
'__global': {},
'site1': {'Product': ...},
'site2': {'Product': ...}
}
based on the two Product classes specifying site1 or site2
as their registry.
Thoughts?
- Luke
Ian,
I thought more about my "higher-level" types and believe what I really
need are two hooks:
* afterLoad() that is called just after the object has been created from
data from the DBMS
* beforeStore() that is called just before storing data in the DBMS
By default, these don't do anything and need to be overloaded if one
wants to use them.
The following example shows how to apply this to my problem:
assume _columns contains
...
FloatCol('_ptX'),
FloatCol('_ptY'),
...
so with
def afterLoad(self):
point = Point(self._ptX, self._ptY)
I could really treat my data as being of type Point...
To make sure that storage is consistent, I'd need
def beforeStore(self):
self._ptX = self.point.x()
self._ptY = self.point.y()
The same approach obviously also solves the phoneNumber type
example...
Note that some of the attributes are probably not for direct use. I
indicated that by preceding their name with an underscore... But
maybe there is a way of having a keyword argument to Col called
'hidden' or similar such that I can only access it from the above
hook routines???
My intuition says that implementing these two hooks may be quite
straight forward or that they already exist. Can you give me some
more insight on this?
many thanks
--b
/-----------------------------------------------------------------
| Bud P. Bruegger, Ph.D.
| Sistema ()
| Via U. Bassi, 54
| 58100 Grosseto, Italy
| +39-0564-411682 (voice and fax)
\-----------------------------------------------------------------
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?viewmonth=200304&viewday=12 | CC-MAIN-2017-34 | refinedweb | 669 | 62.27 |
Hi there! We have reached the final lesson of the series Space and Time — Introduction to Finite-difference solutions of PDEs, the second module of "Practical Numerical Methods with Python".
We have learned about the finite-difference solution for the linear and non-linear convection equations and the diffusion equation. It's time to combine all these into one: Burgers' equation. The wonders of code reuse!
Before you continue, make sure you have completed the previous lessons of this series, it will make your life easier. You should have written your own versions of the codes in separate, clean IPython Notebooks or Python scripts.
You can read about Burgers' Equation on its wikipedia page. Burgers' equation in one spatial dimension looks like this:\begin{equation}\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}\end{equation}
As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!
We can discretize it using the methods we've already detailed in the previous notebooks of this module. Using forward difference for time, backward difference for space and our 2nd-order method for the second derivatives yields:\begin{equation}\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}\end{equation}
As before, once we have an initial condition, the only unknown is $u_i^{n+1}$. We will step in time as follows:\begin{equation}u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)\end{equation}
To examine some interesting properties of Burgers' equation, it is helpful to use different initial and boundary conditions than we've been using for previous steps.
The initial condition for this problem is going to be:\begin{eqnarray} u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\ \phi(t=0) = \phi_0 &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} }
The boundary condition will be:\begin{equation}u(0) = u(2\pi)\end{equation}
This is called a periodic boundary condition. Pay attention! This will cause you a bit of headache if you don't tread carefully.
The initial condition we're using for Burgers' Equation can be a bit of a pain to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ isn't too terribly difficult, but it would be easy to drop a sign or forget a factor of $x$ somewhere, so we're going to use SymPy to help us out.
SymPy is the symbolic math library for Python. It has a lot of the same symbolic math functionality as Mathematica with the added benefit that we can easily translate its results back into our Python calculations (it is also free and open source).
Start by loading the SymPy library, together with our favorite library, NumPy.
import numpy import sympy from matplotlib import pyplot %matplotlib inline from matplotlib import rcParams rcParams['font.family'] = 'serif' rcParams['font.size'] = 16
We're also going to tell SymPy that we want all of its output to be rendered using $\LaTeX$. This will make our Notebook beautiful!
from sympy import init_printing init_printing()
Start by setting up symbolic variables for the three variables in our initial condition. It's important to recognize that once we've defined these symbolic variables, they function differently than "regular" Python variables.
If we type
x into a code block, we'll get an error:
x
--------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-3-401b30e3b8b5> in <module>() ----> 1 x NameError: name 'x' is not defined
x is not defined, so this shouldn't be a surprise. Now, let's set up
x as a symbolic variable:
x = sympy.symbols('x')
Now let's see what happens when we type
x into a code cell:
x
The value of
x is $x$. Sympy is also referred to as a computer algebra system -- normally the value of
5*x will return the product of
5 and whatever value
x is pointing to. But, if we define
x as a symbol, then something else happens:
5*x
This will let us manipulate an equation with unknowns using Python! Let's start by defining symbols for $x$, $\nu$ and $t$ and then type out the full equation for $\phi$. We should get a nicely rendered version of our $\phi$ equation.
x, nu, t = sympy.symbols('x nu t') phi = sympy.exp(-(x-4*t)**2/(4*nu*(t+1))) + \ sympy.exp(-(x-4*t-2*numpy.pi)**2/(4*nu*(t+1))) phi
It's maybe a little small, but that looks right. Now to evaluate our partial derivative $\frac{\partial \phi}{\partial x}$ is a trivial task. To take a derivative with respect to $x$, we can just use:
phiprime = phi.diff(x) phiprime
If you want to see the unrendered version, just use the Python print command. Pythonic version of our derivative, we can finish writing out the full initial condition equation and then translate it into a usable Python expression. For this, we'll use the lambdify function, which takes a SymPy symbolic equation and turns it into a callable function.
from sympy.utilities.lambdify import lambdify_lamb = lambdify((t, x, nu), u) print("The value of u at t=1, x=4, nu=3 is {}.".format(u_lamb(1,4,3)))
The value of u at t=1, x=4, nu=3 is 3.4917066420644494.
###variable declarations nx = 101 nt = 100 dx = 2*numpy.pi/(nx-1) nu = .07 sigma = .1 dt = sigma*dx**2/nu x = numpy.linspace(0, 2*numpy.pi, nx) un = numpy.empty(nx) t = 0
We have a function
u_lamb but we need to create an array
u with our initial conditions.
u_lamb will return the value for any given time $t$, position $x$ and $nu$. We can use a
for-loop to cycle through values of
x to generate the
u array. That code would look something like this:
u = numpy.empty(nx) for i, x0 in enumerate(x): u[i] = u_lamb(t, x0, nu)
But there's a cleaner, more beautiful way to do this -- list comprehension.
We can create a list of all of the appropriate
u values by typing
[u_lamb(t, x0, nu) for x0 in x]
You can see that the syntax is similar to the
for-loop, but it only takes one line. Using a list comprehension will create... a list. This is different from an array, but converting a list to an array is trivial using
numpy.asarray().
With the list comprehension in place, the three lines of code above become one:
u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x])
u = numpy.asarray([u_lamb. ])
Now that we have the initial conditions set up, we can plot it to see what $\phi(x,0)$ looks like:
pyplot.figure(figsize=(8,5), dpi=100) pyplot.plot(x,u, color='#003366', ls='--', lw=3) pyplot.xlim([0,2*numpy.pi]) pyplot.ylim([0,10]);
This is definitely not the hat function we've been dealing with until now. We call it a "saw-tooth function". Let's proceed forward and see what happens.
We will implement Burgers' equation with periodic boundary conditions. If you experiment with the linear and non-linear convection notebooks and make the simulation run longer (by increasing
nt) you will notice that the wave will keep moving to the right until it no longer even shows up in the plot.
With periodic boundary conditions, when a point gets to the right-hand side of the frame, it wraps around back to the front of the frame.
Recall the discretization that we worked out at the beginning of this notebook:\begin{equation}u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)\end{equation}
What does $u_{i+1}^n$ mean when $i$ is already at the end of the frame?
Think about this for a minute before proceeding.t*dt, xi, nu) for xi in x])
pyplot.figure(figsize=(8,5), dpi=100) pyplot.plot(x,u, color='#003366', ls='--', lw=3, label='Computational') pyplot.plot(x, u_analytical, label='Analytical') pyplot.xlim([0,2*numpy.pi]) pyplot.ylim([0,10]) pyplot.legend();
from matplotlib import animation from IPython.display import HTML
u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x])
fig = pyplot.figure(figsize=(8,6)) ax = pyplot.axes(xlim=(0,2*numpy.pi), ylim=(0,10)) line = ax.plot([], [], color='#003366', ls='--', lw=3)[0] line2 = ax.plot([], [], 'k-', lw=2)[0] ax.legend(['Computed','Analytical']) def burgers(n):*dt, xi, nu) for xi in x]) line.set_data(x,u) line2.set_data(x, u_analytical) anim = animation.FuncAnimation(fig, burgers, frames=nt, interval=100)
HTML(anim.to_html5_video())
Coding up discretization schemes using array operations can be a bit of a pain. It requires much more mental effort on the front-end than using two nested
for loops. So why do we do it? Because it's fast. Very, very fast.
Here's what the Burgers code looks like using two nested
for loops. It's easier to write out, plus we only have to add one "special" condition to implement the periodic boundaries.
At the top of the cell, you'll see the decorator
%%timeit.
This is called a "cell magic". It runs the cell several times and returns the average execution time for the contained code.
Let's see how long the nested
for loops take to finish.
%%timeit u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x]) for n in range(nt): un = u.copy() for i in range(nx-1): u[i] = un[i] - un[i] * dt/dx *(un[i] - un[i-1]) + nu*dt/dx**2*\ (un[i+1]-2*un[i]+un[i-1]) u[-1] = un[-1] - un[-1] * dt/dx * (un[-1] - un[-2]) + nu*dt/dx**2*\ (un[0]- 2*un[-1] + un[-2])
100 loops, best of 3: 15.6 ms per loop
Less than 50 milliseconds. Not bad, really.
Now let's look at the array operations code cell. Notice that we haven't changed anything, except we've added the
%%timeit magic and we're also resetting the array
u to its initial conditions.
This takes longer to code and we have to add two special conditions to take care of the periodic boundaries. Was it worth it?
%%timeit u = numpy.asarray([u_lamb(t, x0, nu) for x0 in x])])
1000 loops, best of 3: 1.65 ms per loop
Yes, it is absolutely worth it. That's a nine-fold speed increase. For this exercise, you probably won't miss the extra 40 milliseconds if you use the nested
for loops, but what about a simulation that has to run through millions and millions of iterations? Then that little extra effort at the beginning will definitely pay off.
from IPython.core.display import HTML css_file = '../../styles/numericalmoocstyle.css' HTML(open(css_file, "r").read()) | http://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_04_1DBurgers.ipynb | CC-MAIN-2017-26 | refinedweb | 1,901 | 65.62 |
David wrote: > <div class="moz-text-flowed" style="font-family: -moz-fixed">Hello > Wesley, > > thanks for your reply. I was surprised about the limited information > too. Sadly (?), I can't reproduce the error any more... > > David > > > > On 10/02/10 11:13, wesley chun wrote: >>> I just wrote this message, but after restarting ipython all worked >>> fine. >>> How is it to be explained that I first had a namespace error which, >>> after a >>> restart (and not merely a new "run Sande_celsius-main.py"), went >>> away? I >>> mean, surely the namespace should not be impacted by ipython at all!? >>> : >>> # file: Sande_celsius-main.py >>> from Sande_my_module import c_to_f >>> celsius = float(raw_input("Enter a temperature in Celsius: ")) >>> fahrenheit = c_to_f(celsius) >>> print "That's ", fahrenheit, " degrees Fahrenheit" >>> >>> # this is the file Sande_my_module.py >>> # we're going to use it in another program >>> def c_to_f(celsius): >>> fahrenheit = celsius * 9.0 / 5 + 32 >>> return fahrenheit >>> >>> When I run Sande_celsius-main.py, I get the following error: >>> >>> NameError: global name 'celsius' is not defined >>> WARNING: Failure executing file:<Sande_celsius-main.py> >> >> >> > Your response to Wesley should have been here, instead of at the top. Please don't top-post on this forum. I don't use iPython, so this is just a guess. But perhaps the problem is that once you've imported the code, then change it, it's trying to run the old code instead of the changed code. Try an experiment in your environment. Deliberately add an error to Sande_celsius-main.py, and run it. Then correct it, and run it again, to see if it notices the fix. The changes I'd try in this experiment are to first change the name on the celsius= line to celsius2= and after running and getting the error, change the following line to call celsius2(). If it gets an error, notice what symbol it complains about. HTH DaveA | https://mail.python.org/pipermail/tutor/2010-February/074337.html | CC-MAIN-2014-15 | refinedweb | 312 | 66.64 |
Text::FastTemplate - Class that compiles text templates into subroutines.
# It can be as simple as ... Text::FastTemplate->new( file => 'template') ->print( \%data); # or as complex as ... Text::FastTemplate->defaults( path => [ '/apps/sales/heads_n_feet', '/apps/sales/content' ], ); Text::FastTemplate->preload( [ { file => 'template.txt', key => 'the_page' } ]); $report= Text::FastTemplate->new( key => 'the_page'); $output= $report->output( \%data ); print $output;
These features were added for the most recent releases.
*
For preload(), the programmer now can use an array to pass the list of templates to be loaded; an ARRAY-REF is no longer required and is deprecated.
*
Defaults for more than one group can be created with a single call to the defaults() class method, versus a call for each group as was previously required.
*
In addition to the template list, the preload() class method accepts a list of common attributes for each of the specified templates.
*
Templates which have been modified since being loaded can now be dynamically reloaded via the 'reload' constructor attribute.
*
The programmer can organize template objects and defaults into groups via the 'group' and 'key' constructor attributes.
Text::FastTemplate compiles templates that are written in a line-oriented syntax that resembles the C-preprocessor syntax into Perl subroutines. As much as possible, it is designed to be:
the API and the template syntax are very simple.
the generation of the template output is very fast, as fast as perl can print anyway.
the application and the presentation are completely separated.
As a template processor, its core purpose is to provide macro-substition into a text template that is provided by the user. However, simple macro substitution hardly comprises a useful template processor.
In order to be truly useful, Text::FastTemplate implements two simple flow-control mechanisms:
loops or repetitive text: #for / #endfor conditions or optional text: #if / #elsif / #else / #endif
and a mechanism for including additional templates
external template inclusion: #include
In the end, Text::FastTemplate provides simple yet powerful interface between the application and the presentation layer that provides both the programmer and the presentation designer excellent control over their respective components.
One of my common applications of this module is to derive a Page class from it. The Page class overrides the output() method with a method that adds some macroes to every message passed to the object, such as a DATE or USER_ID string. The Page class can also have an import() function that set defaults and preloads templates.
This POD has been written as a quick reference to help the programmer start using Text::FastTemplate quickly. Comprehensive documentation with examples and references will be available at the Text::FastTemplate web site.
To start using Text::FastTemplate immediately, one needs only two methods, new() and output().
As the constructor, new() expects a hash and returns a Text::FastTemplate object. Several attributes provide the programmer with useful parameters for organizing templates. However, each call to new() requires only that either the 'file' or the 'key' attribute be provided.
+ file the name of the file that contains the template + key a unique name given by the user, used for caching + path the search path to find the template file + group the template's group, used for defaults and caching + reload specifies whether templates should be reloaded + debug debug flag, increase execution verbosity
This is the name of the template to be loaded. It can be an absolute or relative pathname. If a relative pathname is used then the directories specified in 'path' are searched. If 'path' is not specified then the current working directory for the process is searched.
This is a name that the programmer associates with Text::FastTemplate object. If groups are being used then the key needs to be unique within the template's assigned group. It was to be used with the preload() class method.
This is a list of directories to be searched when a file is specified with a relative pathname. The list of paths is passed in an ARRAY-REF.
This designates the group for which to use defaults and object cache. Each group has its own object cache which enables the caller to use the same KEY in different groups.
By setting this to true, Text::FastTemplate will reload all of the template files that are used by this template that have been modified. It checks the mtime of the files. So all that is necessary to force a reload is to 'touch' the file that should be reloaded.
setting this to true prints some debugging information. This is only partially useful.
This class method simply loads a list of templates by creating a Text::FastTemplate object [ new() ] for each template in the list that is passed to it. This purpose of this method is to bypass the latency associated with reading and compiling a template the first time. This method can be called in several ways. But the most basic call requires an ARRAY-REF to a list of HASH-REFs. These hashes are passed to new() iteratively.
e.g., Text::FastTemplate->preload( [ { file => 'file1.txt', key => 'file1' }, { file => 'file2.txt', key => 'file2' }, { file => 'file3.txt', key => 'file3' }, ]);
preload() and new() differ significantl in that both the 'file' and 'key' attributes are required by the preload() constructor. That is because it uses each template's 'key' as its index when it caches them.
This is used to set default values for constructor attributes, 'path', 'reload' and 'debug'. These defaults values are used whenever a new object is instantiated. It is useful during a preload phase of a program. The For example,
Text::FastTemplate->defaults( path => [ '/apps/sales/pages' ]); Text::FastTemplate->preload( [ { file => 'page1.txt', key => 'page1' }, { file => 'page2.txt', key => 'page2' }, ]);
If the 'group' attribute is sent to defaults() then these default values are assigned to the specified group so that, whenever that group is used in a succeeding constructor, the defaults that are assigned to that group are used to instantiate that object. For example,
Text::FastTemplate->defaults( group => 'example', path => [ '/apps/mfg/pages', ], reload => 1 ); Text::FastTemplate->new( key => 'a', file => 'a.tpl', group => 'example'); Text::FastTemplate->new( key => 'b', file => 'b.tpl', group => 'example'); Text::FastTemplate->new( key => 'c', file => 'c.tpl', group => 'example');
These calls assign the example 'path' and 'reload' values to the 'example' group. During the calls to new() in which the 'example' group is specified, those default values are used to instantiate the template objects.
The 'group' attribute is not necessary since a default group, '_default', will be automatically assigned to the object without the caller's knowledge.
These are identical:
print $template->output( \%data); $template->print( \%data);
These methods take the data provided by the user in a hash and plug it into the compiled template. output() returns a scalar that contains the resultant text.
This method actually sends the the output of output() to STDOUT. This method might or might not be deleted in a future version unless a lot of people use it. It might be better to save this for a derived class which can send the output to a customized file-handle.
The syntax of Text::FastTemplate is simple. Those who are familiar with the C-prepocessor will recognize the similarity. Here is an example of a template that uses everything Text::FastTemplate offers.
This is a ##A_SIMPLE_MACRO## #include 'another_file.txt' #if ##A_FACT## It is true. See? ##A_FACT## #elsif ! ##A_FACT## It is false. See? ##A_FACT## #else What is it then? #endif #for ##A_FOR_MACRO## ##A_FOR_MACRO_LOOP_ID## : survey says, "##SOME_TEXT##" #endfor
Templates are processed in two ways.
Macro substition is performed anywhere in the text of the template. They are case-sensitive.
Statements are line-oriented. That means that a statement must exist on any lines by itself; a statement cannot be embedded in the actual content.
However, statements can be continued on separate lines by using the backslash. Also, the statement doesn't need to start at the left margin.
A statement is comprised of a keyword and a macro argument, if one is required. Whereas the macroes are case-sensitive in all contexts, keywords are case-insensitive. The exception to this are #if and #elsif which accept any perl expressions that is accepted in perl's if and elsif statements.
Text::FastTemplate offers the following keywords:
+ #include + #if, #elsif, #else and #endif + #for and #endfor
They are described in detail below.
Very simply, Text::FastTemplate identifies a macro as a word bounded by double-hashes, '##'. Macroes are case-sensitive. The regular expression looks like this,
$macro =~ m/##\w+?##/
If a template uses a macro named 'A_SIMPLE_MACRO' then it will refer to that macro in its text as '##A_SIMPLE_MACRO##'. In the program that uses this template, this macro will be referred to in a hash by its real name, 'A_SIMPLE_MACRO'.
Here is an example.
This is a ##A_SIMPLE_MACRO##
If we now assign some text, "lousy example", to the macro 'A_SIMPLE_MACRO' in our data structure that we pass to this template then the output will look like this.
This is a lousy example
[ The ability to specify a delimiter different than '##' might be provided in a future version. ]
Other templates can be included by a template.
#include 'filename' | "filename" | filename
The name of the file can be a relative or absolute pathname, just as with the 'file' constructor attribute. If a relative pathname is used then the same rules apply as during object instanatation. The 'path' of the including object, if provided, is searched; otherwise, the current working directory of the script is searched.
The filename can be enclosed in single quotes, double quotes or not at all. All are legal. Currently, a macro cannot be used; this feature is in the queue.
Beware of #include loops!!! This will cause infinite recursion in your program. Currently, Text::FastTemplate does not check for infinite recursion; this, too, is in the queue.
The condition statements should be obvious. They correspond directly with the Perl statements. The #if and #elsif statements require arguments. They don't need to be macroes; but they are not otherwise very useful.
#if ##A_FACT## It is true. See? ##A_FACT## #elsif ! ##A_FACT## It is false. See? ##A_FACT## #else What is it then? #endif
The condition statements are the exception to the separation of the presentation and the application.
That is because the argument given in a condition statement can be a full-fledged Perl expression. The only difference is that only scalars are available and only via the macro syntax. You might use something like this:
#if ##PAGE## eq 'home' highlight the home tab #elsif ##PAGE## eq 'search' highlight the search tab #endif
or
#if ##GROUP_ID## =~ /^dba-.*/ you are a group member #endif
This is really not a design flaw; it was just easier to implement it this way.
The repitition or looping construct used by Text::FastTemplate is the #for / #endfor statement. This is not part of the C-preprocessor syntax; but the resemblance is still there.
#for ##A_FOR_MACRO## some text; survey says, "##SOME_TEXT##" #endfor
There is one special feature about the #for loop that needs to be mentioned. Every #for loop has a special macro that corresponds to the number of times that the loop has iterated. This is called the LOOP_ID. To access the LOOP_ID inside of the #for loop, simply concatenate the #for macro and '_LOOP_ID'. For example,
#for ##A_FOR_MACRO##
uses a macro named 'A_FOR_MACRO'. Concatentate this with '_LOOP_ID' and the result is
A_FOR_MACRO_LOOP_ID
This special macro can be used to access to number of iterations of the #for loop. For example, if the following loop iterates three times
#for ##A_FOR_MACRO## iteration ##A_FOR_MACRO_LOOP_ID## #endfor
then the result will be
iteration 1 iteration 2 iteration 3
Clear?
A data structure that might be used for the above examples might look like this. It is a hash-ref and it is the basic data structure used by Text::FastTemplate. The keys of this hash-ref correspond with the template's macroes. The values for the hash-ref are scalars except for the #for macro.
The #for macro uses a array-ref that contains a list of these hash-refs.
{ A SIMPLE_MACRO => 'fact', A_FACT => 1, A_FOR_MACRO => [ # a #for loop { SOME_TEXT => "Iteration #1"}, { SOME_TEXT => "Iteration #2"} ] }
These bugs were deemed to be acceptable for release since they are currently in production on several web sites. They have been targeted for elimination.
Robert Lehr, bozzio@the-lehrs.com
I certainly would appreciate any feedback from people that use it, including complaints, suggestions or patches. Even people that don't use it are welcome to send comments.
Copyright (c) 2001 Robert Lehr. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Caveat emptor. Use this module at your own risk. I will accept no responsibility any loss of any kind that is the direct or indirect result of the use of this module.
Perl modules: Cwd, Carp
perl(1), perlref(1). | http://search.cpan.org/~bozzio/Text-FastTemplate-0.95/FastTemplate.pod | CC-MAIN-2016-36 | refinedweb | 2,144 | 65.32 |
[ ]
angela commented on JCR-616:
----------------------------
> 1) Clarify that the Map returned from getRegisteredNamespaces() isn't required to
> be complete,
i'm not totally happy with this. getRegisteredNamespaces should return the complete list
available at the time.
but: as listed in TODO.txt issue 11), we didn't define up to now, how a client gets informed
or informes itself about namespace modifications.
> 2) Enhance JCR2SPI to auto-generate prefixes when it encounters namespaces
> not in the registry.
hm... why shoud the jcr2spi autogenerate prefixes? it must assert, that a given namespace
is part of the namespace registry and throw if it isn't. so, we get back to the basic issue,
how we define the update of the NamespaceRegistry (defined by jsr170) over the SPI.
btw: the same applies for the nodetypes.
regards
angela
> Completeness/Freshness of Namespace Registry
> --------------------------------------------
>
> Key: JCR-616
> URL:
> Project: Jackrabbit
> Issue Type: Bug
> Components: SPI
> Reporter: Julian Reschke
>
> We need to define the requirements on completeness and freshness of RepositoryService: | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200611.mbox/%3C17233377.1162559241507.JavaMail.root@brutus%3E | CC-MAIN-2015-14 | refinedweb | 165 | 55.64 |
Screenshots
Description
This is a VirtualBox Image that builds VirtualBox.
The virtual machine ist based on Ubuntu 12.04 LTS and uses jenkins to build VirtualBox.
Currently version 0.8 is released.
Your feedback is welcome.
Username: jenkins
Password: jenkins
Still I need to update the wiki and show how to
import the virtual machine and so on.
Any suggestions for wiki entries are welcome.
The name “VirtualBox” and the VirtualBox™ logo are registered trademarks of Oracle Corporation, and are used in this site for reference purposes only. No infringing behaviour is intended.
Build VirtualBox VM Web Site
Categories
License
User Reviews
Be the first to post a review of Build VirtualBox VM! | https://sourceforge.net/projects/buildvirtualboxvm/?source=directory | CC-MAIN-2016-44 | refinedweb | 113 | 52.76 |
Testing your application
Test source files must be placed in your application’s
test folder. You can run them from the Play console using the
test and
test-only tasks.
Using specs2
The default way to test a Play 2 application is by using specs2.
Unit specifications extend the
org.specs2.mutable.Specification trait and are using the should/in format:
import org.specs2.mutable._ import play.api.test._ import play.api.test.Helpers._ class HelloWorldSpec extends Specification { "The 'Hello world' string" should { "contain 11 characters" in { "Hello world" must have size(11) } "start with 'Hello'" in { "Hello world" must startWith("Hello") } "end with 'world'" in { "Hello world" must endWith("world") } } } | http://www.playframework.com/documentation/2.0.2/ScalaTest | CC-MAIN-2013-20 | refinedweb | 114 | 59.4 |
You can subscribe to this list here.
Showing
3
results of 3
hello to the list.
i am new to MW and SMW, but have worked with wordpress, which also
uses php+mysql, so the general procedure of installing was familiar.
BUT:
i just tried to install the SMW extension on my freshly installed MedaWiki.
i made it through steps 1-3 in the INSTALL
(),
but since then my MW does not load because of the following error:
*****
Fatal error: Call to undefined function enableSemantics() in
/var/www/vhosts/graffics.de/httpdocs/LocalSettings.php on line 124
*****
i double-checked the two modifications of the files and if the SMW
files are in the right place, but the error remains.
does anyone has a quick idea about it?
i have also some more question regarding the project i want to use SMW
for. it is intented to be a semantical image gallery (i know that now,
after reading something about your project. when we started to plan
the gallery we quickly came to this solution, but had no name for it.)
are they existing extensions, tutorials, hints or other things i
should get informed about regarding the massive use of images in a
wiki? (or say: a wiki which is all about images, not text)
thanks for now,
max
On February 11 2007, Yaron Koren wrote:
> There was an email thread around the first week of January on the
> subject of declaring "enum" types, meaning types that should have one of
> a fixed set of values. There were two ways noted to declare such a type:
>
> 1) In the field's "attribute" page, have a tag reading:
>
> [[possible values:=value1,value2,value3]]
That's what I checked in to Subversion and it's now running on
ontoworld.org, see and, but it's not guaranteed to
be in released SMW 0.7.
Pro:
simple
Con:
you hit the string size limit really quickly
harder to control numeric values
values can't have commas in them without an escape syntax
> 2) In the field's "type" page, have tags reading:
>
> [[Enum mapping:=value1=1]]
> [[Enum mapping:=value1=2]]
> [[Enum mapping:=value1=3]]
That's what Ittay Dror implemented, and sent in the patch to SMW-devel.
> Which of these is recommended/supported? Or is it both? (In theory,
> there's no conflict between the two.)
I still mean to install his patch locally and compare.
> Tied in to this is the more general question of whether descriptive
> information like this should be stored in "attribute" or "type" pages. I
> suspect this question will get more important as SMW gains more
> structured features, like, say, OWL exporting, and forms for editing
> data (I'm working on something involving the second one).
>
> I can think of good arguments either way:
> - if you put the data in "attribute" pages, you don't have to create a
> custom type for each field that you want special handling for.
> - if you put the data in "type" pages, you can have special handling for
> relations, not just attributes (if you ever needed such a thing).
I don't understand. Relations don't have types.
(SMW currently has almost no error handling for special properties like
[[has type]] , so you can slap them on any page and they'll show up in
the factbox even though only certain namespaces make use of them. Seems
like a bug, though it makes playing around easy :-) )
> Also,
> you could have different attributes for the same type, with all the same
> properties (if don't know if you'd ever need that either).
Yes, you could have a Type:Severity and then use it for
Attribute:Bug_severity, Attribute:Torture_level,
Attribute:Scale_of_natural_disaster, etc. That might be useful but why
not just separate attributes?
Some other things to consider:
* An attribute can "refine" handling of a type. Currently floats can
specify the units to display, and by overloading display units, booleans
and dates can control display format.
* You can't currently query against the underlying type.
* One type can't extend another -- [[Has Type::Type:Foo]] doesn't work
for types.
> Does anyone have any thoughts on this?
--
=S Page
- This mail is in HTML. Some elements may be ommited in plain text. -
Tel: +46 (0)8 625 46 40
ULTRA-TYST LUFTRENARE F=D6R KONTOR OCH HEM
Lider du av allergiska besv=E4r p=E5 v=E5ren?
Besv=E4ras du av illaluktande =E4mnen p=E5 din arbetsplats eller i hem=
met? En luftrenare kan g=F6ra underverk p=E5 din h=E4lsa! Vi rekommend=
erar pollenallergiska personer att st=E4lla ett luftfilter vid anslutn=
ing till arbetsplatsen och i sovrummet
Lukt- och dammsensorer k=E4nner automatiskt av luftmilj=F6n
Renar luften fr=E5n pollen, damm och partiklar
Anti-bakteriellt filter minskar infektionsrisken i gemensamma utrymmen
I princip ljudl=F6s - perfekt f=F6r sovrummet eller andra ljudk=E4nsli=
ga milj=F6er
Elegant design och l=E5g vikt (endast 7 kg)
Mycket l=E4ttanv=E4nd med fj=E4rrkontroll
Tv=E4ttbart filter
2 =E5rs fabriksgaranti
>>
Best=E4ll eller l=E4s mer genom att klicka h=E4r!
Tips:
Vill du k=F6pa Blu-ray eller =E4kta HDTV?
E-posta
vartval@...
s=E5 hj=E4lper vi dig med leverans. Avbetalning och leasing OK!
Om du inte vill ha fler erbjudanden fr=E5n ITWorks, skicka ett e-brev =
till
removeme@...
med =E4rende "remove"
If you do not want to recieve any more e-mails from ITWorks sales, ple=
ase send a message to
removeme@...
with subject "remove" | http://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=200702&viewday=27 | CC-MAIN-2014-23 | refinedweb | 918 | 61.46 |
So far we have the basic functionality of a screensaver but it doesn’t look or behave like one. To make it look like one we have to resize the form to fill the screen and remove its border in the ShowScreenSaver function:
this.FormBorderStyle = FormBorderStyle.None;this.TopLevel = true;this.WindowState = FormWindowState.Maximized;this.Capture = true;Cursor.Hide();
To make the screensaver close when the mouse moves or if its button is clicked we need a simple event handler:
private void OnMouseEvent( object sender, System.Windows.Forms.MouseEventArgs e){ if (!MousePos.IsEmpty) { if (MousePos != new Point(e.X, e.Y)) this.Close(); if (e.Clicks > 0) this.Close(); } MousePos = new Point(e.X, e.Y);}
and another global variable to store the mouse position:
private Point MousePos;
This one-event handler can deal with MouseMove and MouseDown events. View the form in design mode and use the properties window to set both events to the OnMouseEvent function.
The screensaver can also be run in /p – preview or /c – configuration mode. The configuration mode is fairly easy to implement and it’s described in the documentation. The /p mode is much more difficult and not described anywhere so this is the puzzle to solve next!
The screensaver in miniature easy way of making a window a child of another window using the .NET framework and once again we have to resort to using the Windows API.
First we need to change the switch to deal with the hwnd passed on the command line case "/p":
//previewif (m_args.Length > 2){ this.TopLevel = false; m_hwnd = (IntPtr)Int32.Parse(m_args[2]); ShowScreenSaver();}break;
and we need to add the global variable:
private IntPtr m_hwnd;
If m_hwnd has a value then we need to display using the preview screen and the ShowScreenSaver function has to be modified to initialise the form either for full screen or preview display:
void ShowScreenSaver(){ if (this.m_hwnd == IntPtr.Zero) { All of the code already in ShowScreenSaver }
The else part of the if handles the preview and uses a number of API calls, definitions of which are given later. We need to make the form’s window a child window of the preview window. To do this we need to retrieve and modify the form’s window style to make it a child window:
else{ int style = GetWindowLong( this.Handle, GWL_STYLE); style = style | WS_CHILD; SetWindowLong(this.Handle, GWL_STYLE, style);
Next we);
Now everything is set up for the form to display within the preview window except for the fact that it’s the wrong size and still has a border. To resize it we need to know the size of the preview window:
this.FormBorderStyle = FormBorderStyle.None;RECT r=new RECT();GetClientRect(m_hwnd, out r);
To display the window a final API call is used do the resizing and place the form’s window in front of the preview window:
SetWindowPos(this.Handle, HWND_TOP, 0, 0, r.Right, r.Bottom, SWP_NOZORDER | SWP_NOACTIVATE | SWP_SHOWWINDOW);}
For all of this to work we need to add:
using System.Runtime.InteropServices;
at the start of the file and add the following declarations:Ptr hWnd, int nIndex);
const int GWL_STYLE = (-16);const int WS_CHILD = 0x40000000;
[DllImport("user32.dll")]static extern int SetWindowLong( IntPtr hWnd, int nIndex, int dwNewLong);[DllImport("user32.dll")]static extern IntPtr SetParent( IntPtr hWndChild, IntPtr hWndNewParent);
const int GWL_HWNDPARENT = (-8);
[DllImport("user32.dll")]static extern bool SetWindowPos( IntPtr hWnd,.
If you would like the code for this project then register and click on CodeBin.
<ASIN:1890774464>
<ASIN:047019135X>
<ASIN:0596514824>
<ASIN:1590599551>. If you have C# Express (or VB Express) you will discover that there is a “Screen Saver Starter Kit” project type. This creates a screensaver for you without any effort and is worth studying but, like many Microsoft supplied examples, it’s too sophisticated and does slightly too much to see the basic principles. It also shows you how to create a “traditional” screensaver rather than giving you the information you need to do something creative and it does not explain how to implement a screensaver preview. This month’s project aims to correct these omissions.
Getting started saving screens
It’s always helpful to see what the simplest example of a project type so that you can really see how things work. Although screensavers seem complicated they are just standard .EXE files but with their file names changed to end in .SCR. They also live in either the Windows/System or Windows/System32 directory. When you use the desktop properties dialog box to choose a screensaver all that happens is that Windows scans the relevant directories for all files ending in .SCR and lists them as potential screensavers.
***PROP.TIF
Select your screensaver
If you select a screensaver window.
/p Show a preview of your screensaver
(None) Show the configuration dialog box with no parent window.
The good news is that you really only need to support the /s option to create the simplest possible screensaver.
Start a new();
case "/p":
//preview
}
else
}:
using System.Windows.Forms;
to the start of the file continuing the main function.
Now just testing the screensaver a quicker way is to right click on the ScreenSaver1.scr file and select Install from the drop-down menu. This starts the Windows Screen Saver Dialog Box which you can use to select your screensaver. This isn’t a permanent installing.
A form based screensaver problem with using a Windows form project is that it is usually thought that there is no easy way to get the command line parameters to the form. In fact it’s fairly easy. If you open Program.cs you will see the form application’s main function and you can change this so that it receives any command line parameters:
The form is created using the command:
Application.Run(new Form1());
The problem is we can’t pass the form’s constructor the command line parameters in args. The solution is to simply define a new constructor that does accept the args array. Change the line that runs Form1 to:
Application.Run(new Form1(args));
Now we have to write the new constructor. Add to Form1.cs, following the default constructor Form1():
public Form1(string[] args)
InitializeComponent();
m_args = args;
this.TopLevel = false;
and add the global variable:
private string[] m_args;
Now any method in Form1 can access m_args to discover what the command line parameters > 0)
string arg = m_args[0].ToLower().
case "/c":
else
And the ShowScreenSaver function is just:
void ShowScreenSaver()
this.TopLevel = true;
Bouncing text
We really need some graphics to show the sort of thing a screensaver can do. As an example we are going to bounce a message round the form. The simplest way of creating animation is to use a timer – so place a timer control on the form and change the ShowScreenSaver to:
this.DoubleBuffered = true;
this.timer1.Interval = 20;
this.timer1.Enabled = true;
This sets the timer to 20 milliseconds which produces an update every 1/50th of a second. Setting DoubleBuffered to true makes the animation smooth. The Timer event handler does all the work. First it gets the graphics object associated with the form so that it can use it to draw:
private void timer1_Tick(
Graphics g = this.CreateGraphics();
As will become clear, you can’t assume that the screensaver is going to have a fixed sized form to draw on and everything should be done in terms of a fraction of the available space. To do this we first need the size of the display area:
RectangleF bounds =this.ClientRectangle;
Next we can set the message to display and a font size that is 1/20th of the height of the display area:
string displayM =
"Buy Computer Shopper";
int fontsize =(int) bounds.Height / 20;
We need to set a font to use and any font from the Arial family will do:
FontFamily fontFamily =
new FontFamily("Arial");
Font font = new Font(
fontFamily,
fontsize,
FontStyle.Regular,
GraphicsUnit.Pixel);
To blank the message out at its current position we can use the TextRenderer object and its MeasureText method. This is new in .NET 2.0 and returns a size struct giving the height and width of the rectangle that the text fits into:
Size psize=new Size(int.MaxValue,
int.MaxValue);
Size size = TextRenderer.
MeasureText(g, displayM,
font, psize,
TextFormatFlags.SingleLine);
Now that we have the size of the text the FillRectangle method can be used to draw a rectangle of just the correct size in the current form’s background colour:
SolidBrush b = new SolidBrush(this.BackColor);
g.FillRectangle(b, textpos.X,
textpos.Y,
size.Width,
size.Height);
To move the text its position is updated and if it is just about to go off the edge of the form we reverse its direction of motion, i.e. bounce it:
textpos.X +=v.X;
textpos.Y +=v.Y;
if (textpos.X+size.Width >= bounds.Width
|| textpos.X<=0) v.X = -v.X;
if (textpos.Y +size.Height>= bounds.Height
|| textpos.Y < 0) v.Y = -v.Y;
Finally the text is drawn at its new position on the screen:
TextRenderer.DrawText(g, displayM,
font, textpos,
Color.Red);
g.Dispose();
We also need some additional global variables:
private Point textpos =
new Point(0, 0);
private Point v =
new Point(1, 1);
If you now try the screensaver out you should see the message bounce its way around the form and if you resize the form the text will automatically resize and bounce at the new edge.
More like a screensaver
this.FormBorderStyle =
FormBorderStyle.None;
this.TopLevel = true;
this.WindowState =
FormWindowState.Maximized;
this.Capture = true;
Cursor.Hide();
private void OnMouseEvent(
object sender,
System.Windows.Forms.MouseEventArgs e)
if (!MousePos.IsEmpty)
if (MousePos != new Point(e.X, e.Y))
this.Close();
if (e.Clicks > 0)
MousePos = new Point(e.X, e.Y);
***PROP2.TIF Setting the event handler.
Now when you run the program it should fill the screen and close when the mouse moves or is clicked. You can improve the way that it works by adding code to close only if the mouse moves more than a set amount and you can trap keyboard events to close it.
Preview
The screensaver can also be run in /p – preview or /c – configuration mode. The configuration mode is fairly easy to implement and it’s described in the starter kit. The /p mode is much more difficult and not described anywhere so this is the puzzle to solve next! way of making a window a child of another window using the .NET framework and once again we have to resort to using the Windows API. First we need to change the switch to deal with the hwnd passed on the command line:
case "/p":
//preview
if (m_args.Length > 1)
m_hwnd = (IntPtr)Int32.
Parse(m_args[1]);
if (this.m_hwnd == IntPtr.Zero)
All of the code
already in ShowScreenSaver
else
int style = GetWindowLong(
this.Handle, GWL_STYLE);
style = style | WS_CHILD;
SetWindowLong(this.Handle,
GWL_STYLE, style);
Next we set);
RECT r=new RECT();
GetClientRect(m_hwnd, out r);
SetWindowPos(this.Handle,
HWND_TOP, 0, 0,
r.Right, r.Bottom,
SWP_NOZORDER |
SWP_NOACTIVATE | SWP_SHOWWINDOW); nIndex);
const int GWL_STYLE = (-16);
const int WS_CHILD = 0x40000000;
static extern int SetWindowLong(
int nIndex,
int dwNewLong);
static extern IntPtr SetParent(
IntPtr hWndChild,
IntPtr hWndNewParent);
static extern bool SetWindowPos(.
**PREVIEW.TIF The screensaver in miniature | http://www.i-programmer.info/projects/38-windows/350-useful-screensavers.html?start=3 | CC-MAIN-2014-52 | refinedweb | 1,886 | 65.22 |
Source code organization (C++ Templates)
When defining a class template, you must organize the source code in such a way that the member definitions are visible to the compiler when it needs them. You have the choice of using the inclusion model or the explicit instantiation model. In the inclusion model, you include the member definitions in every file that uses a template. This approach is simplest and provides maximum flexibility in terms of what concrete types can be used with your template. Its disadvantage is that it can increase compilation times. The impact can be significant if a project and/or the included files themselves are large. With the explicit instantiation approach, the template itself instantiates concrete classes or class members for specific types. This approach can speed up compilation times, but it limits usage to only those classes that the template implementer has enabled ahead of time. In general, we recommend that you use the inclusion model unless the compilation times become a problem.
Background
Templates are not like ordinary classes in the sense that the compiler does not generate object code for a template or any of its members. There is nothing to generate until the template is instantiated with concrete types. When the compiler encounters a template instantiation such as
MyClass<int> mc; and no class with that signature exists yet, it generates a new class. It also attempts to generate code for any member functions that are used. If those definitions are in a file that is not #included, directly or indirectly, in the .cpp file that is being compiled, the compiler can't see them. From the compiler's point of view, this isn't necessarily an error because the functions may be defined in another translation unit, in which case the linker will find them. If the linker does not find that code, it raises an unresolved external error.
The inclusion model
The simplest and most common way to make template definitions visible throughout a translation unit, is to put the definitions in the header file itself. Any .cpp file that uses the template simply has to #include the header. This is the approach used in the Standard Library.
#ifndef MYARRAY #define MYARRAY #include <iostream> template<typename T, size_t N> class MyArray { T arr[N]; public: // Full definitions: MyArray(){} void Print() { for (const auto v : arr) { std::cout << v << " , "; } } T& operator[](int i) { return arr[i]; } }; #endif
With this approach, the compiler has access to the complete template definition and can instantiate templates on-demand for any type. It is simple and relatively easy to maintain. However, the inclusion model does have a cost in terms of compilation times. This cost can be significant in large programs, especially if the template header itself #includes other headers. Every .cpp file that #includes the header will get its own copy of the function templates and all the definitions. The linker will generally be able to sort things out so that you do not end up with multiple definitions for a function, but it takes time to do this work. In smaller programs that extra compilation time is probably not significant.
The explicit instantiation model
If the inclusion model is not viable for your project, and you know definitively the set of types that will be used to instantiate a template, then you can separate out the template code into an .h and .cpp file, and in the .cpp file explicitly instantiate the templates. This will cause object code to be generated that the compiler will see when it encounters user instantiations.
You create an explicit instantiation by using the keyword template followed by the signature of the entity you want to instantiate. This can be a type or a member. If you explicitly instantiate a type, all members are instantiated.
template MyArray<double, 5>;
//MyArray.h #ifndef MYARRAY #define MYARRAY template<typename T, size_t N> class MyArray { T arr[N]; public: MyArray(); void Print(); T& operator[](int i); }; #endif //MyArray.cpp #include <iostream> #include "MyArray.h" using namespace std; template<typename T, size_t N> MyArray<T,N>::MyArray(){} template<typename T, size_t N> void MyArray<T,N>::Print() { for(const auto v : arr) { cout << v << "'"; } cout << endl; } template MyArray<double, 5>;template MyArray<string, 5>;
In the previous example, the explicit instantiations are at the bottom of the .cpp file. A
MyArray may be used only for double or
String types.
Note
In C++11 the export keyword was deprecated in the context of template definitions. In practical terms this has little impact because most compilers never supported it. | https://docs.microsoft.com/en-us/cpp/cpp/source-code-organization-cpp-templates?view=vs-2019 | CC-MAIN-2020-24 | refinedweb | 764 | 53.31 |
Tableau REST API - Installation
The Tableau REST API is a great resource to help you automate any admin Tasks on your Tableau Server. This post will explain how to use the Tableau REST API on Python 2.7. I will use the awesome library created by the brilliant Bryant Howell. This post will explain how to get everything you need installed. If you already have Python 2.7, the library installed, to see what you can do with the library and the REST API go to Part 2 to see how you can programmatically:
- delete old/useless content
- download all of your workbook and datasource
- automate the creation of users
- affix data to a Tableau extract.
I would recommend you make your way directly to Bryant's blog who covers the library deeper than I. For a step-by-step guide you're in the right place.
Level required: Advanced on Tableau and basic coding skills.
Software required: Tableau Server and Python 2.7
Step 1: Python 2.7
I will assume you have a Tableau Server you can access with a login a password.
The next step consists of installing Python and PyPi. If you already have Python installed, you can go to Step 2 to download and install the library.
Install Python.
Download python 2.7 here.
If you are serious about coding in Python, I would recommend using PyCharm.
If you are not using PyCharm:
Once your Python is installed, make sure you can run it. Open a cmd line and go to C:Python27 then run "python". Make sure you have something that looks like this:
Add python to your environment path to make sure you can call the python cmd from anywhere.
right click on your computer, -> Advance System Settings then add a semicolon and the python path to your environment variable as shown below.
Now make sure you can run the cmd line. It is now shipped in any new python installation. Run "pip" as a command line.
Step 2: Install the REST API Library
Once pip is in working order, type the following command line to install the tableau REST API:
pip install tableau_rest_api pip install tableau_rest_api --upgrade
Let's make sure everything works as it should by creating simple script that is going to use the library.
Open new file, and copy paste the following script. **Update your password & Login on line 11**.
import math import xml.etree.ElementTree as ET # Contains methods used to build and parse XML import requests # Contains methods used to make HTTP requests import sys from tableau_rest_api.tableau_rest_api import * print('Testing... By You') logger = Logger(u"log_file.txt") t = TableauRestApi(u"", u"admin", u"admin") #Connect to different stes t.enable_logging(logger) t.signin()
This script will use some libraries to make HTTP calls on our Tableau server. We'll understand what this code does in part 2 of this tutorial. For the moment let's run it and make sure everything works as it should. If you have several sites, you can specify which site you want to connect to by editing line 11
t = TableauRestApi(u"", u"admin", u"admin", site_content_url="test")
Instead of testing, look at your URL when you are on a specific site on your Tableau Server:
Go back to your tabcmd and run your new script with:
python test.py
If everything worked well, you should get the following line prompted in your cmd line:
And you can read the Xml response from your Server here: C:Python27/log_file.txt.
So what have we done so far? Just logged in to our Tableau Server as a user and sent a simple HTTP request.
WHAAAT?!! All of that to just connect to my server?
Indeed. But now you are ready for the magic to begin in PART 2. You will learn how to manage your Tableau server from only a couple of lines of code. | https://www.axxio.io/tableau-rest-api/ | CC-MAIN-2019-18 | refinedweb | 653 | 81.53 |
How to Create a Program in C Sharp. Instruction below describe both "FOSS oriented" and "Windows oriented" approaches.
Set up (Windows way)
Create your first program.
- Run Visual C# 2005 Express Edition.
- Go to File → New → Project.
- Select Visual C# → Windows → Console Application.
- Press OK.
You should see this:
using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { } } }
- Beneath static void Main(string[] args), after the first curly brace, type:
Console.WriteLine("Hello, World!"); Console.ReadLine();
It should look like this:
using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { Console.WriteLine("Hello, World!"); Console.ReadLine(); } } }
- Click the Run [►] button on the toolbar.
Congratulations! You created your first C# program!
Set up (Free software way)
- You need CVS and GNU build tools. This should be included into majority of Linux distributions.
- Go to DotGNU project () that provides FOSS implementation of C#. Read the chapter about the installation. These instructions are simple to follow ever for beginner.
- You can choose to get the source code and build you C# environment from scratch or you may try precompiled distributions first.).
- Under Linux, you can use KWrite or gedit to edit the C# code - the recent versions of both editors support the syntax highlight for this language.
- Figure out yourself how to compile the short example, given in the "Windows way" section. If the project web site does not provided enough documentation, try web search. If this does not help, post questions to the project mailing lists.
Congratulations, you are both C# - aware and not bound to any single C# provider!
[edit] Tips
- There are more good C# implementations than the two described here. Mono project may be interesting for you as well.
-.
[edit] Recommended Books
- ISBN 0-7645-8955-5: Visual C# 2005 Express Edition Starter Kit - Newbie
- ISBN 0-7645-7847-2: Beginning Visual C# 2005 - Novice
- ISBN 0-7645-7534-1: Professional C# 2005 - Intermediate+ | http://www.wikihow.com/Create-a-Program-in-C-Sharp | crawl-002 | refinedweb | 333 | 60.31 |
SalesforceDX — View from the Coal Face
At Dreamforce 2016 Salesforce announced the Salesforce Developer Experience, aka SalesforceDX, which promises major enhancements to the development lifecycle with Salesforce.
SalesforceDX is a collection of tools to make developing on the Salesforce platform easier, and more in keeping with the way that enterprise software is developed for other platforms.
SalesforceDX Features
New Force.com IDE
The Force.com IDE has had some love recently, with Lightning Components finally supported after being GA for around 2 years! It still lags too far behind the org capabilities, so hopefully SalesforceDX will improve this.
The new IDE will be tightly integrated with the new CLI (see below) and the org that is being developed, so we should be able to access functionality from the Salesforce org direct from the IDE — hopefully things like previewing Visualforce and Lightning App pages, rather than switching between the IDE and the Salesforce UI.
We’re also promised that synchronisation between the IDE and the target org will be a lot more granular than it has been. Deploying to the org will only send up aspects of metadata that have changed, so if you add a field to a custom object, for example, only that field will be sent to the org. The same applies to deploying to the org — only changes from the local filesystem will be pushed up. For those familiar with GIT, this is a similar concept to staging hunks of a file rather than staging the entire file regardless of whether much of it has changed.
Scratch Orgs
This is probably the area that I’m most pleased about. Being able to spin up an org on demand is something we’ve had for a while, via the environment hub for partners or a new developer edition. Where things slow down is enabling features, which involves raising a case with Salesforce support and then receiving a call to repeat exactly the same information that you’ve already put in the case. For our BrightMedia development environment we have to get the Apex character limit increased before we can deploy our codebase which always adds a few hours to the process.
Scratch Orgs are potentially short-lived orgs that can be created with specific features enabled declaratively. Thus we’ll be able to create a fresh org, set up the features as required, install and test our code out of version control and then drop the org. I haven’t seen anything around how we will actually drop the org as yet, but I did hear some statements in Dreamforce sessions that the orgs can disappear at any time, which makes it more important than ever to commit your work. I assume at any time doesn’t mean it will disappear from under you while you are half way through writing a new method!
Scratch orgs can also have namespace, so as I understand it this will allow developers to share the same namespace across multiple orgs which will make developing packages for the app exchange much easier.
Scratch orgs sound great, but I don’t think that I’ll be moving all of my development over to them. Usually my dev orgs have a lot of data that I’ve set up intermittently over time and can’t replicate without some effort, so I’ll be keeping a few of those going. What it will allow me to do, in addition to automated testing etc, is set up new developers with a minimum of fuss an effort.
New, Improved CLI
Anyone who reads my blog knows that I’m a big fan of the existing Force CLI.
In one of my earlier blog posts around Lightning Components deployment ( Deploying Lightning Components - Go Big or Go…bobbuzzard.blogspot.co.uk
SalesforceDX brings a new CLI, combining the Heroku toolbelt with the Force CLI and introducing some new commands. I’ve read a few posts about how the new CLI will allow data to be loaded into an org, but the existing CLI supports that through the record create:bulk, so I guess that shows that some of the people excited about SalesforceDX haven’t been using the existing tools to their full capability.
One thing the new CLI does provide is the capability to create and configure scratch orgs from the command line, allowing this to form part of automated processes such as a continuous integration build, deploy and test.
Another feature I really like is that the new CLI is based on a pluggable architecture, so anyone will be able to add commands and potentially publish them out to the wider community. This means I’ll have the option of extending the CLI with commands specific to my BrightMedia processes, rather than the current situation where I’ve built my own that leverages the Force CLI as and when it needs to. It also runs on Node, which is handy as I’ve just rewritten my BrightMedia CLI tool in JavaScript on Node!
Heroku Flow
I haven’t spent a huge amount of time investigating this, as a number of the features appear to rely on Github integration, whereas we use a different cloud provider for source control.
The key feature that is more generic is pipelines, which should allow continuous delivery workflows to be defined. This is the kind of thing that many of us have been rolling our own solutions to for years, using tools like Jenkins and Bamboo, or building from scratch with command line scripts. I’ll be watching how this progresses with interest!
What this means for developers
SalesforceDX is undoubtedly good news, but it’s not available as yet so for now it’s an aspiration. Salesforce usually take an iterative approach to most new features, so I’d imagine it will take a few releases to have all the features that we feel we can’t live without in place.
I’ve been critical of the developer tooling in Salesforce for quite a while now — having to bounce between tools (some official, some community provided) based on what type of development you are carrying out is something that makes developers from Java or .Net world roll their eyes and doesn’t exactly scream that Salesforce is an enterprise class development platform. We all get that one of the USPs of Salesforce is the fact that you can build simple applications and process automation through declarative tools, but large-scale implementations and app exchange products typically require quite a bit of custom development and at times it feels like the current tooling makes things harder rather than easier.
The most important aspect of this to my mind will be how it is priced. As a general rule developers don’t pay for anything, ever. If the SalesforceDX team can get this through the bean counters as a free offering I’m sure it will see huge take-up—you only have to look at how successful the free Developer Edition has been over the years. A freemium model would probably work quite well, allowing us to start out with the basics for nothing and then paying once we consume a larger number of scratch orgs or set several automated workflows.
If SalesforceDX is pay to play from the word go, I fear it stands a chance of having a low take-up and withering on the vine — after all we’ve been building our own solutions to this for years now and we can continue in this vein. It’s always hard to ask for an investment of time and money when the outcome is that we’ll be in exactly the same place but using someone else’s tools rather than our own.
Tell me more
If you want to know more about SalesforceDX, the best place to start is the official site.
Learn an entirely new way to manage and develop Salesforce apps across the entire lifecycle, enabling new levels of…
If you have questions, tweet them with the hashtag #SalesforceDX — the product team are monitoring this and my experience is that they do respond and in a pretty timely fashion. | https://medium.com/@bob_buzzard/salesforcedx-view-from-the-coal-face-864fee29ddd8 | CC-MAIN-2017-17 | refinedweb | 1,362 | 60.48 |
This notebook contains snippets of code that are useful when working with MATLAB in IPython Notebooks.
Displaying images from MATLAB¶
Passing variables into cells using the
%%matlab magic requires
h5py which is tricky to install. If you only need to pass in simple variables (strings) you can instead pass it in using Python string formatting and
mlab.run_code. Unfortunately, this means we don't benefit from
pymatbridge automatically rendering figures. The class below is just a helper Python class to make it simpler to show the resulting figures from MATLAB commands.
import base64, os os.environ['http_proxy'] = '' class ImageOut(object): def __init__(self, img): with open(img) as f: data = f.read() self.image = base64.b64encode(data) def _repr_png_(self): return self.image
from pymatbridge import Matlab mlab = Matlab() mlab.start() r = mlab.run_code(''' plot([1,2,3]); ''') ImageOut( r['content']['figures'][0] )
Starting MATLAB on visit to shut down same .....MATLAB started and connected!
Pass widget variables into %%matlab magic¶
If you want to use widget controls in IPython notebooks you might find yourself wanting to pass those values into matlab cells using %%matlab magic. This snippet shows how you can use a callback function together with
interact to automatically update global vars, and then pass these into the matlab session using
-i.
Note: you need hdf5 and h5py installed for this to work
%load_ext pymatbridge
Starting MATLAB on visit to shut down same ...MATLAB started and connected!
def widget_callback(**kwargs): for k,v in kwargs.items(): globals()[k] = v
myvar=5
Note: We define the variable first above, and pass the output var in as an default value. This means the widget wont reset if the cell is re-run, but will instead keep the current value. To reset the value just run the cell above.
from IPython.html.widgets import interact from IPython.html import widgets i = interact(widget_callback, myvar=widgets.IntSliderWidget(min=1, max=50, step=1, value=myvar, description="Reference spc:"), )
%%matlab -i myvar myvar
myvar = 13 | https://www.twobitarcade.net/article/snippets-matlab/ | CC-MAIN-2019-09 | refinedweb | 332 | 50.12 |
Exception when loading testrunner module explicitly
Reported by Martin Polden | March 20th, 2011 @ 12:34 PM
As far as I understand 'play test' addes the testrunner module to the classpath automatically.
When I create a run configuration in IntelliJ IDEA for running Play in test-mode, I need to explicitly load the testrunner either by setting the MODULES environment variable or by setting %test.module.testrunner=${play.path}/modules/testrunner in application.conf. Adding just -Dplay.id=test to the VM parameters does not work and does not load the testrunner module.
When loading the testrunner module explicitly, returns the following error:
Compilation error
The file {module:testrunner}/app/controllers/TestRunner.java could not be compiled. Error raised is : The type TestRunner is already defined In {module:testrunner}/app/controllers/TestRunner.java (around line 18)
14: import play.templates.TemplateLoader; 15: import play.test.*; 16: import play.vfs.*; 17: 18: public class TestRunner extends Controller { 19: 20: public static void index() { 21: List<Class> unitTests = TestEngine.allUnitTests(); 22: List<Class> functionalTests = TestEngine.allFunctionalTests(); 23: List<String> seleniumTests = TestEngine.allSeleniumTests(); 24: render(unitTests, functionalTests, seleniumTests);
I could set the test ID to something different, like ideatest but then the test routes won't be loaded.
Play Framework version: 1.1.1
Platforms: Ubuntu 10.10 amd64 and Windows 7 x64 SP1
Martin Polden June 24th, 2011 @ 01:40 PM
I managed to work around this issue by adding play-testrunner.jar to the classpath (Modules->Dependencies) in IDEA, and adding -Dplay.id=test as a VM parameter.
Nicolas Leroux July 4th, 2011 @ 07:46 PM
- Tag set to feature ide idea
I think it is just that the correct configuration is not implemented for IDEA. Can you confirm you still have the issue with the 1.2.1 version?
Thanks
Nicolas Leroux July 4th, 2011 @ 07:47 PM
- Assigned user set to Nicolas Leroux
- Tag changed from feature ide idea to feature request, ide, idea
Martin Polden July 4th, 2011 @ 07:57 PM
Confirmed with 1.2.1. I'm not sure how it can be solved differently, but I just did the following to make it work:...
Maybe the testrunner could be added to the classpath dynamically if enviroment id is 'test'.
AnnySally January 12th, 2019 @ 12:58 PM
AnnySally January 14th, 2019 @ 09:45 AM
Thank you for very usefull information.. best hair growth oil
garymfalbo January 17th, 2019 @ 11:34 AM
AnnySally January 18th, 2019 @ 02:02 PM
This is very educational content and written well for a change. It's nice to see that some people still understand how to write a quality post.! ukraynada üniversite
AnnySally January 21st, 2019 @ 08:11 AM
garymfalbo January 22nd, 2019 @ 03:15 PM
Pretty good post. I just stumbled upon your blog and wanted to say that I have really enjoyed reading your blog posts. Any way I'll be subscribing to your feed and I hope you post again soon. Big thanks for the useful info. voyance amour gratuite
garymfalbo January 23rd, 2019 @ 05:34 AM
I think this is an informative post and it is very useful and knowledgeable. therefore, I would like to thank you for the efforts you have made in writing this article. lire la suite
AnnySally January 24th, 2019 @ 12:38 PM
A very awesome blog post. We are really grateful for your blog post. You will find a lot of approaches after visiting your post. dental instrument sharpening
AnnySally January 26th, 2019 @ 06:42 AM
I really appreciate this wonderful post that you have provided for us. I assure this would be beneficial for most of the people. dm ne demek
crystal01 January 26th, 2019 @ 08:47 AM
Very great stuff. i really appreicate this!
AnnySally January 28th, 2019 @ 09:29 AM
We have sell some products of different custom boxes.it is very useful and very low price please visits this site thanks and please share this post with your friends. all the information
garymfalbo January 28th, 2019 @ 12:32 PM
Thanks for sharing this quality information with us. I really enjoyed reading. Will surely going to share this URL with my friends. news-todays
Johan Smith January 29th, 2019 @ 05:57 AM
garymfalbo January 29th, 2019 @ 09:02 AM
Great job for publishing such a beneficial web site. Your web log isn’t only useful but it is additionally really creative too. all the information
Johan Smith February 2nd, 2019 @ 08:29 AM
I love the way you write and share your niche! Very interesting and different! Keep it coming! Monument Sign
AnnySally February 4th, 2019 @ 09:45 AM
This is why it's a good idea that you can proper study ahead of producing. You are able to generate greater distribute using this method. pliable-smartphone.fr
AnnySally February 4th, 2019 @ 11:30 AM
That appears to be certainly great. Most of these teeny specifics are designed having great deal of track record expertise. I'm keen on the item lots
AnnySally February 4th, 2019 @ 01:01 PM
This specific seems to be definitely excellent. These very small facts are produced using wide range of qualifications know-how. I favor the idea a good deal dog news magazine statistics
AnnySally February 6th, 2019 @ 09:10 AM
You want many of the experiences, Want certainly wanted, We want addiitional tips over it, because is without a doubt pretty wonderful., Bye used just for showing. look at this now
AnnySally February 6th, 2019 @ 01:12 PM
I prefer the whole of the wide range stuff, Taken into consideration in fact really enjoyed, Now i need ideas. on that, bearing in mind that it can be a little outstanding., Thanks a lot very much for the purpose of explaining. san diego gas and carwash
AnnySally February 6th, 2019 @ 03:26 PM
Admiring the time and effort you put into your blog and detailed information you offer!.. voyance audiotel
AnnySally February 6th, 2019 @ 03:27 PM
Admiring the time and effort you put into your blog and detailed information you offer!.. voyance audiotel
AnnySally February 7th, 2019 @ 12:31 PM
This original shows entirely desirable. All of limited data files have decided by way of great number from past experiences efficient practical knowledge. So i am inclined it again ever again substantially. SAO online
AnnySally February 9th, 2019 @ 07:24 AM
This excellent is undoubtedly fantastic. Most of these minuscule truth is generated applying broad range connected with accreditation know-how. When i benefit taking that approach lots. wap videos
AnnySally February 10th, 2019 @ 09:23 AM
Attractive, post. I just stumbled upon your weblog and wanted to say that I have liked browsing your blog posts. After all, I will surely subscribe to your feed, and I hope you will write again soon! טיול מאורגן לאירופה
Johan Smith February 11th, 2019 @ 08:41 AM
This excellent is undoubtedly fantastic. Most of these minuscule truth is generated applying broad range connected with accreditation know-how. When i benefit taking that approach lots. red rocks art
AnnySally February 11th, 2019 @ 01:40 PM
Punctually the following web link could irrefutably find themselves well-known amongst any crafting most people, owing to hardworking reports plus assessments plus comparisons. NYC carpet cleaning
Johan Smith February 11th, 2019 @ 01:54 PM
AnnySally February 12th, 2019 @ 05:56 PM
Thanks for taking the time to discuss this, I feel strongly about it and love learning more on this topic. If possible, as you gain expertise, would you mind updating your blog with extra information? It is extremely helpful for me. tangkasnet
Johan Smith February 13th, 2019 @ 07:19 AM
AnnySally February 13th, 2019 @ 02:52 PM
There are several dissertation online websites on-line while you at the same time attain evidently maintained in your own web-site. Blockchain Research
Johan Smith February 14th, 2019 @ 09:43 AM
I havent any word to appreciate this post.....Really i am impressed from this post....the person who create this post it was a great human..thanks for shared this with us. Discover More Here
Johan Smith February 15th, 2019 @ 08:13 AM
When my partner and i started to be on your own web site while wearing distinct consideration effortlessly some touch submits. Attractive technique for long term, I will be book-marking at this time have got designs attain rises proper upwards. information
Johan Smith February 16th, 2019 @ 10:05 AM
A number of dissertation websites on the internet for those who purchase needless to say publicised as part of your page.
Johan Smith February 16th, 2019 @ 11:40 AM
A number of dissertation websites on the internet for those who purchase needless to say publicised as part of your page. perte de poids
Johan Smith February 16th, 2019 @ 01:00 PM
A number of dissertation websites on the internet for those who purchase needless to say publicised as part of your page. audio engineering school
Johan Smith February 18th, 2019 @ 07:57 AM
There are several dissertation online websites on-line while you at the same time attain evidently maintained in your own web-site. improve the visibility of your company
Johan Smith February 19th, 2019 @ 06:05 AM
I prefer the submit. It really is excellent to find out an individual verbalize from your coronary heart and also quality with this crucial subject matter may be effortlessly witnessed.
alexandermedina February 19th, 2019 @ 01:19 PM
Johan Smith February 20th, 2019 @ 05:36 AM
When you use a genuine service, you will be able to provide instructions, share materials and choose the formatting style. plant based
Johan Smith February 20th, 2019 @ 09:18 AM
What a fantabulous post this has been. Never seen this kind of useful post. I am grateful to you and expect more number of posts like these. Thank you very much. sign up hcpnow
Johan Smith February 22nd, 2019 @ 11:30 AM
I recently found many useful information in your website especially this blog page. Among the lots of comments on your articles. Thanks for sharing. how to get votes on facebook fast
Johan Smith February 25th, 2019 @ 09:16 AM
I’m some reader for the purpose of much of the content, I just utterly savored, Appraisal in fact give preference to further data files in relation to this unique, as long as its wonderful., Thankyou ideal for post. Crochet hair
Johan Smith February 26th, 2019 @ 06:26 AM
I found that site very usefull and this survey is very cirious, I ' ve never seen a blog that demand a survey for this actions, very curious... digital signature certificate
Johan Smith February 26th, 2019 @ 07:28 AM
This is my first time i visit here and I found so many interesting stuff in your blog especially it's discussion, thank you. 2025 independence day india
Johan Smith February 26th, 2019 @ 10:37 AM
What a fantabulous post this has been. Never seen this kind of useful post. I am grateful to you and expect more number of posts like these. Thank you very much. cricket app development
Johan Smith March 1st, 2019 @ 06:13 AM
Very efficiently written information. It will be beneficial to anybody who utilizes it, including me. Keep up the good work. For sure i will check out more posts. This site seems to get a good amount of visitors. Bobbi Boss wigs
Johan Smith March 2nd, 2019 @ 07:17 AM
Took me time to read all the comments, but I really enjoyed the article. It proved to be Very helpful to me and I am sure to all the commenters here! It’s always nice when you can not only be informed, but also entertained! therapeutische oele
Johan Smith March 2nd, 2019 @ 12:02 PM
Took me time to read all the comments, but I really enjoyed the article. It proved to be Very helpful to me and I am sure to all the commenters here! It’s always nice when you can not only be informed, but also entertained! pearland seo
Johan Smith March 4th, 2019 @ 01:29 PM
uzair awan March 5th, 2019 @ 11:06 AM
I will be happy a good deal. It is actually superb to ensure plenty of people verbalize through the style including means regarding the fact that primary niche space are in general normally uncovered. check this out
Johan Smith March 7th, 2019 @ 08:25 AM
This is my first time i visit here. I found so many interesting stuff in your blog especially its discussion. From the tons of comments on your articles, I guess I am not the only one having all the enjoyment here keep up the good work NEARBY SUSHI
Johan Smith March 7th, 2019 @ 09:47 AM
Johan Smith March 9th, 2019 @ 07:03 AM
Pretty good post. I just stumbled upon your blog and wanted to say that I have really enjoyed reading your blog posts. Any way I'll be subscribing to your feed and I hope you post again soon. Big thanks for the useful info. Philippines Medical College
Johan Smith March 11th, 2019 @ 07:43 AM
That'sthe rationale advertising and marketing for you to proper studying before writing. Also, it is attainable to write down superior writing because of this. discount vacation rentals
Johan Smith March 11th, 2019 @ 09:51 AM
Thanks for a very interesting blog. What else may I get that kind of info written in such a perfect approach? I’ve a undertaking that I am simply now operating on, and I have been at the look out for such info. Henderson locksmith
Johan Smith March 11th, 2019 @ 11:09 AM
I am happy to find this post Very useful for me, as it contains lot of information. I Always prefer to read The Quality and glad I found this thing in you post. Thanks locksmith Las Vegas
Johan Smith March 11th, 2019 @ 12:23 PM
I got what you mean , thanks for posting .Woh I am happy to find this website through google. car locksmith Las Vegas
Johan Smith March 12th, 2019 @ 12:04 PM
I read that Post and got it fine and informative. best quad exercise
Johan Smith March 12th, 2019 @ 12:45 PM
Positive site, where did u come up with the information on this posting?I have read a few of the articles on your website now, and I really like your style. Thanks a million and please keep up the effective work. bebek kamera sistemleri
uzair awan March 28th, 2019 @ 01:22 PM
Robins April 3rd, 2019 @ 01:26 PM
If you are looking for more information about flat rate locksmith Las Vegas check that right away. effects of testo max
uzair awan April 10th, 2019 @ 05:52 PM
That's give attention to you need to certain research prior to authoring. Will probably be achievable to be able to a lot more attractive post this way. car dealerships near me
uzair awan April 21st, 2019 @ 10:07 PM
Each time when i evolved into with your web page while using unique focus simply a little little bit submits. Eye-catching technique for years to come, We will be book-marking presently include products obtain arises suitable in place. article submission sites with instant approval
Stephanie Butler May 12th, 2019 @ 10:11 AM
Right away this website will probably unquestionably usually become well known with regards to most of website customers, as a result of meticulous accounts and in addition tests. tips for triceps
seoexpert June 1st, 2019 @ 12:43 PM
I’m going to read this. I’ll be sure to come back. thanks for sharing. and also This article gives the light in which we can observe the reality. this is very nice one and gives indepth information. thanks for this nice article... Escorts in lahore
seoexpert June 2nd, 2019 @ 02:03 PM
seoexpert June 8th, 2019 @ 10:03 AM
Many people have adopted online-based marketing strategies because of the immense benefits they get from doing so.
seoexpert June 10th, 2019 @ 09:20 AM
Excellent information on your blog, thank you for taking the time to share with us. Amazing insight you have on this, it's nice to find a website that details so much information about different artists.
Agen SBOBET
Stephanie Butler June 10th, 2019 @ 01:56 PM
Excellent blog! I found it while surfing around on Google. Content of this page is unique as well as well researched. Appreciate it.
Stephanie Butler June 13th, 2019 @ 12:54 PM
Excellent blog! I found it while surfing around on Google. Content of this page is unique as well as well researched. Appreciate it. protein foods
seoexpert June 15th, 2019 @ 10:22 AM
Artificial grass is just like natural grass
this website buy turf. | https://play.lighthouseapp.com/projects/57987/tickets/665-exception-when-loading-testrunner-module-explicitly | CC-MAIN-2019-26 | refinedweb | 2,794 | 63.49 |
Hi Brandon,
You can use the reflection API for that.
If you want to use a String as a name, you can use
Class.forName("your.target.package." + name).instance(), and set name to
"Book", you can get an instance of Book, and proceed like you do. You might
even be able to use Class.forName("your.target.package." + name
+"Peer").getMethod(...) to execute the doSelect of the peer directly
without creating an instance of Book.
HOWEVER, personally I am not sure whether your approach makes sense. Each
time you use the reflection approach, you lose type safety. The deeper you
bury the generation in your framework, the more difficult is it to find the
cause for class cast exceptions and the like. Also, you cannot use IDE
features like "refactore" and "search references" any more, because your
binding is too loose.
Of course, you have introduced additional flexibility of the classes which
you can produce with one method. But In practice, I personally have never
missed such flexibility. Usually, I know that I want to read books in a
place, and therefore I can also write "BookPeer.doSelect()" instead of
"GeneralObjectRetriever.retrieve()". If this is not flexible enough for
you, you can try Torque's inheritance mechanism (no idea how good it works,
never used it myself). If this is still not flexible enough, well, then you
have to do something like you plan to do.
Again, this is just my personal experience and opinion, and might not at
all be applicable to your specific situation.
Thomas
"Brandon" <eff4eye@hotmail.com> schrieb am 23.02.2005 20:27:28:
> In order to simply the Torque programming model even further, I am trying
to
> create CRUD methods for a database that will essentially do the stuff
that
> I'm requesting for me. That way instead of writing all of this stuff for
a
> retrieve everytime, I just call a generic method retrieve() and it
returns
> what I wanted to select on from the database. So I've noticed that all
of
> my auto-generated databases implement this interface called Persistent.
> This is essentially the way I see things going right now:
>
> retrieve (Persistent p)
> {
> retrieve (p, null);
> }
>
>
> retrieve (Persistent p, Map m)
> {
> // create a criteria here
> // if m != null then go through a for loop that iterates through the
map
> and adds to
> // the criteria object
> return ((BaseObject)p).getPeer.doSelect(criteria);
> }
>
>
> This seems to simplify things a bit, but I think that passing in a String
> rather than a Persistent makes much more sense. It just seems akward to
> program this way. Using the example database the tutorial provides, in
> order to perform a statement equivalent to "select * from book;" one
would
> have to pass in a new Book object to retrieve. Essentially the call
would
> look like "List bookList = retrieve(new Book());" and I think it makes
much
> more sense to do something like "List bookList = retrieve("Book");".
Does
> anyone know how I could accomplish something like this or am I stuck
passing
> in a new Book object?
>
> -Brand | http://mail-archives.apache.org/mod_mbox/db-torque-user/200502.mbox/%3COF5619E53A.80ACD0FB-ONC1257086.002136F8-C1257086.002554C4@seitenbau.net%3E | CC-MAIN-2017-26 | refinedweb | 511 | 62.38 |
On 2016-09-19 05:12, Tilak Waelde wrote:
Hope this helps. Happy to share my LXD configurations with anyone...
AdvertisingPlease do! I'd really love to see a description of a production lxd / lxc setup with proper networking and multiple hosts! I haven't played around with it yet, but is it possible to include some sort of VRF-lite[0] into such a setup for multi tenancy purposes? Other than by using VLANs one could use the same IP ranges multiple times from what I've come to understand? I'm not sure how a user could put the containers interfaces into a different network namespace..
Hi,after some experimenting with VXLAN, I've summed up a working "LAN for multiple LXC servers" here: is using in kernel VXLAN, and thus performs very well (almost wire speed, and much much better than any userspace programs).
On the other hand, it provides no encryption between LXD servers (or, in fact, any other virtualization), so may depend on your exact requirements.On the other hand, it provides no encryption between LXD servers (or, in fact, any other virtualization), so may depend on your exact requirements.
Tomasz Chmielewski _______________________________________________ lxc-users mailing list lxc-users@lists.linuxcontainers.org | https://www.mail-archive.com/lxc-users@lists.linuxcontainers.org/msg06309.html | CC-MAIN-2016-40 | refinedweb | 207 | 60.65 |
What is the easiest way to reverse a sting? I thought there was a function, but I cannot remember it. Is there a page that lists all functions of all classes I could look at?
Printable View
What is the easiest way to reverse a sting? I thought there was a function, but I cannot remember it. Is there a page that lists all functions of all classes I could look at?
easy way would be to copy is to use a loop and some functions from the string header file
>What is the easiest way to reverse a sting?
If your compiler supports strrev or anything similar then that's the easiest way. Otherwise, a loop that walks from either end of the string toward the middle and swaps until the two indices are the same would come a close second.
Here is an implementation of strrev
You could easily implement one yourself, e.g.
1. Iterate from the last character to the first, placing them into a new string.
2. Swap the nth character with the (length - n)th character.
>Here is an implementation of strrev
A good example of the programmer trying to be clever...
huh Prelude ????
>huh Prelude ????
I was just commenting on this:
It's cute, but shouldn't be used in any real code without an exceptionally good reason. It's hard to use correctly, and very few people understand the nuances.It's cute, but shouldn't be used in any real code without an exceptionally good reason. It's hard to use correctly, and very few people understand the nuances.Code:
*p1 ^= *p2;
*p2 ^= *p1;
*p1 ^= *p2;
aha :)
Wow. Thanks for all the help! I thought there was a built in function, but aparrently my GCC compiler does not support it. So I just wrote my own:
BTW, what does the ^= operator do?BTW, what does the ^= operator do?Code:
char* reverse_str(char *m)/*reflects rotor*/
{
char rev[26];
strcpy(rev,m);/*stores rotor for reversal*/
unsigned length=strlen(m);
unsigned count;/*counter for string reversal*/
for(count=0;count<length;count++)
{
m[count]=rev[length-count];
}
return m;
}
hi dragon, i am following this topic. i got segmentation fault while calling your reverse_str function.
my main code:
Code:
int main()
{
char*s ;
s = reverse_str("mango");
printf("%s",s);
}
output > segmentation fault
why ??
another question, what is unsigned ?? it should be unsigned int. r u sure ur code is ok ?
N.B : is it not necesarry to null terminate inside ?
Hi all,
Just for my opinion, and the nice implentation, I'd definetly go for the strrev() code. It's really nice in my opinion, and it worked perfectly first time I compiled it:
Code 1.1: Using my_strrev()Code 1.1: Using my_strrev()Code:
#include <stdlib.h> // for printf
char* my_strrev(char *);
char* my_strrev(char *string1) {
char *p1, *p2;
if (!string1 || !*string1)
return string1;
for (p1 = string1, p2 = string1 + strlen(string1) - 1; p2 > p1; ++p1, --p2) {
*p1 ^= *p2;
*p2 ^= *p1;
*p1 ^= *p2;
}
return string1;
}
int main () {
char str[256] = {"!looc si sihT"};
printf("%s\n", my_strrev(str) ); // call on function in here for ex.
return 0;
}
Now I only called the function my_strrev() because any previous header file or DLL linkage might try figthing over which function to use and etc...
I take my hat off to noob2c for finding this function :) I'm making a replica of the string library, so many thanks, I'll have to add this implementation to my project.
Edit (after noob2c post): Ah, ok. Much thanks to you too Prelude :)
Hope this helps,
- Stack Overflow
no problem..... it was prelude who knew the name of the function .... i did a search to find it. As for ^= that means exclusive or or XOR
Exclusive OR (XOR) returns true only when the two values are different.
Truth table for XOR:^ is a bitwise XOR so if you have:^ is a bitwise XOR so if you have:Code:
0 1
0| 0 1
1| 1 0
1010 and 1101 you would get: 0111
I always prefer to take the easist route until I'm sure of what is going on. As such I would recommend what Prelude suggest and avoid the codes that use ^= until you are 125% sure you know what it is doing.
Below is a quicky I threw together that uses 0 tricks and should be easy enough to walk through. Hope it is of help:
A test program could look like this:A test program could look like this:Code:
char *strrev(char *str)
{
unsigned len = strlen(str) - 1; /* Minus one so we don't move the null character */
int count;
char ch;
for (count=0; count < len; count++, len --)
{
ch = str[count];
str[count] = str[len];
str[len] = ch;
}
return str;
}
Note: Changing the declaration of msg1 and msg2 from char [] to char * will result in a segmentation fault since that area of memory will be marked as read only.Note: Changing the declaration of msg1 and msg2 from char [] to char * will result in a segmentation fault since that area of memory will be marked as read only.Code:
#include <stdio.h>
int main (void)
{
char msg1[]="Hello my baby hello my darling hello my ragtime gal!";
char msg2[]="Save me a kiss by wire, baby my heart is on fire!";
puts("Before");
puts(msg1);
puts(msg2);
puts("After");
puts(strrev(msg1));
puts(strrev(msg2));
return 0;
} | http://cboard.cprogramming.com/c-programming/51840-easy-way-reverse-string-printable-thread.html | CC-MAIN-2015-48 | refinedweb | 905 | 72.26 |
Win32::Daemon::Simple - framework for Windows services
0.2.5
use FindBin qw($Bin $Script); use File::Spec; use Win32::Daemon::Simple Service => 'SERVICENAME', Name => 'SERVICE NAME', Version => 'x.x', Info => { display => 'SERVICEDISPLAYNAME', description => 'SERVICEDESCRIPTION', user => '', pwd => '', interactive => 0, # parameters => "-- foo bar baz", }, Params => { # the default parameters Tick => 0, Talkative => 0, Interval => 10, # minutes LogFile => "ServiceName.log", # ... Description => <<'*END*', Tick : (0/1) controls whether the service writes a "tick" message to the log once a minute if there's nothing to do Talkative : controls the amount of logging information Interval : how often does the service look for new or modified files (in minutes) LogFile : the path to the log file ... *END* }, Param_modify => { LogFile => sub {File::Spec->rel2abs($_[0])}, Interval => sub { no warnings; my $interval = 0+$_[0]; die "The interval must be a positive number!\n" unless $interval > 0; return $interval }, Tick => sub {return ($_[0] ? 1 : 0)}, }, Run_params => { # parameters for this run of the service #... }; # initialization ServiceLoop(\&doTheJob); # cleanup Log("Going down"); exit; # definition of doTheJob() # You may want to call DoEvents() within the doTheJob() at places where it # would be safe to pause or stop the service if the processing takes a lot of time. # Eg. DoEvents( \&close_db, \&open_db, sub {close_db(); cleanup();1})
This module will take care of the instalation/deinstalation, reading, storing and modifying parameters, service loop with status processing and logging. It's a simple to use framework for services that need to wake up from time to time and do its job and otherwise should just poll the service status and sleep as well as services that watch something and poll the Service Manager requests from time to time.
You may leave the looping to the module and only write a procedure that will be called in the specified intervals or loop yourself and allow the module to process the requests when it fits you.
This module should allow you to create your services in a simple and consistent way. You just provide the service name and other settings and the actuall processing, the service related stuff and commandline parameters are taken care off already.
All the service parameters are passed to the module via the use statement. This allows the module to fetch the service parameters before your script gets compiled, set the constants according to the parameters and to the way the script was started. Thanks to this Perl will be able to inline the constant values and optimize out statements that are not needed. Eg:
print "This will print only if you start the script on cmd line.\n" if CMDLINE;
The internal system name of the service (for example "w3svc"). The service parameters will be stored in the registry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\$Service.
The name of the service as it will be printed into the log file and to the screen when installing/uninstalling/modifying the service.
The version number. This will be printed to the screen and log files and used by PDKcompile to set the version info of the EXE generated by PerlApp.
This is a hash that is with minor changes passed to the Win32::Daemon::CreateService.
The display name of the service. This is the name that will be displayed in the Service Manager. Eg. "World Wide Web Publishing Service".
The description displayed alongside the display name in the Service Manager.
The username and password that the service will be running under. The accont must have the SeServiceLogonRight right. You can change user rights using Win32::Lanman::GrantPrivilegeToAccount() or the User Manager.
Whether or not is the service supposed to run interactive (visible to whoever is logged on the server's console).
The path to the script/program to run. This should either be full path to Perl, space and full path to your raw script OR a full path to the EXE created by PerlApp or Perl2Exe. This option will be set properly by the module and you should never specify it yourself. You should really know what you are doing and what before you do.
The "command line" parameters that are to be passed to the service. Please see below for the explanation of commandline parameter processing !!!
This hash specifies the parameters that the service uses and their DEFAULT values. When the service is installed these values will be stored in the registry (under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\$Service\Parameters) and whenever the service starts the current values will be read and the module will define a constant for each of the subkeys.
Some of the parameters are used by the Win32::Daemon::Simple itself, so they should be always present.
Controls whether the module prints
"tick: " . strftime( "%Y/%m/%d %H:%M:%S", localtime()) . "\n"
into the log file once a minute. (So that you could see that the service did not hang, but just doesn't have anything to do.) The ticking will be done only if you let the module do the looping (see
ServiceLoop below).
If you do not specify this parameter here it will always be OFF.
Specifies how often should the module call your callback subroutine (see
ServiceLoop below). In minutes, though it doesn't have to be a whole number, you can specify interval=0.5. The module will not call your callback more often than once a second though!
Not necessary if you do the looping yourself.
The path to the log file. You should include this parameter so that the user will be able to change the path where the logging information is written. Currently it's not possible to turn the loggin off except by overwriting the
Logging_code.
If you do not specify this parameter or use
undef then the log file will be created in the same directory as the script and named ScriptName.log.
This value of this parameter is included in the help printed when the script is executed with -help parameter. It should describe the various parameters that you can set for the service.
The values of these options are available as TICK, INTERVAL, LOGFILE and DESCRIPTION constants.
Here you may specify what functions to call when the user tries to update a service parameter. The function may modify or reject the new value. If you want to reject a value die("with the message\n"), otherwise return the value you want to be stored in the registry and used by the service.
Param_modify => { LogFile => sub {File::Spec->rel2abs($_[0])}, Interval => sub { no warnings; my $interval = 0+$_[0]; die "The interval must be a positive number!\n" unless $interval > 0; return $interval; }, Tick => sub {return ($_[0] ? 1 : 0)}, SMTP => sub { my $smtp = shift; return $smtp if Mail::Sender::TestServer($smtp); # assuming you have Mail::Sender 0.8.07 or newer }, },
(ADVANCED) This option allows you to overwrite the functions that will be used for logging. You can log into the EvenLog or whereever you like.
Logging_code => <<'*END*', sub LogStart {}; # called once when the service starts sub Log {}; # called many times. Appends a timestamp. sub LogNT {}; # called many times. Doesn't append a timestamp. sub OpenLog {}; # called once, just before printing the params sub CloseLog {}; # called once, just after printing the params sub CatchMessages {}; # not caled by Win32::Daemon::Simple sub GetMessages {}; # not caled by Win32::Daemon::Simple *END*
See below for more information about the functions.
(ADVANCED) Here you can overwrite the service parameters. The values specified here take precedence over the values stored in the registry or specified in Params=> hash.
Run_params => { LogFile => (condition ? "$Bin\\Foo.log" : "$Bin\\Bar.log"), }
ServiceLoop( \&processing)
Starts the event processing loop. The subroutine you pass will be called in the specified intervals.
In the loop the module tests the service status and processes requests from Service Manager, ticks (writes "Tick at $TimeStamp" messages once a minute if the Tick parameter is set) and calls your callback if the interval is out. Then it will sleep(1).
DoEvents() DoEvents( $PauseProc, $UnPauseProc, $StopProc)
You may call this procedure at any time to process the requests from the Service Manager. The first parameter specifies what is to be done if the service is to be paused, the second when it has to continue and the third when it's asked to stop.
If $PauseProc is:
undef : the service is automaticaly paused, DoEvents() returns after the Service Manager asks it to continue not a code ref and true : the service is automaticaly paused, DoEvents() returns after the Service Manager asks it to continue not a code ref and false : the service is not paused, DoEvents() returns SERVICE_PAUSE_PENDING immediately. a code reference : the procedure is executed. If it returns true the service is paused and DoEvents() returns after the service manager asks the service to continue, if it returns false DoEvents() returns SERVICE_PAUSE_PENDING.
If $UnpauseProc is:
a code reference : the procedure will be executed when the service returns from the paused state. anything else : nothing will be done
If $StopProc is:
undef : the service is automaticaly stopped and the process exits not a code ref and true : the service is automaticaly stopped and the process exits not a code ref and false : the service is not stopped, DoEvents() returns SERVICE_STOP_PENDING immediately. a code reference : the procedure is executed. If it returns true the service is stopped and the process exits, if it returns false DoEvents() returns SERVICE_PAUSE_PENDING.
Pause() Pause($UnPauseProc, $StopProc)
If the DoEvents() returned SERVICE_PAUSE_PENDING you should do whatever you need to get the service to a pausable state (close open database connections etc.) and call this procedure. The meanings of the parameters is the same as for DoEvents().
Writes the parameters to the log file (and in commandline mode also to the console). Appends " at $TimeStamp\n" to the message.
Writes the parameters to the log file (and in command line mode also to the console). Only appends the newline.
$value = ReadParam( $paramname, $default);
Reads the value of a parameter stored in HKLM\SYSTEM\CurrentControlSet\Services\SERVICENAME\Parameters If there is no value with that name returns the $default.
SaveParam( $paramname, $value);
Stores the new value of the parameter in HKLM\SYSTEM\CurrentControlSet\Services\SERVICENAME\Parameters.
CatchMessages( $boolean);
Turns on or off capturing of messages passed to Log() or LogNT(). Clears the buffer.
$messages = GetMessages();
Returns the messages captured since CatchMessages(1) or last GetMessages(). Clears the buffer.
These two functions are handy if you want to mail the result of a task. You just CatchMessages(1) when you start the task and GetMessages() and CatchMessages(0) when you are done.
Constant. If set to 1 the service is running in the command line mode, otherwise set to 0.
For each parameter specified in the
params={...}> option the module reads the actual value from the registry (using the value from the
params={...}> option as a default) and defines a constant named
uc($parametername).
The parameters passed to a script using this module will be processed by the module! If you want to pass some paramters to the script itself use -- as a parameter. If you do then the parameters before the -- will be processed by the module and the ones behind will be passed to the script. If you do not use the -- but do call the program with some parameters then the parameters will be processed by Win32::Daemon::Simple and your program will end! You may use either -param or /param. This makes no difference.
The service created using this module will accept the following commandline parameters:
Installs the service and stores the default values of the parameters to the registry into HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceName\Parameters
If you get an error like
Failed to install: The specified service has been marked for deletion.
or
Failed to install: The specified service already exists.
close the Services window and/or the regedit and try again!
Uninstalls the service.
Starts the service.
Stops the service.
Prints the actual values of all the parameters of the service.
Prints the name and version of the service and the list of options. If the parameters=>{} option contained a Description, then the Description is printed as well.
Sets all parameters to their default values.
Sets the value of PARAM to 1. The parameter names are case insensitive.
Sets the value of PARAM to 0. The parameter names are case insensitive.
Sets the value of PARAM to value. The parameter names are case insensitive.
You may validate and/or modify the value with a handler specified in the Param_modify=>{} option. If the handler die()s the value will NOT be changed and the error message will be printed to the screen.
Deletes the parameter from registry, therefore the default value of that parameter will be used each time the service starts.
Let's you overwrite the service ID specified in the
use Win32::Daemon::Simple Service => 'TestSimpleService',
If you use this BEFORE -install, the service will be installed into HKLM\SYSTEM\CurrentControlSet\Services\[$name]
This allows you to install several instances of a service, each under a different name. Each instance will remember its name which you can access as
SERVICEID.
If you want to change the parameters of one of the instances use
service.pl -service=name -tick -logfile=name.log
without the -service parameter you are chaning the default service.
Let's you overwrite the service display name and the name written to the log file. That is both
use Win32::Daemon::Simple ... Name => 'Long Service Name', ... Info => { display => 'Display Service Name',
You may get the name as
SERVICENAME.
You can specify what user account to use for the service. These parameters are ONLY effective if followed by
-install !
Let's you specify whether the service is allowed to interact with the desktop. This parameter is ONLY effective if followed by
-install and if you do not specify the
user and
pwd!
Stop processing parameters, run the script and leave the rest of @ARGV intact. The -install, -uninstall, -stop, -start, -help and -params parameters cannot be used before the --.
If the service parameters contain -- then all the -param, -noparam, -param=value, -defaultparam and -default only affect the current run and are not written into the registry.
script.pl -install
Installs the script.pl as a service with the default parameters.
script.pl -uninstall
Uninstalls the service.
script.pl -tick -interval=10
Changes the options in the registry. When the service starts next time it will tick and the callbacl will be called each 10 minutes.
script.pl -notick -interval=5 --
Start the service without ticking and with the interval of 5 minutes. Do not make any changes to the registry.
script.pl -interval=60 -start
Set the interval to 60 minutes in the registry, start the service (via the service manager) and exit.
script.pl -- foo fae fou
Start the service and set @ARGV = qw(foo fae fou).
The scripts using this module are sensitive to the way they were started.
If you start them with a parameter they process that parameter as explained above. Then if you started them from the Run dialog or by doubleclicking they print (press ENTER to continue) and wait for the user to press enter, if you started them from the command prompt they exit immediately
If they are started without parameters or with -- by the Service Manager they register with the Manager and start your code passing it whatever parameters you specified after the --, if they are started without parameters from command prompt they start working in a command line mode (all info is printed to the screen as well as to the log file) and if they are started by doubleclicking on the script they show the -help screen.
-install=name A way to override the service name set by the script. I will have to append -s_v_c_n_a_m_e=name to the service parameters! Needed for ability to run several instances of a service.
Jenda@Krynicky.cz With comments and suggestions by extern.Lars.Oeschey@audi.de | http://search.cpan.org/~jenda/Win32-Daemon-Simple-0.2.6/Simple.pm | CC-MAIN-2015-06 | refinedweb | 2,652 | 64.2 |
Testing.
- See the chain of redirects (if any) and check the URL and status code at each step.
- Test that a given request is rendered by a given Django template, with a template context that contains certain values.
Note that the test client is not intended to be a replacement for Selenium or other “in-browser” frameworks. Django’s test client has a different focus. In short:
- Use Django’s test client to establish that the correct template is being rendered and that the template is passed the correct context data.
- Use in-browser frameworks like Selenium to test rendered HTML and the behavior of Web pages, namely JavaScript functionality. Django also provides special support for those frameworks; see the section on LiveServerTestCase for more details..
- class={}, follow=False, **extra)[source.
CGI specification
The headers sent via **extra should a URL /redirect_me/ that redirected to /next/, that redirected to /final/, this is what you’d see:
>>> response = c.get('/redirect_me/', follow=True) >>> response.redirect_chain [(u'', 302), (u'', 302)]
- post(path, data={}, content_type=MULTIPART_CONTENT, follow=False, **extra)[source') as fp: ... c.post('/customers/wishes/', {'name': 'fred', 'attachment': fp})
)[source]¶
Makes a HEAD request on the provided path and returns a Response object. This method works just like Client.get(), including the follow and extra arguments, except it does not return a message body.
- options(path, data='', content_type='application/octet-stream', follow=False, **extra)[source]¶
Makes an OPTIONS request on the provided path and returns a Response object. Useful for testing RESTful interfaces.
When data is provided, it is used as the request body, and a Content-Type header is set to content_type.Changed in Django 1.5:
Client.options() used to process data like Client.get().
The follow and extra arguments act the same as for Client.get().
- put(path, data='', content_type='application/octet-stream', follow=False, **extra)[source]¶
Makes a PUT request on the provided path and returns a Response object. Useful for testing RESTful interfaces.
When data is provided, it is used as the request body, and a Content-Type header is set to content_type.Changed in Django 1.5:
Client.put() used to process data like Client.post().
The follow and extra arguments act the same as for Client.get().
- patch(path, data='', content_type='application/octet-stream', follow=False, **extra)[source]¶
Makes a PATCH request on the provided path and returns a Response object. Useful for testing RESTful interfaces.
The follow and extra arguments act the same as for Client.get().
- delete(path, data='', content_type='application/octet-stream', follow=False, **extra)[source]¶
Makes an DELETE request on the provided path and returns a Response object. Useful for testing RESTful interfaces.
When data is provided, it is used as the request body, and a Content-Type header is set to content_type.Changed in Django 1.5:
Client.delete() used to process data like Client.get().
The follow and extra arguments act the same as for Client.get().
- login(**credentials)[source.
- logout()[source]¶ by 2616 for a full list of HTTP status codes. documentation of the Cookie module:
from django.utils.client.RequestFactory
Provided test case classes¶
Normal Python unit test classes extend a base class of unittest.TestCase. Django provides a few extensions of this base class:
Regardless of the version of Python you’re using, if you’ve installed unittest2, django.utils.unittest will point to that library.
SimpleTestCase¶
A thin subclass of unittest.TestCase, it extends it with some basic functionality like:
- Saving and restoring the Python warning machinery state.
- is performed by the app.
- Robustly testing two HTML fragments for equality/inequality or containment.
- Robustly testing two XML fragments for equality/inequality.
- Robustly testing two JSON fragments for equality.
- The ability to run tests with modified settings.
- Using the client Client.
- Custom test-time URL maps.
The latter two features were moved from TransactionTestCase to SimpleTestCase in Django 1.6.
If you need any of the other more complex and heavyweight Django-specific features like:
- Testing or using the ORM.
- Database fixtures.
- Test skipping based on database backend features.
- The remaining specialized assert* methods.
then you should use TransactionTestCase or TestCase instead.
SimpleTestCase inherits from django.utils.unittest.TestCase.
TransactionTestCase¶
Django’s TestCase class (described below) makes use of database transaction facilities after the test runs by truncating all tables. A TransactionTestCase may. Both explicit commits like transaction.commit() and implicit ones that may be caused by transaction.atomic() are replaced with a nop operation. This guarantees that the rollback at the end of the test restores the database to its initial state.
When running on a database that does not support rollback (e.g. MySQL with the MyISAM storage engine), TestCase falls back to initializing the database by truncating tables and reloading initial data.
Warning
While commit and rollback operations still appear to work when used in TestCase, no actual commit or rollback will be performed by the database. This can cause your tests to pass or fail unexpectedly. Always use TransactionTestCase when testing transactional behavior or any code that can’t normally be executed in autocommit mode (select_for_update() is an example). behavior, but for legacy tests that do, the reset_sequences attribute can be used until the test has been properly updated.
The order in which tests are run has changed. See Order in which tests are executed.
TransactionTestCase inherits from SimpleTestCase.
TestCase¶, including:
- Automatic loading of fixtures.
- Wraps each test in a transaction.
- Creates a TestClient instance.
- Django-specific assertions for testing for things like redirection and form errors.
The order in which tests are run has changed. See Order in which tests are executed.
TestCase inherits from TransactionTestCase.
LiveServerTestCase¶
LiveServerTestCase does.
By default the live server’s address is 'localhost):
./manage.py test myapp.tests.MySeleniumTests.test_login
LiveServerTestCase makes use of the staticfiles contrib app so you’ll need to have your project configured accordingly (in particular by setting STATIC_URL)., simply checking for the presence of <body> in the response might not necessarily be appropriate for all use cases. Please refer to the Selenium FAQ and Selenium documentation for more information.
Test cases features¶:
from django.utils import unittest from django.test.client just¶
If you want to use a different Client class (for example, a subclass with customized behavior), use the client_class class attribute:
from django.test import TestCase from django.test.client import Client class MyTestClient(Client): # Specialized methods for your environment ... class MyTest(TestCase): client_class = MyTestClient def test_my_stuff(self): # Here self.client is an instance of MyTestClient... call_some_test_code()
Fixture loading¶
A test case for a database-backed Web site isn’t much use if there isn’t any data in the database. To make it easy to put test data into the database, Django’s custom TransactionTest.
By default, fixtures are only loaded into the default database. If you are using multiple databases and set multi_db=True, fixtures will be loaded into all databases. classes provide.
The multi_db flag also affects into which databases the attr.
For testing purposes it’s often useful to change a setting temporarily and revert to the original value after running the testing code. For this use case Django provides a standard Python context manager (see PEP 343) LOGIN_URL setting for the code in the with block and reset its value to the previous state afterwards..
Finally, avoid aliasing your settings as module-level constants as override_settings() won’t work on such values since they are only evaluated the first time the module is imported.
You can also simulate the absence of a setting by deleting it after settings have been overriden,¶
If you use any of Django’s custom TestCase classes, the test runner will clear the contents of the test email outbox at the start of each test case.
For more detail on email services during tests, see Email services below.
Assertions¶
As Python’s normal unittest.TestCase class argument. This string will be prefixed to any failure message generated by the assertion. This allows you to provide additional details that may help you to identify the location and cause of an failure in your test suite.
- SimpleTestCase.assertRaisesMessage(expected_exception, expected_message, callable_obj=None, *args, **kwargs)¶
Asserts that execution of callable callable_obj raised the expected_exception exception and that such exception has an expected_message representation. Any other outcome is reported as a failure. Similar to unittest’s assertRaisesRegexp() with the difference that expected_message isn’t a regular expression.
- SimpleTestCase.assertFieldOutput(fieldclass, valid, invalid, field_args=None, field_kwargs=None, empty_value=u'')¶
Asserts that a form field behaves correctly with various inputs.
For example, the following code tests that an EmailField accepts “a@a.com” as a valid email address, but rejects “aaa” with a reasonable error message:
self.assertFieldOutput(EmailField, {'a@a.com': 'a@a.com'}, {'aaa': [u'Enter a valid email address.']})
- SimpleTestCase.assertFormError(response, form, field, errors, msgFormsetError(response, formset, form_index, field, errors, msg_prefix='')¶
- New in Django 1.6.)¶
Asserts that a Response instance produced the given status_code and that text appears in the content of the response. If count is provided, text must occur exactly count times)¶
Asserts that a Response instance produced the given status_code and that text does not appears in the content='')¶
Asserts that the template with the given name was used in rendering the response.
The name is a string such as 'admin/index.html'.='')¶
Asserts that the template with the given name was not used in rendering the response.
You can use this as a context manager in the same way as assertTemplateUsed().
- SimpleTestCase.assertRedirects(response, expected_url, status_code=302, target_status_code=200, host=None, msg_prefix=''.
The host argument sets a default host if expected_url doesn’t include one (e.g. "/bar/"). If expected_url is an absolute URL that includes a host (e.g. ""), the host parameter will be ignored. Note that the test client doesn’t support fetching external URLs, but the parameter may be useful if you are testing with a custom HTTP host (for example, initializing the test client with Client(HTTP_HOST="testhost").
- SimpleTestCase.assertHTMLEqual(html1, html2, msg=None)¶
Asserts that the strings html1 and html2 are).)¶
- New in Django 1.5.
Asserts that the strings xml1 and xml2 are equal. The comparison is based on XML semantics. Similarily to assertHTMLEqual(), the comparison is made on parsed content, hence only semantic differences are considered, not syntax differences. When unvalid XML is passed in any parameter, an AssertionError is always raised, even if both string are identical.
Output in case of error can be customized with the msg argument.
- SimpleTestCase.assertXMLNotEqual(xml1, xml2, msg=None)¶
- New in Django 1.5.='')¶
- New in Django 1.5.)¶
- New in Django 1.5.
Asserts that the JSON fragments raw and expected_data are equal. Usual JSON non-significant whitespace rules apply as the heavyweight is delegated to the json library.
Output in case of error can be customized with the msg argument.
- TransactionTestCase.assertQuerysetEqual(qs, values, transform=repr, ordered=True Python set comparison.Changed in Django 1.6:
The method now checks for undefined order and raises ValueError if undefined order is spotted. The ordering is seen as undefined if the given qs isn’t ordered and the comparison is against more than one ordered values.
- TransactionTestCase.assertNumQueries(num, func, *args, **kwargs").)
During test running, each outgoing email is saved in django.core.mail.outbox. This is a simple list of all EmailMessage instances that have been sent. The outbox attribute is a special attribute that is created only when the locmem email¶())
Skipping tests¶
The unittest library provides the @skipIf and @skipUnless decor django.db.backends.BaseDatabaseFeatures class for a full list of database features that can be used as a basis for skipping tests.
Skip the decorated test if the named database feature
Skip the decorated test if the named database feature | https://docs.djangoproject.com/en/1.6/topics/testing/tools/ | CC-MAIN-2015-40 | refinedweb | 1,951 | 50.63 |
Originally posted on my blog
Introduction
Pipenv is a dependency manager for Python projects. If you’re familiar with Node.js’ npm or Ruby’s bundler, it is similar in spirit to those tools.
Everytime you want to create a new Python project or you follow a new course, you get sucked with the VirtualEnv ?
What the heck virtualenv is ?
How you can setup it Correctly ?
Requirements
Python : 3.4 or later
pip
Package Manager VS Dependency Manager
Based on a StackOverFlow answer
Package Manager - is used to configure the system, ie to setup your development environment, and with these settings you can build many projects.
Dependency Manager - Is specific to project. You manage all dependencies for a single project and those dependencies are going to be saved on your project. When you start another project you should manage your dependencies again.
Why we need it
The main purpose of using a dependency manager is to separate your application dependencies, that will give you the ability to use the one Framework in different project with different version.
A simple use case :
Imagine we have two(2) Django applications and we want to install differents versions of Django.
What we do ?
We've the choice to install all versions in one machine, but this is not the ideal way to do.
How to setup your virtualenv Correctly
Installation
As Pythonist we've many choices
- Poetry
- Pip
- Pipenv
- etc...
In this tutorial, we will use pipenv and i think is the easiest to setup.
And it is recommended for collaborative and Team projects.
Make sure you've Python and Pip installed in your machine
Let's check your installation
$ python --version
The output should look something like this
Python 3.7.5
And check the pip installation
$ pip --version
The output
pip 19.3.1 from /usr/local/lib/python3.7/dist-packages/pip (python 3.7
Installing pipenv using pip
$ pip install --user pipenv
Check the installation
$ pipenv --version
The output
pipenv, version 11.9.0
Create new project
$ mkdir test_pipen && cd test_pipenv $ touch app.py
Installing packages for your project
$ pipenv install requests
Creating a virtualenv for this project… Using /usr/bin/python3 (3.7.5) to create virtualenv… ⠋Already using interpreter /usr/bin/python3 Using base prefix '/usr' New python executable in /home/username/.local/share/virtualenvs/test_pipenv-3gXMtvzy/bin/python3 Also creating executable in /home/username/.local/share/virtualenvs/test_pipenv-3gXMtvzy/bin/python Installing setuptools, pip, wheel... done. Virtualenv location: /home/username/.local/share/virtualenvs/test_pipenv-3gXMtvzy Creating a Pipfile for this project… Installing requests… Looking in indexes: Collecting requests Using cached Collecting idna<2.9,>=2.5 Using cached Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 Using cached Collecting chardet<3.1.0,>=3.0.2 Using cached Collecting certifi>=2017.4.17 Using cached Installing collected packages: idna, urllib3, chardet, certifi, requests Successfully installed certifi-2019.11.28 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.7 Adding requests to Pipfile's [packages]… Pipfile.lock not found, creating… Locking [dev-packages] dependencies… Locking [packages] dependencies… Updated Pipfile.lock (5a8f8c)! Installing dependencies from Pipfile.lock (5a8f8
Activate your new environment
$ pipenv shell $ (test_pipenv-3gXMtvzy) ....
Add this line in your app.py file
import requests response = requests.get('') print('Your IP is {0}'.format(response.json()['origin']))
Run the application
$ python app.py
You should get output similar to this:
Your IP is 0.0.0.224, 0.0.0.224
See you next :-)
Discussion (2)
Highly recommend pipx, pipxproject.github.io/pipx/, for any global python packages, especially pipenv. Also recommend pyenv in Unix.
Interesting, i will test it. | https://dev.to/xarala221/python-how-to-setup-your-virtualenv-correctly-4l1 | CC-MAIN-2021-31 | refinedweb | 614 | 52.15 |
This is the mail archive of the cygwin-apps@cygwin.com mailing list for the Cygwin project.
On Mon, Apr 16, 2001 at 01:48:45PM +1000, Robert Collins wrote: >I've done some debug tracking and patched gcj to apparently work >correctly under cygwin. libjava just seems to have some header issues. > >Would inserting the __builtin_alloca define be a reasonable workaround? >(__CYGWIN__ protected of course). IMO, newlib's stdlib.h should have something like this in it: #ifdef __GNUC__ # define alloca __builtin_alloca #endif Does this make sense? Or is this a little too generic for newlib? It may be that some platforms, supported by newlib and gcc, do not support __builtin_alloca. The alternative would be to do something like: #if defined(__GNU__) && defined(_USE_BUILTIN_ALLOCA) # define alloca __builtin_alloca #endif and have features.h: #define _USE_BUILTIN_ALLOCA Comments? cgf | https://cygwin.com/ml/cygwin-apps/2001-04/msg00043.html | CC-MAIN-2019-30 | refinedweb | 137 | 60.82 |
The aim of this tutorial is to give a gentle introduction to Objective-C blocks while paying special emphasis to their syntax, as well as exploring the ideas and implementation patterns that blocks make possible.
In my opinion, there are two main stumbling blocks (pun intended!) for beginners when attempting to truly understanding blocks in Objective-C:
- Blocks have a somewhat arcane and "funky" syntax. This syntax is inherited from function pointers as part of the C roots of Objective-C. If you haven't done a lot of programming in pure C, then it's likely you haven't had occasion to use function pointers, and the syntax of blocks might seem a bit intimidating.
- Blocks give rise to "programming idioms" based on the functional programming style that the typical developer with a largely imperative programming-style background is unfamiliar with.
In simpler words, blocks look weird and are used in weird ways. My hope is that after reading this article, neither of these will remain true for you!
Admittedly, it is possible to utilize blocks in the iOS SDK without an in-depth understanding of their syntax or semantics; blocks have made their way into the SDK since iOS 4, and the API of several important frameworks and classes expose methods that take blocks as parameters: Grand Central Dispatch (GCD),
UIView based animations, and enumerating an
NSArray, to name a few. All you need to do is mimic or adapt some example code and you're set.
However, without properly understanding blocks, you'll be limited in your use of them to this method-takes-block-argument pattern, and you'll only be able to use them where Apple has incorporated them in the SDK. Conversely, a better understanding of blocks will let you harness their power and it will open the door to discovering new design patterns made possible by them that you can apply in your own code.
Running the Code Samples
Since in this tutorial we'll be discussing the core concepts of blocks, which apply to recent versions of both Mac OS X and iOS, most of the tutorial code can be run from a Mac OS X command-line project.
To create a command line project, choose OS X > Application from the left-hand pane and choose the "Command Line Tool" option in the window that comes up when you create a new project.
Give your project any name. Ensure that ARC (Automatic Reference Counting) is enabled!
Most of the time code will be presented in fragments, but hopefully the context of the discussion will make it clear where the code fragment fits in, as you experiment with the ideas presented here.
The exception to this is at the end of the tutorial, where we create an interesting
UIView subclass, for which you obviously need to create an iOS project. You shouldn't have any problem writing a toy app to test out the class, but regardless, you'll be able to download sample code for this project if you'd like.
The Many Facets of Blocks
Have you ever heard of a strange animal called the platypus? Wikipedia describes it as an "egg-laying, venomous, duck-billed, beaver-tailed, otter-footed mammal". Blocks are a bit like the platypus of the Objective-C world in that they share aspects of variables, objects, and functions. Let's see how:
A "block literal" looks a lot like a function body, except for some minor differences. Here's what a block literal looks like:
^(double a, double b) // the caret represents a block literal. This block takes two double parameters. Note we don't have to explicitly specify the return type { double c = a + b; return c; // }
So, syntax-wise, what are the differences in comparison with function definitions?
- The block literal is "anonymous" (i.e. nameless)
- The caret (^) symbol
- We didn't have to specify the return type - the compiler can "infer" it. We could've explicitly mentioned it if we wanted to.
There's more, but that's what should be obvious to us so far.
Our block literal encapsulates a bunch of code. You might say this is what a function does too, and you'd be right, so in order to see what else blocks are capable of, read on!
A block pointer lets us handle and store blocks, so that we can pass around blocks to functions or have functions return blocks - stuff that we normally do with variables and objects. If you're already adept with using function pointers, you'll immediately remark that these points apply to function pointers too, which is absolutely true. You'll soon discover that blocks are like function pointers "on steroids"! If you aren't familiar with function pointers, don't worry, I'm actually not assuming you know about them. I won't go into function pointers separately because everything that can be achieved with function pointers can be achieved with blocks - and more! - so a separate discussion of function pointers would only be repetitive and confusing.
double(^g)(double, double) = ^(double a, double b) { double c = a + b; return c; };
The above statement is key and deserves a thorough examination.
The right-hand side is the block literal that we saw a moment ago.
On the left side, we've created a block pointer called
g. If we want to be pedantic, we'll say the '^' on the left signifies a block pointer, whereas the one on the right marks the block literal. The block pointer has to be given a "type", which is the same as the type of the block literal it points to. Let's represent this type as
double (^) (double, double). Looking at the type this way, though, we should observe that the variable (
f) is "ensconced" within its type, so the declaration needs be read inside out. I'll talk a bit more about the "type" of functions and blocks a bit later.
The "pointing" is established through the assignment operator, "=".
The above line is like a typical C statement - note the semicolon in the end! We've just defined a block literal, created a block pointer to identify it, and assigned it to point to the block. In that sense, it's similar to a statement of the following type:
char ch // ch identifies a variable of type char = // that we assigned 'a' // the character literal value 'a'. ; // semicolon terminates the statement
Of course, the above statement doesn't involve pointers, so beware of this when drawing semantic comparisons.
I want to "milk" the previous comparison a bit more, just so you that you begin to see blocks in the same light as our humble
char ch = 'a' statement:
We could split the block pointer declaration and the assignment:
double (^g)(double, double); // declaration g = ^(double m, double n) { return m * n; }; // the above is like doing: // char ch; // ch = 'a';
We could reassign
g to point to something else (although this time the analogy is with a pointer or reference to an object instance):
double (^g)(double, double); g = ^(double m, double n) { return a + b; }; // .. later g = ^(double x, double y) { return x * y; }; // g reassigned to point to a new block; same type, so no problem // but not: // g = ^(int x) { return x + 1; }; // types are different! The literal on the right has type int(^)(int), not double(^) (double, double)! double (^h)(double, double) = g; // no problem here! h has the correct type and can be made to point to the same block as g // compare with: int i = 10, j = 11; // declaring a coupling of integers int *ptrToInt; // declaring a pointer to integers ptrToInt = &i; // ptrToInt points to i // later... ptrToInt = &l // ptrToInt now points to j float f = 3.14; // ptrToInt = &f; // types are different! technically can be done, but compiler will warn you of the type difference. And typically you don't want to do this! int *anotherPtrToInt; anotherPtrToInt = ptrToInt; // both pointers point to the same integer's location in memory
Another key difference between ordinary functions and blocks is that functions need to be defined in the "global scope" of the program, meaning a function can't be defined inside the body of another function!
int sum(int, int); // This is a declaration: we tell the compiler about a function sum that takes two ints and returns an int (but will be defined somewhere else). //... int main() { //... // we are *inside* main. Can't define a function here! int s = sum(5, 4); // sum function being invoked (called). This must happen inside another function! (except for main() itself) } int sum(int a, int b) // This is the function definition. It must be outside any function! { int c = a + b; return c; // we're *inside* the function body for sum(). We can't define a new function here! }
A lot of the power of blocks comes from the fact that they can be defined anywhere a variable can! Compare what we just saw with functions to what we can do with blocks:
int main() { // inside main(). Block CAN be defined here! int (^sum)(int, int) = ^(int a, int b) { return a + b; }; // the point here is that we're doing this inside the function main()! }
Again, it helps to recall the
char ch = 'a' analogy here; a variable assignment would ordinarily happen within the scope of a function. Except if we were defining a global variable, that is. Although we could do the same with blocks - but then there wouldn't be any practical difference between blocks and functions, so that's not very interesting!
So far, we've only looked at how blocks are defined and how their pointers are assigned. We haven't actually used them yet. It is important that you realise this first! In fact, if you were to type in the above code into Xcode, it would complain that the variable sum is unused.
The invocation of a block - actually using it - looks like a normal function call. We pass arguments to them through their pointers. So after the line of code we just saw, we could go:
#import <Foundation/Foundation.h> int main() { int (^sum)(int, int) = ^(int a, int b) { return a + b; }; // point here is that we're doing this inside main()! double x = 5.1, y = 7.3; double s = sum(5.1, 7.3); // block invoked. Note we're passing arguments (x,y) whose type matches the block's parameters. Return type matches assigned variable (s) as well NSLog(@"the sum of %f and %f is %f", x, y, s); }
Actually, we can do even better than that: we could define and invoke our block all in one go.
Observe the following:
#import <Foundation/Foundation.h> int main() { double x = 5.1, y = 7.3; double s = ^(double x, double y) { return x + y; }(x, y); // we're applying the arguments directly to the block literal! NSLog(@"the sum of %f and %f is %f", x, y, s); }
Perhaps the last one looked a bit like a "parlor trick" - flashy but not terribly useful - but in fact the ability to define blocks at the point of their use is one of the best things about them. If you've ever called block-accepting methods in the SDK, you've probably already encountered this use. We'll talk about this in more detail, shortly.
Let's exercise our block syntax writing skills first. Can you declare a block pointer for a block that takes a pointer to an integer as a parameter and then returns a character?
char(^b)(int *); // b can point to a block that takes an int pointer and returns a char
OK, now how about defining a block that takes no parameters, and returns no value, and encapsulates code to print "Hello, World!" to the console (upon invocation)? Assign it to a block pointer.
void (^hw)(void) = ^{NSLog(@"hello, world!"); }; // block literal could also be written as ^(void){ NSLog(@"hello, world!"); };
Note that for a block literal that takes no parameters, the parantheses are optional.
Question: Will the above line actually print anything to the console?
Answer: No. We would need to call (or invoke) the block first, like this:
hw();
Now, let's talk about functions that can take blocks as parameters or return blocks. Anything we say here applies to methods that take and return blocks, too.
Can you write the prototype of an ordinary function that takes a block of the same type as the "Hello, world" block we defined above, and returns nothing? Recall that a prototype just informs the compiler about the signature of a function whose definition it should expect to see at some later point. For example,
double sum(double, double); declares a function sum taking two
double values and returning a
double. However, it is sufficient to specify the type of the arguments and return value without giving them a name:
void func(void (^) (void));
Let's write a simple implementation (definition) for our function.
void func(void (^b) (void)) // we do need to use an identifier in the parameter list now, of course { NSLog(@"going to invoke the passed in block now."); b(); }
At the risk of repeating myself too many times,
func is an ordinary function so its definition must be outside the body of any other function (it can't be inside main(), for instance).
Can you now write a small program that invokes this function in main(), passing it our "Hello, World" printing block?
#include <Foundation/Foundation.h> void func(void (^b) void) // if the function definition appears before it is called, then no need for a separate prototype { NSLog(@"going to invoke the passed in block now."); b(); } int main() { void (^hw)(void) = ^{NSLog(@"hello, world!"); }; func(hw); // Note the type of hw matches the type of b in the function definition } // The log will show: // going to invoke the passed in block now. // hello, world!
Did we have to create a block pointer first only so we could pass the block in?
The answer is a resounding no! We could've done it inline as follows:
// code in main(): func(^{NSLog(@"goodbye, world!"); }); // inline creation of block!
Question:
ff is a function whose prototype or definition you haven't seen, but it was invoked as follows. Assume that the right kind of arguments were passed in to the function. Can you guess the prototype of
ff?
int t = 1; int g = ff(^(int *x) { return ((*x) + 1); }, t);
This is an exercise in not getting intimidated by syntax! Assuming no warnings are generated,
ff has a return type
int (because its return value is being assigned to
g, which is an
int). So we have
int f(/* mystery */) What about the parameters?
Notice that the function is invoked with an inline block which is where the scariness comes from. Let's abstract this out and represent it by "blk". Now the statement looks like
int g = ff(blk, t); Clearly, ff takes two parameters, the second one being an int (since t was an int). So we say tentatively,
int ff(type_of_block, int) where we only have to work out the type of the block. To do that, recall that knowing the block type entails knowing the types of its parameters and its return type (and that's all). Clearly, block takes one parameter of type
int * (pointer to
int). What about the return type? Let's infer it, just like the compiler would:
*x is dereferencing a pointer to an
int, so that yields an
int, adding one to which is also an int.
int *xand the meaning of * in the expression
intvalue stored at the address x.
So, our block returns an int. The
type_of_block is thus
int(^)(int *), meaning ff's prototype is:
int ff(int (^) (int *), int);
Question: Could we have passed in the "hello, world" printing block we created a while ago to
ff?
Answer: Of course not, its type was
void(^)(void)), which is different from the type of the block that
ff accepts.
I want to digress briefly and talk a bit more about something we've been using implicitly: the "type" of a block. We defined the type of a block to be determined by the types and number of its arguments and the type of its return value. Why is this a sensible definition? First, keep in mind that a block is invoked just like a function, so let's talk in terms of functions.
A C program is just a pool of functions that call each other: from the perspective of any function, all other functions in the programs are "black boxes". All that the calling function needs to concern itelf with (as far as syntactical correctness is concerned) is the number of arguments the "callee" takes, the types of these arguments, and the type of the value returned by the "callee". We could swap out one function body with another having the same type and the same number of arguments and the same return type, and then the calling function would be none the wiser. Conversely, if any of these were different, then we won't be able to substitute one function for another. This should convince you (if you needed convincing!) that our idea of what constitutes the type of a function or a block is the right one. Blocks reinforce this idea even more strongly. As we saw, we could pass an anonymous block to a function defined on the fly, as long as the types (as we defined them!) match.
We could now talk about functions that return blocks! This is even more interesting. If you think about it, essentially you're writing a function that is returning code to the caller! Since the "returned code" will be in the form of a block pointer (which would be invoked exactly as a function) we'd effectively have a function that could return different functions (blocks, actually).
Let's write a function that takes on options integer representing different types of binary operations (like addition, subtraction, multiplication, etc.) and returns a "calculator block" that you can apply to your operands in order to get the result of that operation.
Suppose we're dealing with
double operands. Our calculations also return a
double. What's the type of our binary operation block? Easy! It would be
double(^) (double, double).
Our function (let's called it "operation_creator") takes an
int which encodes the type of operation we want it to return. So, for example, calling
operation_creator(0) would return a block capable of performing addition,
operation_creator(1) would give a subtraction block, etc. So
operation_creator's declaration looks like
return-type operation_creator(int). We just said that return-type is
double (^)(double, double). How do we put these two together? The syntax gets a little hairy, but don't panic:
double (^operation_creator(int)) (double, double);
Don't let this declaration get the better of you! Imagine you just came across this declaration in some code and wanted to decipher it. You could deconstruct it like this:
- The only identifier (name) is
operation_creator. Start with that.
operation_creatoris a function. How do we know? It's immediately followed by an opening paranthesis
(. The stuff between it and the closing paranthesis
)tells us about the number and types of parameters this function takes. There's only argument, of type
int.
- What remains is the return type. Mentally remove
operation_creator(int)from the picture, and you're left with
double (^) (double, double). This is just the type of a block that takes two
doublevalues and returns a
double. So, what our function
operation_creatorreturns is a block of this type. Again, make a mental note that if the return type is a block, then the identifier is "ensconced" in the middle of it.
Let's digress with another practice problem for you: Write the declaration of a function called
blah that takes as its only parameter a block with no parameters and returns no value, and returns a block of the same type.
void(^blah(void(^)(void)))(void);
If you had difficulty with this, let's break down the process: we want to define a function blah() that takes a block that takes no parameters and returns nothing (that is,
void), giving us
blah(void(^)(void)). The return value is also a block of type
void(^)(void) so ensconce the previous bit, starting immediately after the
^, giving
void(^blah(void(^)(void)))(void);.
OK, now you can dissect or construct complex block and function declarations, but all these brackets and
voids are probably making your eyes water! Directly writing out (and making sense of) these complex types and declarations is unwieldy, error prone, and involves more mental overhead than it should!
It's time to talk about using the
typedef statement, which (as you hopefully know), is a C construct that lets you hide complex types behind a name! For example:
typedef int ** IntPtrPtr;
Gives the name
IntPtrPtr to the type "pointer to pointer to
int". Now you can substitute
int ** anywhere in code (such as a declaration, a type cast, etc.) with
IntPtrPtr.
Let's define a type name for the block type
int(^)(int, int). You can do it like this:
typedef int (^BlockTypeThatTakesTwoIntsAndReturnsInt) (int, int);
This says that
BlockTypeThatTakesTwoIntsAndReturnsInt is equivalent to
int(^)(int, int), that is, a block of the type that takes two
int values and returns an
int value.
Again, notice that the identifier (BlockTypeThatTakesTwoIntsAndReturnsInt) in the above statement, which represents the name we want to give the type we're defining, is wrapped up between the details of the type being
typedef'd.
How do we apply this idea to the
blah function we just declared? The type for the parameter and the return type is the same:
void(^)(void), so let's typedef this as follows:
typedef void(^VVBlockType)(void);
Now rewrite the declaration of
blah simply as:
VVBlockType blah(VVBlockType);
There you go, much nicer! Right?
At this point, it's very important that you are able to distinguish between the following two statements:
typedef double (^BlockType)(double); // (1) double(^blkptr)(double); // (2)
They look the same, barring the
typedef keyword, but they mean very different things.
With (2), you've declared a block pointer variable that can point at blocks of type
double(^)(double).
With (1), you've defined a type by the name
BlockType that can stand in as the type
double (^)double. So, after (2), you could do something like:
blkptr = ^(double x){ return 2 * x; };. And after (1), you could've actually written (2) as
BlockType blkptr;(!)
Do not proceed unless you've understood this distinction perfectly!
Let's go back to our
operation function. Can you write a definition for it? Let's
typedef the block type out:
typedef double(^BinaryOpBlock_t)(double, double); BinaryOpBlock_t operation_creator(int op) { if (op == 0) return ^(double x, double y) { return x + y; }; // addition block if (op == 1) return ^(double x, double y) { return x * y; }; // multiplication block // ... etc. } int main() { BinaryOpBlock_t sum = operation_creator(0); // option '0' represents addition NSLog(@"sum of 5.5 and 1.3 is %f", sum(5.5, 1.3)); NSLog(@"product of 3.3 and 1.0 is %f", operation_creator(1)(3.3, 1.0)); // cool, we've composed the function invocation and the block invocation! }
enumtype) to represent the options integer that our function accepts.
Another example. Define a function that takes an array of integers, an integer representing the size of the array, and a block that takes an
int and returns an
int. The job of the function will be to apply the block (which we imagine represents a mathematical formula) on each value in the array.
We want the type of our block to be
int(^)(int). Let's
typedef and then define our function:
typedef int(^iiblock_t)(int); void func(int arr[], int size, iiblock_t formula) { for ( int i = 0; i < size; i++ ) { arr[i] = formula(arr[i]); } }
Let's use this in a program:
int main() { int a[] = {10, 20, 30, 40, 50, 60}; func(a, 6, ^(int x) { return x * 2; }); // after this function call, a will be {20, 40, 60, 80, 100, 120} }
Isn't that cool? We were able to express our mathematical formula right where we needed it!
Add the following statements after the function call in the previous code:
// place the following lines after func(a, 6, ^(int x) { return x * 2; }); int n = 10; // n declared and assigned func(a, 6, ^(int x) { return x - n; } ); // n used in the block! } // closing braces of main
Did you see what we did there? We used the variable
n which was in our block's lexical scope in the body of the block literal! This is another a tremendously useful feature of blocks (although in this trivial example it might not be obvious how, but let's defer that discussion).
A block can "capture" variables that appear in the lexical scope of the statement calling the block. This is actually a read-only capture, so we couldn't modify
n within the block's body. The value of the variable is actually copied by the block at the time of its creation. Effectively, this means if we were to change this variable at some point after the creation of the block literal but before the block's invocation, then the block would still use the "original" value of the variable, that is, the value held by the variable at the time of the block's creation. Here's a simple example of what I mean, based on the previous code:
int n = 5; iiblock_t b = ^(int r) { return r * n; }; // created block, but haven't invoked it yet // .. stuff n = 1000; // we've the value of n, but that won't affect the block which was defned previously func(a, 6, b); // after this block gets invoked, each element in the array is multiplied by 5, not 1000!
So, how is this ability of blocks useful? In high-level terms, it allows us to use "contextual information" in an inline block from the scope where the block is defined. It might not make sense to pass this information as a parameter to the block as it is only important in certain contexts, yet in those particular situations we do want to utilize this information in the implementation of our block.
Let's make this idea concrete with an example. Suppose you're working on an educational app about the countries of the world, and as part of the app you want to be able to rank (i.e. sort) countries with respect to different metrics, such as population, natural resources, GDP, or any complex formula you come up with combining data you have about these countries. There are different sorting algorithms available, but most of them work on the principle of being able to compare two entities and decide which one is greater or smaller.
Let's say you come up with the following helper method in one of your classes:
-(NSArray *) sortCountriesList:(NSArray *)listOfCountries withComparisonBlock: BOOL(^cmp) (Country *country1, Country *country2) { // Implementation of some sorting algorithm that will make several passes through listOfCountries // whatever the algorithm, it will perform several comparisons in each pass and do something based on the result of the comparison BOOL isGreater = comp(countryA, countryB); // block invoked, result is YES if countryA is "greater" than countryB based on the passed in block if (isGreater) // do something, such as swapping the countries in the array // ... }
Note that before this example we only talked about blocks taken or returned by functions, but it's pretty much the same with Objective-C methods as far as the block syntax and invocation is concerned.
Again, you should appreciate that the ability to "plug in code" to our method in the form of a comparison block gives us the power to sort the countries according to some formula we can specify on-the-spot. All well and good. Now suppose you realise it would also be great if we could also rank the countries from lowest rank to highest. This is how you could achieve this with blocks.
// Inside the body of some method belonging to the same class as the previous method bool sortInAscendingOrder = YES; // calling our sorting method: NSArray *sortedList = [self sortCountriesList:list withComparisonBlock:^(Country *c1, Country *c2) { if (c1.gdp > c2.gdp ) // comparing gdp for instance { if (sortInAscendingOrder) return YES; else return NO; } }];
And that's it! We used the flag
sortInAscendingOrder that carried contextual information about how we wanted the sort to be carried out. Because this variable was in the lexical scope of the block declaration, we were able to use its value within the block and not have to worry about the value changing before the block completes. We did not have to touch our
-sortCountriesList: withComparisonBlock: to add a
bool parameter to it, or touch its implementation at all! If we'd used ordinary functions, we would be writing and rewriting code all over the place!
Let's end this tutorial by applying all we've learned here with a cool iOS application of blocks!
If you've ever had to do any custom drawing in a UIView object, you know you have to subclass it and override its
drawRect: method where you write the drawing code. You probably find it to be a chore, create an entire subclass even if all you're wanting to do is draw a simple line, plus having to look into the implementation of
drawRect: whenever you need to be reminded of what the view draws. What a bore!
Why can't we do something like this instead:
[view drawWithBlock:^(/* parameters list */){ // drawing code for a line, or whatever }];
Well, with blocks, you can! Here,
view is an instance of our custom
UIView subclass endowed with the power of drawing with blocks. Note that while we're still having to subclass
UIView, we only have to do it once!
Let's plan ahead first. We'll call our subclass
BDView (BD for "block drawing"). Since drawing happens in
drawRect:, we want
BDView's
drawRect: to invoke our drawing block!. Two things to think of: (1) how does
BDView hold on to the block? (2) What would be the block's type?
Remember I mentioned way at the beginning that blocks are also like objects? Well, that means you can declare block properties!
@property (nonatomic, copy) drawingblock_t drawingBlock;
We haven't worked out what the block type should be, but after we do we'll
typedef it as
drawingblock_t!
Why have we used
copy for the storage semantics? Truthfully, in this introductory tutorial we haven't talked about what happens behind the scenes, memory-wise, when we create a block or return a block from a function. Luckily, ARC will do the right thing for us under most circumstances, saving us from having to worry about memory maangement, so for our purposes now it is enough to keep in mind that we want to use
copy so that a block gets moved to the heap and doesn't disappear once the scope in which it was created ends.
What about the type of the block? Recall that the
drawRect: method takes a
CGRect parameter that defines the area in which to draw, so our block should take that as a parameter. What else? Well, drawing requires the existence of a graphics context, and one is made available to us in
drawRect: as the "current graphics context". If we draw with UIKit classes such as
UIBezierPath, then those draw into the current graphics context implicitly. But if we choose to draw with the C-based Core Graphics API, then we need to pass in references to the graphics context to the drawing functions. Therefore, to be flexible and allow the caller to use Core Graphics functions, our block should also take a parameter of type
CGContextRef which in
BDView's
drawRect: implementation we pass in the current graphics context.
We're not interested in returning anything from our block. All it does is draw. Therefore, we can
typedef our drawing block's type as:
typedef void (^drawingblock_t)(CGContextRef, CGRect);
Now, in our
-drawWithBlock: method we'll set our
drawingBlock property to the passed in block and call the
UIView method
setNeedsDisplay which will trigger
drawRect:. In
drawRect:, we'll invoke our drawing block, passing it the current graphics context (returned by the
UIGraphicsGetCurrentContext() function) and the drawing rectangle as parameters. Here's the complete implementation:
//BDView.h #import <UIKit/UIKit.h> typedef void (^drawingblock_t)(CGContextRef, CGRect); @interface BDView : UIView - (void)drawWithBlock:(drawingblock_t) blk; @end
// BDView.m #import "BDView.h" @interface BDView () @property (nonatomic, copy) drawingblock_t drawingBlock; @end @implementation BDView - (void)drawWithBlock:(drawingblock_t)blk { self.drawingBlock = blk; // copy semantics [self setNeedsDisplay]; } - (void)drawRect:(CGRect)rect { if (self.drawingBlock) { self.drawingBlock(UIGraphicsGetCurrentContext(), rect); } } @end
So, if we had a view controller whose
view property was an instance of
BDView, we could call the following method from
viewDidLoad (say) to draw a line that ran diagonally across the screen from the top-left corner to the bottom right corner:
- (void)viewDidLoad { [super viewDidLoad]; BDView *view = (BDView *)self.view; [view drawWithBlock:^(CGContextRef context, CGRect rect){ CGContextMoveToPoint(context, rect.origin.x, rect.origin.y); CGContextAddLineToPoint(context, rect.origin.x + rect.size.width, rect.origin.y + rect.size.height); CGContextStrokePath(context); }]; }
Brilliant!
More to Learn
In this introductory tutorial on blocks, we've admittedly left out a few things. I'll mention a couple of these here, so you can look into them in more advanced articles and references:
- We haven't talked much about memory semantics of blocks. Although automatic reference counting relieves us of a lot of burden in this regard, in certain scenarios more understanding is required.
- The
__blockspecifier that allows a block to modify a variable in its lexical scope.
Conclusion
In this tutorial, we had a leisurely introduction to blocks where we paid special attention to the syntax of block declaration, assignment, and invocation. We tried to leverage what most developers already know from C regarding variables, pointers, objects, and functions. We ended with an interesting practical application of blocks that should get the wheels in your head turning and get you excited about using blocks effectively in your own code. Proper use of blocks can improve code understandability and sometimes offer very elegant solutions to problems.
I also recommend you take a look at BlocksKit, which includes many interesting utilities of blocks that can make your life as an iOS developer easier. Happy coding!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/understanding-objective-c-blocks--mobile-14319 | CC-MAIN-2016-44 | refinedweb | 5,801 | 61.06 |
Re: menu controls on 2.0 ..
- From: jason@xxxxxxxxxxxxx
- Date: 26 Sep 2006 12:38:31 -0700
Steve, nice site.
Off topic, regarding the NumericTextbox example on the site, I'm kinda
noob, how do I implement that?
I tried adding in a new class and importing the new namespace into the
codebehind and then doing this on the aspx page.
<asp:textbox
but 2.0 does not like OnKeyPress
I also tried,
<asp:NumericTextbox
but that did not work either.
Perhaps I needed to add it as a new web component or something.. it was
not clear to me, maybe I missed something in the docs.
Thanks.
Steve C. Orr [MVP, MCSD] wrote:
There are a variety of menu controls out there that you could use to spiff
up a web site.
Here are some free ones:
--
I hope this helps,
Steve C. Orr
MCSD, MVP, CSM
<jason@xxxxxxxxxxxxx> wrote in message
news:1159221453.993867.72810@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
As I surf the net I see so many cool drop down menus.. I'm wondering
are most of those non .net or third party? Perhaps I just don't know
what I'm doing in vs.net, but the delivered menus do seem kinda lame.
I've read there is no way to remove that little arrow when there are
sub menu items? True?
Also, how do build a horizontal menu where individual items have a
border around them and transparent or blank spaces between them? do I
have to insert my own image to do this? I hope not.
.
- Follow-Ups:
- Re: menu controls on 2.0 ..
- From: Mark Rae
- References:
- menu controls on 2.0 ..
- From: jason
- Re: menu controls on 2.0 ..
- From: Steve C. Orr [MVP, MCSD]
- Prev by Date: Re: Case Solved!
- Next by Date: Formview Nested in Repeater?
- Previous by thread: Re: menu controls on 2.0 ..
- Next by thread: Re: menu controls on 2.0 ..
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-09/msg03584.html | crawl-002 | refinedweb | 323 | 86.1 |
Scheduling APIs
RTOS APIs
The RTOS APIs handle creation and destruction of threads in Arm Mbed OS, as well as mechanisms for safe interthread communication. Threads are a core component of Mbed OS (even your
main function starts in a thread of its own), so understanding how to work with them is an important part of developing applications for Mbed OS.
- ConditionVariable: The ConditionVariable class provides a mechanism to safely wait for or signal a single state change. You cannot call ConditionVariable functions from ISR context.
- EventFlags: An event channel that provides a generic way of notifying other threads about conditions or events. You can call some EventFlags functions from ISR context, and each EventFlags object can support up to 31 flags.
- IdleLoop: Background system thread, executed when no other threads are ready to run.
- Kernel interface functions: Kernel namespace implements functions to control or read RTOS information, such as tick count.
- Mail: The API that provides a queue combined with a memory pool for allocating messages.
- Mutex: The class used to synchronize the execution of threads.
- Queue: The class that allows you to queue pointers to data from producer threads to consumer threads.
- Semaphore: The class that manages thread access to a pool of shared resources of a certain type.
- ThisThread: The class with which you can control the current thread.
- Thread: The class that allows defining, creating and controlling parallel tasks.
Event handling APIs
If you are using the bare metal profile, the only APIs of the RTOS group you can use are those that do not rely on the RTX:
- Event: The queue to store events, extract them and execute them later.
- EventQueue: The class that provides a flexible queue for scheduling events.
- UserAllocatedEvent: The class that provides APIs to create and configure static events
Note that you can also use these APIs while using the full, RTOS-based profile. | https://os.mbed.com/docs/mbed-os/v6.2/apis/scheduling-rtos-and-event-handling.html | CC-MAIN-2020-34 | refinedweb | 313 | 61.26 |
Opened 11 years ago
Closed 11 years ago
#1183 closed Bug (Fixed)
FileSetTime can't set filetime
Description (last modified by Valik)
FileSetTime can't set filetime of a file. It seems that AutoIt can't find the file when located in a subdirectory.
It is an issue of the beta version. In AutoIt v3.3.0.0 everything
works fine.
Script used for testing:
$sFilename="C:\Test\file.zip" If FileExists($sFilename) Then MsgBox(4096, "Test", FileGetTime($sFilename, 1, 1)) MsgBox(4096, "Test", FileSetTime($sFilename, "20031101", 0, 0)) endif $sFilename="C:\Test\NextLevel\file.zip" If FileExists($sFilename) Then MsgBox(4096, "Test", FileGetTime($sFilename, 1, 1)) MsgBox(4096, "Test", FileSetTime($sFilename, "20031101", 0, 0)) endif
Results: correct Timestamp, 1, correct Timestamp, 0
See similar forumpost:
Attachments (0)
Change History (9)
comment:1 follow-up: ↓ 2 Changed 11 years ago by Jpm
- Resolution set to Works For Me
- Status changed from new to closed
comment:2 in reply to: ↑ 1 Changed 11 years ago by anonymous
comment:3 Changed 11 years ago by Valik
comment:4 Changed 11 years ago by Valik
- Resolution Works For Me deleted
- Status changed from closed to reopened
I can confirm the behavior on Windows XP SP3 with the following script:
Local Const $sRootDir = "C:\Test" Local Const $sFile1 = $sRootDir & "\file.zip" Local Const $sFile2 = $sRootDir & "\NextLevel\file.zip" Local Const $sTime = "200311010000" If Not TouchFile($sFile1) Then Exit -1 If Not TouchFile($sFile2) Then Exit -2 SetTime($sFile1, $sTime) SetTime($sFile2, $sTime) DirRemove($sRootDir, True) Func TouchFile($sFile) Local $hFile = FileOpen($sFile, 10) If $hFile = -1 Then ConsoleWrite("Unable to open file: " & $sFile) Return False EndIf FileWrite($hFile, "Delete this file if present.") FileClose($hFile) Return True EndFunc ; TouchFile() Func SetTime($sFile, $sTime) ConsoleWrite("Time for: " & $sFile & @CRLF) Local $sResult = FileGetTime($sFile, 0, 1) ConsoleWrite(@TAB & "Before: " & $sResult & " (" & @error & ")" & @CRLF) ConsoleWrite(@TAB & " Result: " & FileSetTime($sFile, $sTime, 0, 0) & @CRLF) $sResult = FileGetTime($sFile, 0, 1) ConsoleWrite(@TAB & "After: " & $sResult & " (" & @error & ")" & @CRLF) EndFunc ; SetTime()
comment:5 Changed 11 years ago by Jpm
I can reconfirm that's working under Vista/SP1 !!!
comment:6 follow-up: ↓ 7 Changed 11 years ago by Jpm
running under XP/SP2 under vmware is OK too
>Running:(3.3.0.0):C:\Program Files\AutoIt3\autoit3.exe "C:\Users\Jpm\Desktop\#1183 FilesetTime subdir.au3" Time for: C:\Test\file.zip Before: 20090920105937 (0) Result: 1 After: 20031101000037 (0) Time for: C:\Test\NextLevel\file.zip Before: 20090920105937 (0) Result: 1 After: 20031101000037 (0) +>10:59:38 AutoIT3.exe ended.rc:0
comment:7 in reply to: ↑ 6 Changed 11 years ago by Jos
>Running:(3.3.0.0):C:\Program Files\AutoIt3\autoit3.exe "C:\Users\Jpm\Desktop\#1183 FilesetTime subdir.au3"
JP, Have you also tried with the Latest Beta because the OP stated that 3.3.0.0 was not showing the issue?
For me it works fine with 3.3.0.0 and fails with 3.3.1.1 on the second datechange.
Jos
comment:8 Changed 11 years ago by Jpm
Jos,
True I was checking only under 3.3.0.0.
But for what I don't understand the tentative 3.3.1.2 is working OK.
comment:9 Changed 11 years ago by Valik
- Milestone set to 3.3.1.2
- Resolution set to Fixed
- Status changed from reopened to closed.
Guidelines for posting comments:
- You cannot re-open a ticket but you may still leave a comment if you have additional information to add.
- In-depth discussions should take place on the forum.
For more information see the full version of the ticket guidelines here.
That's working fine under my Vista/SP1 system | https://www.autoitscript.com/trac/autoit/ticket/1183 | CC-MAIN-2020-24 | refinedweb | 611 | 56.05 |
A Primer on Python Metaclasses
Most readers are aware that Python is an object-oriented language. By
object-oriented, we mean that Python can define classes, which bundle
data and functionality into one entity. For example, we may
create a class
IntContainer which stores an integer and allows
certain operations to be performed:
class IntContainer(object): def __init__(self, i): self.i = int(i) def add_one(self): self.i += 1
ic = IntContainer(2) ic.add_one() print(ic.i)
3
This is a bit of a silly example, but shows the fundamental nature of classes: their ability to bundle data and operations into a single object, which leads to cleaner, more manageable, and more adaptable code. Additionally, classes can inherit properties from parents and add or specialize attributes and methods. This object-oriented approach to programming can be very intuitive and powerful.
What many do not realize, though, is that quite literally everything in the Python language is an object.
For example, integers are simply instances of
the built-in
int type:
print type(1)
<type 'int'>
To emphasize that the
int type really is an object, let's derive from it
and specialize the
__add__ method (which is the machinery underneath
the
+ operator):
(Note: We'll used the
super syntax to call methods from the parent class: if you're unfamiliar with this, take a look at
this StackOverflow question).
class MyInt(int): def __add__(self, other): print "specializing addition" return super(MyInt, self).__add__(other) i = MyInt(2) print(i + 2)
specializing addition 4
Using the
+ operator on our derived type goes through our
__add__
method, as expected.
We see that
int really is an object that can be subclassed and extended
just like user-defined classes. The same is true
of
floats,
lists,
tuples, and everything else in the Python
language. They're all objects.
We said above that everything in python is an object: it turns out that this is true of classes themselves. Let's look at an example.
We'll start by defining a class that does nothing
class DoNothing(object): pass
If we instantiate this, we can use the
type operator to see the type
of object that it is:
d = DoNothing() type(d)
__main__.DoNothing
We see that our variable
d is an instance of the class
__main__.DoNothing.
We can do this similarly for built-in types:
L = [1, 2, 3] type(L)
list
A list is, as you may expect, an object of type
list.
But let's take this a step further: what is the type
of
DoNothing itself?
type(DoNothing)
type
The type of
DoNothing is
type. This tells us that the class
DoNothing is itself an object, and that object is of type
type.
It turns out that this is the same for built-in datatypes:
type(tuple), type(list), type(int), type(float)
(type, type, type, type)
What this shows is that in Python, classes are objects, and they are objects of
type
type.
type is a metaclass: a class which instantiates classes.
All new-style classes
in Python are instances of the
type metaclass, including
type itself:
type(type)
type
Yes, you read that correctly:
the type of
type is
type. In other words,
type is an
instance of itself. This sort of circularity cannot (to my knowledge)
be duplicated in pure Python, and the behavior is created through a bit of a
hack at the implementation level of Python.
Now that we've stepped back and considered the fact that classes in Python
are simply objects like everything else, we can think about what is known
as metaprogramming. You're probably used to creating functions which
return objects. We can think of these functions as an object factory: they
take some arguments, create an object, and return it. Here is a simple example
of a function which creates an
int object:
def int_factory(s): i = int(s) return i i = int_factory('100') print(i)
100
This is overly-simplistic, but any function you write in the course
of a normal program can be boiled down to this: take some arguments,
do some operations, and create & return an object.
With the above discussion in mind, though, there's nothing to stop
us from creating an object of type
type (that is, a class),
and returning that instead -- this is a metafunction:
def class_factory(): class Foo(object): pass return Foo F = class_factory() f = F() print(type(f))
<class '__main__.Foo'>
Just as the function
int_factory constructs an returns an instance of
int,
the function
class_factory constructs and returns an instance of
type:
that is, a class.
But the above construction is a bit awkward: especially if we were going to do some
more complicated logic when constructing
Foo, it would be nice to avoid all the
nested indentations and define the class in a more dynamic way.
We can accomplish this by instantiating
Foo from
type directly:
def class_factory(): return type('Foo', (), {}) F = class_factory() f = F() print(type(f))
<class '__main__.Foo'>
In fact, the construct
class MyClass(object): pass
is identical to the construct
MyClass = type('MyClass', (), {})
MyClass is an instance of type
type, and that can be seen
explicitly in the second version of the definition.
A potential confusion arises from the more common use of
type as
a function to determine the type of an object, but you should strive
to separate these two uses of the keyword in your mind:
here
type is a class (more precisely, a metaclass),
and
MyClass is an instance of
type.
The arguments to the
type constructor are:
nameis a string giving the name of the class to be constructed
basesis a tuple giving the parent classes of the class to be constructed
dctis a dictionary of the attributes and methods of the class to be constructed
So, for example, the following two pieces of code have identical results:
class Foo(object): i = 4 class Bar(Foo): def get_i(self): return self.i b = Bar() print(b.get_i())
4
Foo = type('Foo', (), dict(i=4)) Bar = type('Bar', (Foo,), dict(get_i = lambda self: self.i)) b = Bar() print(b.get_i())
4
This perhaps seems a bit over-complicated in the case of this contrived example, but it can be very powerful as a means of dynamically creating new classes on-the-fly.
Now things get really fun. Just as we can inherit from and extend a class we've
created, we can also inherit from and extend the
type metaclass, and create
custom behavior in our metaclass.
Let's use a simple example where we want to create an API in which the user can create a set of interfaces which contain a file object. Each interface should have a unique string ID, and contain an open file object. The user could then write specialized methods to accomplish certain tasks. There are certainly good ways to do this without delving into metaclasses, but such a simple example will (hopefully) elucidate what's going on.
First we'll create our interface meta class, deriving from
type:
class InterfaceMeta(type): def __new__(cls, name, parents, dct): # create a class_id if it's not specified if 'class_id' not in dct: dct['class_id'] = name.lower() # open the specified file for writing if 'file' in dct: filename = dct['file'] dct['file'] = open(filename, 'w') # we need to call type.__new__ to complete the initialization return super(InterfaceMeta, cls).__new__(cls, name, parents, dct)
Notice that we've modified the input dictionary (the attributes and methods of the class) to add a class id if it's not present, and to replace the filename with a file object pointing to that file name.
Now we'll use our
InterfaceMeta class to construct and instantiate
an Interface object:
Interface = InterfaceMeta('Interface', (), dict(file='tmp.txt')) print(Interface.class_id) print(Interface.file)
interface <open file 'tmp.txt', mode 'w' at 0x21b8810>
This behaves as we'd expect: the
class_id class variable is created,
and the
file class variable is replaced with an open file object.
Still, the creation of the
Interface class
using
InterfaceMeta directly is a bit clunky and difficult to read.
This is where
__metaclass__ comes in
and steals the show. We can accomplish the same thing by
defining
Interface this way:
class Interface(object): __metaclass__ = InterfaceMeta file = 'tmp.txt' print(Interface.class_id) print(Interface.file)
interface <open file 'tmp.txt', mode 'w' at 0x21b8ae0>
by defining the
__metaclass__ attribute of the class, we've told the
class that it should be constructed using
InterfaceMeta rather than
using
type. To make this more definite, observe that the type of
Interface is now
InterfaceMeta:
type(Interface)
__main__.InterfaceMeta
Furthermore, any class derived from
Interface will now be constructed
using the same metaclass:
class UserInterface(Interface): file = 'foo.txt' print(UserInterface.file) print(UserInterface.class_id)
<open file 'foo.txt', mode 'w' at 0x21b8c00> userinterface
This simple example shows how metaclasses can be used to create powerful and flexible APIs for projects. For example, the Django project makes use of these sorts of constructions to allow concise declarations of very powerful extensions to their basic classes.
Another possible use of a metaclass is to automatically register all subclasses derived from a given base class. For example, you may have a basic interface to a database and wish for the user to be able to define their own interfaces, which are automatically stored in a master registry.
You might proceed this way:
class DBInterfaceMeta(type): # we use __init__ rather than __new__ here because we want # to modify attributes of the class *after* they have been # created def __init__(cls, name, bases, dct): if not hasattr(cls, 'registry'): # this is the base class. Create an empty registry cls.registry = {} else: # this is a derived class. Add cls to the registry interface_id = name.lower() cls.registry[interface_id] = cls super(DBInterfaceMeta, cls).__init__(name, bases, dct)
Our metaclass simply adds a
registry dictionary if it's not already
present, and adds the new class to the registry if the registry is already
there. Let's see how this works:
class DBInterface(object): __metaclass__ = DBInterfaceMeta print(DBInterface.registry)
{}
Now let's create some subclasses, and double-check that they're added to the registry:
class FirstInterface(DBInterface): pass class SecondInterface(DBInterface): pass class SecondInterfaceModified(SecondInterface): pass print(DBInterface.registry)
{'firstinterface': <class '__main__.FirstInterface'>, 'secondinterface': <class '__main__.SecondInterface'>, 'secondinterfacemodified': <class '__main__.SecondInterfaceModified'>}
It works as expected! This could be used in conjunction with
a function that chooses implementations from the registry,
and any user-defined
Interface-derived objects would be
automatically accounted for, without the user having to remember
to manually register the new types.
I've gone through some examples of what metaclasses are, and some ideas about how they might be used to create very powerful and flexible APIs. Although metaclasses are in the background of everything you do in Python, the average coder rarely has to think about them.
But the question remains: when should you think about using custom metaclasses in your project? It's a complicated question, but there's a quotation floating around the web that addresses it quite succinctly:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don’t (the people who actually need them know with certainty that they need them, and don’t need an explanation about why).
– Tim Peters
In a way, this is a very unsatisfying answer: it's a bit reminiscent of the wistful and cliched explanation of the border between attraction and love: "well, you just... know!"
But I think Tim is right: in general, I've found that most tasks in Python that can be accomplished through use of custom metaclasses can also be accomplished more cleanly and with more clarity by other means. As programmers, we should always be careful to avoid being clever for the sake of cleverness alone, though it is admittedly an ever-present temptation.
I personally spent six years doing science with Python, writing code nearly on a daily basis, before I found a problem for which metaclasses were the natural solution. And it turns out Tim was right:
I just knew.
This post was written entirely in an IPython Notebook: the notebook file is available for download here. For more information on blogging with notebooks in octopress, see my previous post on the subject. | https://jakevdp.github.io/blog/2012/12/01/a-primer-on-python-metaclasses/ | CC-MAIN-2018-39 | refinedweb | 2,063 | 50.67 |
{{{ #!html Form submission }}} '''Inner links''' {{{ #!html Part 1 Ajax basic form submission, Django server answers Ajax call. Part 2 Handling the form when JavaScript is deactivated. Part 3 Fixing the frozen fading when user resend the form without waiting for the first fading to end. }}} {{{ #!html Ajax basic form submission, Django server answers Ajax call. }}} '''Ajax form submission using Dojo for the client side javaScript toolkit, and simpleJson for the client/server communication.''' This will take you through all the steps required to get started and to develop a handy form submission without reload. == What do you need == - Django - Dojo (v0.3) [] an open source javascript toolkit. - SimpleJson (v1.3) [] used for javascript <-> python communication. == What do we want to achieve == I will use the model of my current website, a recipe one. When a registered user sees the details of a recipe, the user can rate (mark) it or update their rating (mark) by selecting the mark with a small drop down list. But when the user clicks ok the whole page reloads. We will use some Ajax to make it transparent and fancy effect to show the change to the user. we can add a fading status message like "Your rating has been updated". The select box proposes a rating from 1 to 5, it is actually a form which is sent to the server via POST, which in django links to a method in the view.py : ''def details(request)'' which does the innerwork for the details page generation. == Installing Json == get SimpleJson from svn: {{{ svn co json cd json sudo python setup.py install }}} or get the latest released version from the [ CheeseShop]: {{{ sudo python easy_install simplejson }}} update: changeset [ 3232] now offers v1.3 of simplejson in django's utils, so you do not need to download and install simplejson -- just use django.utils.simplejson. {{{ from django.utils import simplejson #in replace of import simple_json }}} == Django part == '''view.py''' {{{ #!python def details(request): [more stuff] if request.POST: # Get all the mark for this recipe list_mark = Mark.objects.values('mark').filter(recipe__pk=r.id) # loop to get the total total = 0 for element in list_mark: total+= element['mark'] # round it total = round((float(total) / len(list_mark)),1) # update the total r.total_mark= total # save the user mark r.save() # Now the intersting part for this tut import simple_json # it was a french string, if we dont't use unicode # the result at the output of json in Dojo is wrong. message = unicode( message, "utf-8" ) # jsonList = simple_json.dumps((my_mark, total, form_message ,message)) return HttpResponse(jsonList) [more stuff, if not POST return render_to_response('recettes/details.html' ...] }}} '''url.py''' Just normal url.py, remember the path which will point to the wanted method. {{{ #!python from django.conf.urls.defaults import * urlpatterns = patterns('', [...more...] (r'^recettes/(?P[-\w]+)/(?P[-\w]+)/$', 'cefinban.recettes.views.details'), [...more...] }}} == Html template with Dojo javascript == '''Dojo use''' {{{ {% load i18n %} {% extends "base.html" %} {% block script %} Score: {{ r.total_mark }}/5{% endif %} [....] {% if not user.is_anonymous %} {% ifnotequal user.id r.owner_id %} {{mark_message}} 1 2 3 4 5 Notez {{ mark_status }} {% endifnotequal %} {% endif %} }}} And, voila. To have a demo use guest as login, guest as password here [] or if not working here []. Go to index and pick a recipe, update the rating. You can also have a look at the screenshot here : [] == Dreamhost and Simplejson == If you are using dreamhost for hosting please be aware that simplejson is not installed. Instead you will have to install the source of simplejson in a folder in your home directory eg /proz/json/simple_json The simple_json directory contains the required __init__.py for it to be loaded as a python module. Then in your ~/.bash_profile add the directory to your python path like below. {{{ export PYTHONPATH=$PYTHONPATH:$HOME/django/django_src:$HOME/django/django_projects:$HOME/progz/json }}} That will allow yout to use simpl_json in python shell. But '''dont forget''' to change django.fcgi ! Add {{{ sys.path +=['/home/coulix/progz/json'] }}} log out/in and try import simple_json (or simplejson depends on what source you picked) {{{ #!html Handling the form when JavaScript is deactivated. }}} If a user has deactivated his browser's javascript support, or is using a text mode browser, we need a way of making the previous rating button submit the rating to the server which should this time return an html template instead of data to the Ajax call. == Updating the form HTML (details.html template) == This time we put a submit type input inside the form instead of the button type in part 1. type="submit" as indicates its name, submits the form to the server, we will need a way of stopping this behavior using javaScript. {{{ {{mark_message}} [...] }}} Now, how can we tell our details method in view.py to know if it comes from a normal submit request or an Ajax request ? Two solutions, The '''first''' uses a hidden variable in form.html and an added content element in the JS part. {{{ function sendForm() { dojo.byId("mark_status").innerHTML = "Loading ..."; dojo.io.bind({ url: '.', handler: sendFormCallback, content: {"js", "true"}, formNode: dojo.byId('myForm') }); } [...] [...] }}} With this, in our django method in view.py we can test for request["js"]=="true" it would means that Js is activatd and we return the appropriate answer to the Ajax request. The '''second''' uses the url to pass a new variable ajax_or_not to the detail method. {{{ #!python def details(request, r_slug, r_cat, ajax_or_not=None): [...] }}} We modify the url.py to accept this new parameter. {{{ #!python (r'^recettes/(?P[-\w]+)/(?P[-\w]+)/(?P.*)$', 'cefinban.recettes.views.details'), }}} The dojo binding needs to append a variable to the original document url, to make ajax_or_not not None. {{{ function sendForm() { dojo.byId("mark_status").innerHTML = "Loading ..."; dojo.io.bind({ url: './ajax/', handler: sendFormCallback, formNode: dojo.byId('myForm') }); } }}} == New details method in view.py == We just need to test for the existence of ajax_or_not {{{ #!python def details(request, r_slug, r_cat, ajax_or_not=None): [...] if request.POST: [...] same as part 1 # except here we check ajax_or_not if ajax_or_not: # use json for python js exchange # it was a french string, if we dont't use unicode # the result at the output of json in Dojo is wrong. message = unicode( message, "utf-8" ) jsonList = simple_json.dumps((my_mark, total, form_message ,message)) return HttpResponse(jsonList) return render_to_response('recettes/details.html', {'r': r, 'section':'index', 'mark_status':message , 'mark_message':form_message, 'my_mark': my_mark}, context_instance=RequestContext(request),) }}} {{{ #!html Fixing the frozen fading when user resend the form without waiting for the first fading to end. }}} If you haven't realised yet, if two or more calls are sent to the javascript function sendForm in a short time, the fading effect of the current sendForm Callback method might get stuck / froze / bugged. We need a way of avoiding this by desactivating the connection between the submit button and the sendForm method while the fading '''animation''' is active. Thanks Dojo there is such things ! in two lines of code its done. {{{ function sendFormCallback(type, data, evt) { [...as before ...] // and the fancy fading effect // first disconnect the listener ! dojo.event.disconnect(sendFormButton, 'onclick', 'sendForm'); // assign our fading effect to an anim variable. var anim = dojo.lfx.html.highlight("mark_status", [255, 151, 58], 700).play(300); // When this anim is finish, reconnect dojo.event.connect(anim, "onEnd", function() { dojo.event.connect(sendFormButton, 'onclick', 'sendForm'); }); } }}} how nice is this ! Careful, while talking about how to fix the problem using onEnd in Dojo IRC chanel, they realised play() method didnt behave properly and updated it to react to onEnd and such. su you need at least revision '''4286'''. Update your dojo source {{{ svn co dojo }}} '''Note''' It might be completely wrong. More questions / complaints: coulix@gmail.com | https://code.djangoproject.com/wiki/AjaxDojoFormSub?format=txt | CC-MAIN-2015-35 | refinedweb | 1,271 | 58.99 |
Type conversion and type casting are the same in C#. It is converting one type of data to another type. In C#, type casting has two forms −
Implicit type conversion − These conversions are performed by C# in a type-safe manner. For example, are conversions from smaller to larger integral types and conversions from derived classes to base classes.
Explicit type conversion − These conversions are done explicitly by users using the pre-defined functions. Explicit conversions require a cast operator.
The following is an example showing how to cast double to int −
using System; namespace Demo { class Program { static void Main(string[] args) { double d = 9322.46; int i; // cast double to int i = (int)d; Console.WriteLine(i); Console.ReadKey(); } } }
9322 | https://www.tutorialspoint.com/What-is-the-difference-between-type-conversion-and-type-casting-in-Chash | CC-MAIN-2021-49 | refinedweb | 122 | 56.96 |
27 February 2002
These are the minutes of the W3C Technical Plenary, held on 27 February 2002 at the Royal Hôtel Casino in Cannes Mandelieu, France. This public event consisted of five presentations by W3C participants, followed by an open "Town Hall" session. This was the second Technical Plenary; (the first took place in March 2001).
In addition to the one-day Technical Plenary event, over 20 W3C Working Groups and Interest Groups held face-to-face meetings over four days at the same location.
Stephen Watt (Math Working Group, University of Western Ontario): What is the current resources allocation, say for our Working Group?
Steve Bratt: Members are able to refer to the W3C Effort Table (Member only link).
Janet Daly: MathML is in "life after Recommendation." As such, the Math Working Group has the mandate to produce additional materials (schemas...), which may not require as much W3C Team resources as one needs before Recommendation. So, the amount of resources spent on a Working Group depends on their state. Other Working Groups require more Team resources. Math Working Group Team support is provided by Max Froumentin.
Paul Cotton (XML Query Working Group Chair, TAG, Microsoft): The amount of Team resources required by a Working Group until Recommendation varies. For the moment, we have a constant % in the week from the Team. When a document goes to Last Call, you need more Team resources. So I wonder how realistic are our charters. Any thoughts?
Steve Bratt: It is an area we can explore, particularly in evaluating future charters, and to provide better future planning.
Moderator: Paul Cotton (XML Query Working Group Chair, TAG, Microsoft). Panelists: Dan Connolly (Semantic Web Activity, TAG, W3C), Chris Lilley (Graphics Activity lead, TAG, W3C), David Orchard (XML Protocol Working Group, Web Services Architecture Working Group, BEA), Norm Walsh (XML Core Working Group, XSL Working Group, TAG, Sun), Stuart Williams (XML Protocol Working Group, TAG, Hewlett-Packard).
Michael Rys (XML Query Working Group, Microsoft): If a Working Group brings issues it is having trouble resolving to the TAG, won't the process grind to a halt? What is the expected impact of TAG work on your average Working Group?
Paul Cotton: In several cases so far, the TAG has not been the first locus of resolution for a problem, but another group within W3C has been more appropriate. I've redirected questions to more appropriate forums. My observation in the first weeks of the TAG is that groups had a number of dormant issues they were waiting to raise.
Note that the TAG charter does state that the TAG's mission includes resolving issues involving general Web architectural issues brought to before it. TAG resolutions will be binding (as are other architectural Recommendations such as the Internationalization Activity's Character Model in review), but after review (on the Recommendation track). For example: Register your MIME type before your format specification becomes a Recommendation. Dissemination of our findings is important; we are learning better how to do this with practice.
Roland Merrick (XForms Working Group, IBM): We want to reuse specifications, but this creates lots of interdependencies, and as you move up the stack using a modular approach, this drags a lot of weight with it. What does it mean to implement against modular specs? How do we organize modules? How do they work together?
David Orchard: This is a buy v. build question. Modularity is an important design principle, notably from a feedback perspective. So far in W3C, I don't think we've done as well as we'd like with respect to using modularity. Perhaps the TAG should look at different factorings of specifications. The TAG has looked at modularity of XML, but I don't think TAG will undertake this itself. The TAG doesn't have a process yet to determine what we think from a modularity perspective. I'd like more input. We don't want the TAG to be seen as a "higher power with dictatorial authority." Rather, we hope to persuade and build consensus.
Noah Mendelsohn (XML Protocol Working Group, XML Schema Working Group, IBM): It's important for the TAG in the early days to establish the right relationship with other groups. I would strongly urge the TAG to foster consensus rather than to provide answers in the form of "The TAG decided this or that." Help people find answers (e.g., by helping them understand costs and benefits and trade-offs in modular design). I also think that education is important (e.g., in the face of misunderstanding about URIs).
Paul Cotton: Like other W3C Working Groups, the TAG makes decisions, gets consensus, and is accountable for those decisions. The architecture Recommendations we will produce will follow the Recommendation track process. How do we lead by example? Partly by being up here (before you) today, but also by reporting and keeping people informed.
Noah Mendelsohn: In my mind, TAG would be more effective if it were not like every other Working Group; instead, consider everyone to be an extended participant in the group instead. Do whatever you can to let people know when they should be involved, and get them involved.
Rigo Wenning (Privacy Activity lead, W3C): Please add "Security" and "Privacy" to Dan Connolly's list of principles.
Chris Lilley: That list is not exhaustive.
Dan Connolly: I note that other groups are producing architectural Recommendations (e.g., Internationalization, Web Accessibility Initiative guidelines).
David Orchard: Documenting trade-offs is important.
David Orchard: I'm wary to take on security issues in the TAG since we don't have the expertise. When issues arise, we will turn to the most appropriate group for help in dealing with the issue.
Jim Larson (Voice Browser Working Group Chair, Intel): So far, the TAG seems mostly reactive - people bring problems to TAG. What about proactive work? Will the TAG work on predicting upcoming problems?
Dan Connolly: Collecting pieces will help us spot future problems. I hope that shared understanding will, in general, help alleviate problems.
Jim Larson: I didn't mean spotting problems as much as providing guidance towards future technologies, etc.
Dan Connolly: There have been comments that we aren't doing enough top-down work.
Chris Lilley: Our charter says:
."
Paul Cotton: Yes, our charter includes requirements to write. But I think that it will take the TAG a while to get to where you'd like us to be.
Jeremy Carroll (Hewlett-Packard): Are there plans to take the TAG's architectural Recommendations to Candidate Recommendation? What would that mean?
Paul Cotton: That's a good question for us to take back and think about.
Stuart Williams: How about: we need to find at least two people who find our output useful? The output has to meet a need, solve problems, be useful.
Dave Hollander (XML Schema/XML Coordination Group co-Chair, Contivo): I heard outpour for modularity. One unintended consequence of modularity is a loss of interoperability. Please consider that in your discussions.
T. V. Raman (Voice Browser and Multimodal Working Groups, IBM): The Web took off because authors could publish without worrying about which browsers people would be using. That principle seems to have been lost. I'd like to ensure that we are building a Web where you don't have to ask the user what browser he or she is using.
(There was general support and applause for T. V.'s comments.)
Dan Connolly: Hear! Hear! I'd like an hour to talk about this! The Web is breaking because people assume everyone has only one piece of software.
Tantek Çelik (CSS Working Group, Microsoft): On Dan Connolly's list of principles, where is "design for ease of authoring"?
Dan Connolly: This is related to the principle of least power.
Tantek Çelik: It's not clear to me that cleanest architecture gives you the easiest authoring environment. The Web grew since it was easy to author a document for the Web.
Paul Biron (XML Schema Working Group, Health Level Seven/Kaiser Permanente): I suggest that one way the TAG can be more proactive and improve dissemination of ideas is to bring Chairs together to work more frequently. Have the TAG do new Chair orientation.
(There was general support for the TAG sending representatives to Working Groups for education and outreach.)
Janet Daly noted Team resources (Ian Jacobs) have been allocated to the TAG to promote education and outreach.
Daniel Veillard (XML Core Working Group invited expert): I would like the TAG to clearly document mistaken choices so that we can learn from them, too. The TAG represents technical expertise. W3C is mature enough to know what works and what doesn't work. There may be a number of reasons why an effort does not succeed: Lack of Member interest, doesn't quite fit in the Web architecture, didn't appear at the right time....
David Orchard: Taking the XML Fragment example - this might be a case of a technology being too early. E.g., the problem we were trying to solve may not have been common at the time. I'm not sure how we could have prevented that particular problem from occurring.
Resources:
Moderator: Eric Miller (Semantic Web Activity lead, W3C). Panelists: Brian McBride (RDF Core Working Group co-Chair, HP Labs), Dan Connolly (Web Ontology Working Group Team contact, W3C), Ralph Swick (Semantic Web Advanced Development Initiative, W3C)
Martin Dürst (Internationalization Activity lead, W3C): As a Working Group, we invest a lot of time in tracking Last Call comments. ... Are there any tools for this?
Dan Connolly: Ian, Tim and I are trying to automate this in one group.... State of the art is: no tools right now.
Martin Dürst: I would like to help.
Brian McBride: RDF Core is coming to Last Call soon, and if I can find time I'll work on it also.
Andrew Hunt (SpeechWorks): Have you looked at commercial or publicly available products for issues tracking? Such as Bugzilla? ... Suggestion: use CVS repository for shared specifications. ... Jigedit doesn't help as much as I want.
Ted Guild (W3C): Bugzilla is not specific enough for W3C.... We started with the TAG Working Group to look at it.... but would like input from Working Groups.
Dan Connolly: Jigedit uses CVS underneath and uses people resources to maintain Jigedit accounts.... but it is available.... There is more info in the Guide (Member only link) about using it.
Steven Pemberton (W3C/CWI): How do you decide which tools to use?
Dan Connolly: Decided mostly in Chairs meetings. No formal process. How many of you have your own home-grown issues tracking system?
(Many attendees indicate that they use their own issue tracking system.)
Steven Pemberton: I suggest that W3C address issues tracking tool need.
Eric Miller: I suggest a birds-of-a-feather (BOF) table on this at lunch. CSS spec and Bugzilla creating an annotated version of the spec with direct links to bugs.
Janet Daly: But remember: the Semantic Web group is not SysTeam II.
Ian Jacobs (W3C): I want to use Jena tomorrow. How can I use it?
Brian McBride: Jena is not really born yet.... I have to recompile each time to use it! ... If there is demand, we will try to make it available.
(Several hands are raised indicating interest.)
T. V. Raman: Archived mailing lists are extremely valuable.... but it's painful to screen-scrape it out of HTML.... Can it be available in other ways? ... More specifically, can we have IMAP access to the archive?
Gerald Oskoboiny (W3C): We discussed it before. Decided to make monthly mailboxes available i.e., permanent URLs for each list.
Colas Nahaboo (ILOG): Suggestion: Use TWiki.
(Session adjourned for lunch.)
Moderator: David Fallside (XML Protocol Working Group Chair, Web Services Coordination Group Chair, IBM). Panelists: Jonathan Marsh (Web Services Description Working Group Chair, Microsoft), Jeff Mischkinsky (Web Services Architecture Working Group participant, Oracle), Philippe Le Hégaret (Web Services Description Working Group Team contact, W3C)
Henrik Frystyk Nielsen (XML Protocol Working Group, Microsoft): I have a problem with the definition of a Web service.... It sounds like any other machine communication definition.... How can we make sure the groups stay in sync? ... The definition of a Web service seems at odds with SOAP 1.2.
Jonathan Marsh: We have a Web Services Coordination Group also. We hope that will resolve that.
Jeff Mischkinsky: I think you might be reading too much into "request/response".... I think it is meant more as a generic "You send an XML doc, and it's processed" ... there are several scenarios.... People seem to consider most to be Web services.
Henrik Frystyk Nielsen: Would an event notification scenario be a Web service? ... e.g., "Your boat is on fire."
Jeff Mischkinsky: I think it fits the definition of a Web service.
Leigh Klotz (XForms Working Group, Xerox) on IRC: What about WS-I? What is its relation to the W3C Web Services Activity?
Philippe Le Hégaret: From their Web site, the stated goals are to build test suites and profiles, not specifications. We have sent questions on behalf of W3C; as they are just getting started, we expect they will come back to us with answers. At this point, we hope they will do systems integration and define test suites for interoperability.
David Fallside: One of the Coordination Group's roles is to figure out liaisons. ... WS-I is at early stages. This will evolve. Both sides are aware of it and want the right things to happen.
David Orchard (BEA): Interesting that the 3 groups listed have requirements and use cases ... and there's discussion of rechartering, etc. ... Is there the notion of reusing these things? Usage scenarios, etc.
David Fallside: Excellent suggestion.
Jeff Mischkinsky: I agree. No reason to reinvent the wheel.
Jonathan Marsh: I agree. Perhaps the architecture group could own master list.
Jacek Kopecky (XML Protocol and Web Services Description Working Groups, Systinet): ... There is an ongoing debate about intermediaries in XML Protocol..... WSDL doesn't stand intermediaries.... How much are the other Web services Working Groups aware of this issue?
Jonathan Marsh: We're too new as a Working Group to know. Haven't discussed it yet. But since you're on the Working Group, I'm sure we'll hear it!
Jeff Mischkinsky: We have started an issues list. Already have one from another Working Group.
Henrik Frystyk Nielsen: How can the Web Services Description Working Group deal with the type of extension you see in SOAP? ... There's no reason why extensibility could not handle this.
Jean-Jacques Moreau (participant in both Working Groups, Canon): ... I forwarded requirements and usage scenarios from the XML Protocol Working Group.... One, e.g., talks about intermediaries.
Paul Cotton (participant in XML Protocol Working Group): Possible strawman answer: Don't recharter XML Protocol, but give an extension. Pass the question about attachments to the architecture Working Group to make a recommendation to the Membership.
Janet Daly: Sounds great until it's time to draw a charter with different deliverables. The Membership still needs an opportunity to review that.
Paul Cotton: I was suggesting that the XML Protocol Working Group charter be extended with its existing deliverables.
Noah Mendelsohn: The Web Services Architecture Working Group should be involved in making decisions about attachments.... Another idea: Maybe W3C will eventually bless some kind of attachment architecture.... Maybe we need an enabling hook in SOAP.
David Fallside: I don't think putting in such a hook requires a new charter.... Typically you can do a "pro forma" recharter to buy more time without changing the charter.... But to change charter requires Membership consent.
Noah Mendelsohn: My recollection is that the Working Group was forbidden to work on attachments, but I may be wrong.
David Fallside: Someone noted that the definition of "Web service" did not include the word "XML"!
John Ibbotson (IBM): For SOAP in Web services, there will be additional headers... e.g., message description header, ... routing proposals, reliable delivery.... Does the Web Services Architecture Working Group intend to identify such things and define them?
Jeff Mischkinsky: That would be something to put to the architecture group. ... I could see other info also. ... It would be useful to have a registry of these [scribe lost comment]. ... for important ones we need a methodology for them.
David Orchard: Prioritization issue: For Web Services Architecture Working Group, time to market should be explicitly written as a goal. ... Does the Coordination Group have responsibility for scope? ... How do the 3 groups deal with scoping of new/recharter groups? ... Use case: Web Services Architecture group says a reliability header is needed. What happens?
David Fallside: The Coordination Group charter includes responsibility to ID new groups. ... Coordination Group also has responsibility to bump things up to the Advisory Committee and Team.
Jeff Mischkinsky: Part of the reason these groups were chartered was to address these questions. ... We'll need to address them using consensus, and have them considered in the larger scope of the organization.
Moderator: Daniel Dardailler (W3C Deputy Director for Europe, QA Activity lead, WAI Technical Activity lead). Panelists: Lofton Henderson (QA Working Group co-Chair, CGMO/OASIS), Steven Pemberton (Chair HTML and XForms Working Groups, W3C/CWI), Ian Jacobs (User Agent Guidelines Working Group Team contact; Process, AB, and TAG editor; W3C), Paul Grosso (XML Core Working Group co-Chair, Arbortext)
Misha Wolf (Internationalization Working Group Chair, Reuters): I'm happy with the XML Core Working Group errata system, but what happens with Working Group approval of errata? We should avoid ambiguity in specs and corrections, so I want clear and rapid process for errata. The problem is approval of Advisory Committee representatives. I propose that AC reps register for each spec errata they want to monitor; then they will receive email about concerned errata and then there will be a voting period. This would ensure a minimal period of vagueness.
Jeremy Carroll: Consider the cost of test cases: my experience is that they have actually reduced costs. In RDF core, we had a simple yes/no system that has reduced discussion.
Lofton Henderson: We had the same experience in the SVG Working Group.
Dan Connolly: I'm concerned about the meaning of Last Call. It was said "double check 2 weeks before you request Last Call." But that is Last Call. Last Call is in effect First call, as people will review the spec for the first time then. The review must start before Last Call.
Steven Pemberton: I don't see how this could happen. Most people wait for Last Call anyway.
Henry Thompson (wearing XML schema editor hat): I strongly agree with Paul Grosso about separating issues and errata management. Issues should be linked from the Working Group page, not in the document. I would like clarification on who's the gatekeeper of 2nd editions of specs.
Responding to Misha: I'm not aware of a single controversial errata. Working Groups are in control of their errata and have been responsible for not publishing harmful corrections. The current process is OK and does not need to change. The Working Group makes the errata normative and that's all the process we need.
Paul Grosso: Within the Working Group we can announce the errata to the Chairs mailing list and set a countdown period after which it becomes normative.
Paul Biron: I suggest to the owner of W3C's publication rules to make errata more visible by having pointers to them from various places.
Janet Daly: Acknowledged.
Arnaud Le Hors (XML Core Working Group co-Chair, Patent Policy Working Group, IBM): There are legal implications about requirements documents. And so they should be part of the process.
Ian Jacobs: The Advisory Board (AB) discussed it. Requirements documents sometimes set wrong expectations. The AB decided that it shouldn't be a requirement to have requirements documents. Danny Weitzner will take that issue to the Patent Policy Working Group.
Daniel Veillard: 1. Test suites are extremely valuable. 2. What is the limit of a test suite, i.e., where is the point after which a test suite becomes normative? 3. What is the "officialness" of a test suite?
Daniel Dardailler: Our goal is to bring closer test developers and Working Groups. This does not mean that test development has to happen in the Working Group.
Daniel Veillard: But if the Working Group writes the test suite, it can be fixed faster.
Masayasu Ishikawa (HTML Activity lead, W3C): There is no process about editorial revisions of specs (2nd editions and such). For XHTML 1.1 2nd edition, the Working Group was asked to follow XML 1.0 2nd edition, but the process was not written down. I would also propose to get public review before all publications of Recommendation revisions. Second, what should be the best practice to manage errors in DTDs and schemas? How do you fix an error in a DTD in the text of a spec?
Ian Jacobs: Would the proposed errata process meet your needs?
Masayasu Ishikawa: No.
Daniel Dardailler: That's something we'll discuss off-line.
Karl Dubost (W3C Conformance Manager, QA Working Group co-Chair): Allocating resources for QA in Working Groups is not a problem if it is decided at the beginning of a Working Group's life.
Moderator: Steve Bratt. Panelists: David Orchard, Eric Miller, Janet Daly, David Fallside, Daniel Dardailler
Steve Bratt: Any general comments?
Daniel Veillard: Curves were interesting - number of specs - also interesting how much time each spec took.
Steve Bratt: Planning to do that.
Tantek Çelik: Data on document was interesting - wondering how chart would look with number of test suites imposed.
Daniel Dardailler: See the Matrix - how many endorsed test suites there are.
Karl Dubost: There is now a W3C icon on W3C recognized test suites.
Dave Hollander: Will trajectory flatten off? Will we do more specs in a year? ...
David Fallside: Even with count being flat, number of interactions between specs goes up at a huge rate. This has been a concern of mine. A major inhibitor of progress. We should be tracking number of interactions from resource point of view.
David Orchard: I see specs like software. We should do feature count, lines count - fallacy in terms of code metrics apply in terms of spec metrics. I am not sure that this gets to the core. Data will stay flat, since Membership stays flat
Janet Daly: We may have flat membership, but Working Groups often contain more people than the whole W3C Team.
Steven Pemberton: This is only the second tech plenary. Has it been a success? What went wrong? ... Should discuss. I missed worked up discussions of issues in plenary, e.g., the role of namespace as profile identifier. Presentation and question sessions were not open enough.
(Show of hands suggests more technical presentations.)
T.V. Raman: Should get topics from Chairs mailing list.
Janet Daly: We did solicit them.
David Fallside: In the Web services session, there were meaty technical issues. It was our intent when planning this plenary to have a 50/50 mix of technical and process. Maybe today was too much process. We tried to seed discussion on technical stuff, but that didn't happen.
Steve Bratt: Who wants more technical presentations? (80% raise hands.) Who thinks we can find more four topics ...
Dave Hollander: I heard only organizational presentations today. Didn't learn anything about the Semantic Web.
Janet Daly: Do you want to hear Working Group presentations?
Henry Thompson: We did do that for XML Schema, and it was a good thing.
Jim Larson: I liked slides that showed what we accomplished. Also interested in what went wrong (how many specs did not go to Last Call etc.), and the reasons for that. Help stop mistakes in the future.
Graham Klyne (MIMEsweeper Group): Intermixing/plenary effect is stronger in the IETF. This meeting is too big for meaty technical issues. Maybe could start with plenary, then break out into smaller groups.
Dan Connolly: Some people don't like to talk to large crowds and come to the mike. BOF tables were interesting. Real work happens in hallway meetings.
Unidentified speaker: Too big a meeting for technical discussions. Wanted to get background on topics that I don't follow but am stuck in Working Group meetings. Parallel tracks today would have been good.
Daniel Dardallier: Should TAG be setting up the agenda of this meeting? They should know "hot topics." Too early for this time, but maybe next time.
David Orchard: Let's take namespace name. We had a TAG face to face meeting and spent 3/4 hours on that. Need context to get through. I don't think much active debate is possible in this forum.
David Fallside: I like the idea of parallel sessions. Long running BOFs or associated with special Working Groups? How would that work?
Daniel Veillard: Missing XML plenary - smaller than this forum, addressing technical issues. Fruitful to have two Working Groups share a meeting. Something which takes 3 months by mail gets resolved in 30 minutes.
Graham Klyne: The IETF model is to have Working Group sessions of 1-2 hours, then break, and only a few plenary sessions.
Paul Biron: Health Level Seven is a standards body. Our meetings are typically a week long. Chairs need 9 days. I liked the W3C way - 1 or 2 day meetings.... Multiple tracks, tutorials sound like a good idea. Joint meetings are very good, but hard to coordinate.
Wendy Chisholm (W3C): A comment on TAG organizing this day: We have tech reviews in the W3C Team to discuss technical discussions, and also workshops. The TAG could become a body where Working Groups come together and propose a technical review. Then the TAG selects three or four for the plenary.
David Booth (W3C): It would be difficult to discuss detailed technical issues in this forum. I like the idea of technical presentations, but should be applicable to everybody, or in multiple sessions.
Paul Cotton: I like the idea of TAG organizing this meeting. Want to talk about QA. Have test suites ready at beginning of CR. RDF core experiences with test cases. Candidate Recommendation is not mandatory. Measure to enforce interoperability. Having test cases before optional step is wrong. Need use cases and test cases much earlier. That encourages people to look at the spec early. Encourages early implementations. We have 12 implementations of XML Query because we used that strategy. It allows you to catch errata before you publish. Encourage the W3C in helping Working Groups how they should develop test cases as early as possible. Would like to get to the point where we don't need Candidate Recommendation because we already have implementations. Specs fail because they haven't built up a community.
Janet Daly: Requirements for Candidate Recommendation are mandatory....
Jonathan Robie (Software AG): I support Paul's statement. Use cases and requirements documents are an important part to help outsiders understand work of a Working Group. All of our work needs to work together.
Eric Miller: I agree with Paul. A side-effect of RDF core is that examples are an educational experience....
Daniel Dardailler: QA is planning another column in the Matrix.... Some Working Groups have inherited test suites from another organization.... Should show where there is need for test development.
Arnaud Le Hors, comment to Paul: Resources in companies are stressed, it is hard to encourage implementors to change things over and over because of changes in the spec. There is a high cost involved. Query case cannot be generalized.
Daniel Veillard: What was the conclusion of the namespace discussion in the TAG?
David Orchard: Namespace names getting widely deployed. A namespace name may be dereferencable; you can't count on it for building anything. Some put an XML schema document there, others put an HTML document, others put RDDL, or RDF schema. Could leave as is, could say that something should go there. But what? ... Trying to figure out use cases and requirements. Who will use this: human or machine - I believe we should have an HTML document. Web Services won't use a namespace name to validate. We don't have consensus in the TAG.
Paul Cotton: A minority of the TAG agrees that there should be human readable material at the end of a namespace. You don't have to do that ... something like RDDL. Other TAG members disagree. Dave and I don't think it should be XML schemas. Was intrigued by the number of people that were interested in what TAG was doing. The charter says that we have to report to the Advisory Committee. The TAG should consider how we report to the Technical Plenary community.... We may have given TAG too much on the Advisory Committee level, and not enough on the technical level.... Will try to report more to tech plenary community.
Larry Masinter (Adobe): Why namespace name issue is hard: Most W3C languages talk about language syntax and semantics, but this is about operational practice, recommendation to Webmasters.... The words "should" and "must" don't really apply.
David Orchard: "Best practices" came up during a lunch discussion.... Could be TAG output.
Judy Brewer (W3C): The Web Accessibility Initiative (WAI) Education and Outreach group will talk about "best practices" on Thursday.... Want to extend and co-promote other areas of W3C ... There is a huge potential for mutual advantages.... Could cover Device Independence, Privacy, best use of metadata.
Graham Klyne: I don't think that best practices should have normative force - but others don't agree.
David Orchard: Do we do conformance testing against namespace rules - open issue. Would worry about trend if specs avoid making decisions and put it into best practices instead. There are process issues around this.
Graham Klyne: A Recommendation that may not be interoperable without best practice would be problem with the Recommendation.
Daniel Burnett (Nuance): Comment on QA. In the Voice Browser Working Group, we have dozens of implementations - applications running millions of phone calls. Huge call from industry. Helps in narrowing down the spec. Great that we had all the feedback from industry. Hope we can go to Last Call and Recommendation as quickly as possible.
T. V. Raman: The distinction between "Recommendation" and "Best Practices" is not clear on a language level.
Steven Pemberton: Public complaining about verbosity of required headers in HTML - XHTML 2.0 will be even bigger - can we do something about that?
Ray Whitmer (Netscape/AOL): Comment on patent policy. The TAG should maybe oversee patent policy. Work on definition of "essential" - in DOM, most of the things are optional, so patent policy does not seem to apply....
Janet Daly: Can comment on new patent policy Working Draft that came out yesterday. The Patent Policy Working Group does not live in isolation from the rest of W3C work.
Ray Whitmer: thought I had already brought up that comment, but have the impression that the Patent Policy Working Group does not make connection to technical work.
Arnaud Le Hors: I am a technical guy on the Patent Policy Working Group - we are aware of the issue.
Michael Rys: Essential patents are patents essential for features, not for essential features.
Ray Whitmer: I have not seen this definition - only mention of "essential for implementation."
Rigo Wenning: "Essential" can be defined by us, or by the courts, we cannot define this here.
Steve Bratt thanked the organizers. Adjourned at 17:30. | http://www.w3.org/2002/02/techplen-minutes.html | CC-MAIN-2016-40 | refinedweb | 5,245 | 67.65 |
Difference between revisions of "Adding an Ebuild to the Wiki"
Revision as of 02:48, August 2,.
What We Have So Far
To see what ebuild pages we have so far, see the Ebuilds by CatPkg page to see if the "CatPkg" (category/package atom) is already on this wiki.
You can also view all pages having to do with ebuilds (which also includes Package pages themselves) by going to the Category:Ebuilds page. Package pages will be listed by their regular wiki page names, which may make them harder to find.
The Package Namespace
This wiki has a special MediaWiki namespace called
Package, used as a home for wiki pages about ebuilds.. Pages in this namespace have a URL that is prefixed with
Package:, such as this wiki page: Package:Accelerated AMD Video Drivers, which is also a good example of a wiki page for an ebuild. View all ebuild pages here.
The Repository Namespace
There is also a special namespace for repositories -- repositories are collections of ebuilds..) Important for forked ebuilds. Can be left blank.
- Repository (Page) - a link to the repository from which this ebuild is sourced. This can be an overlay or a full Portage tree. This information is filled out automatically.
- Summary (Text) - a short summary, typically a sentence, that describes the functionality of the ebuild.
To extract some of this information: example extraction of gnome mastermind's information
$ equery m games-board/gnome-mastermind
$ cat /usr/portage/games-board/gnome-mastermind/gnome-mastermind-*.ebuild | grep DESCRIPTION!
Pages Need Approval
Anyone! | http://www.funtoo.org/index.php?title=Category:Featured&diff=next&oldid=4722 | CC-MAIN-2015-22 | refinedweb | 256 | 57.16 |
Creating software can be challenging, but it does not have to be. When it comes to project scalability and maintainability, it's essential to keep a project's files, modules, and dependencies nice and tidy so you can add features, fix bugs, and refactor code more efficiently.
TypeScript is designed to develop large applications and brings the benefits of strict and static types over JavaScript, so today, it's easy to refactor large codebase applications without the fear of breaking things on runtime.
But still, when you need to organize your codebase, keep it abstract and implement the SOLID principles, you need something to manage modules, instances, factories, and abstraction.
Dependency Injection Framework
Injex is a dependency injection framework that helps you resolve dependencies automatically. Think about a large application codebase with hundreds of modules; how can we manage all these connections and dependencies?
The Injex framework was created with simplicity in mind and in an unopinionated way so you can keep writing your classes with a small footprint. Injex API is small, so you don't need to learn new concepts.
Injex's core API works the same both on the server and the client, so it's easy to share code between them.
Why should I use Injex?
- You love and write TypeScript applications.
- You like to write clean code.
- You want to implement the SOLID principles.
- You're a full-stack developer who wants to build server/client applications.
- You don't want to make your hands dirty from circular dependencies.
- You want to be able to refactor code more efficiently.
- You like to keep your code as abstract as possible.
- You want something to manage module dependencies for you.
A quick tutorial
We're going to create a basic TypeScript Node application powered by the Injex framework. This example will overview the core functionality of Injex, including how to create an IoC container, define and inject modules, and bootstrap your application.
At the end of this example, you will have all the tools to get you up and running using Injex on your TypeScript applications, making it easier to implement paradigms like the SOLID principles.
What we're going to build
We're going to build a mail sender service for Node. The app will receive a mail provider type, a message body, and a contact email address as the addressee.
Note
Remember, it's just a demo application, and it's not going to send anything. We are creating it as part of this tutorial.
Scaffolding
Start by creating a folder and init an npm project.
mkdir -p injex-node-app/src cd injex-node-app npm init -y touch src/index.ts
Now, install the dependencies you're going to use in the project.
npm install --save @injex/core @injex/node typescript @types/node
TypeScript config
Copy this basic
tsconfig.json file to the root folder.
{ "compilerOptions": { "rootDir": "./src", "outDir": "./out", "module": "commonjs", "target": "es6", "experimentalDecorators": true }, "exclude": [ "node_modules" ] }
Package scripts
Edit the
package.json file, replace the
"scripts": {...} section with:
{ ... "scripts": { "dev": "tsc -w", "build": "tsc", "start": "node out/index" }, ... }
Interfaces
We're going to use the
IMailProvider TypeScript interface, later on, So add it to a file called
interfaces.ts inside the
src/ folder.
export interface IMailProvider { send(message: string, email:string): void; }
After all these preparations, let's write some TypeScript code using the Injex framework.
The mail providers
Now we will create two mail providers,
GoogleMailProvider and
MicrosoftMailProvider, so we can send the mail message using GMAIL or MSN. Let's start by creating two files inside the
src/providers/ folder.
src/providers/googleMailProvider.ts
import { define, singleton, alias } from "@injex/core"; import { IMailProvider } from "../interfaces"; @define() @singleton() @alias("MailProvider") export class GoogleMailProvider implements IMailProvider { public readonly Type = "google"; public send(message: string, email: string) { console.log(`GMAIL: Sending message to ${email}...`); console.log(`GMAIL: ${message}`); } }
src/providers/microsoftMailProvider.ts
import { define, singleton, alias } from "@injex/core"; import { IMailProvider } from "../interfaces"; @define() @singleton() @alias("MailProvider") export class MicrosoftMailProvider implements IMailProvider { public readonly Type = "microsoft"; public send(message: string, email: string) { console.log(`MSN: Sending message to ${email}...`); console.log(`MSN: ${message}`); } }
Both of the files are pretty the same except for minor changes. Remember, this is not a real-world mail sender service, so we only print some content to the console.
Let's go over the important lines (4, 5, 6):
In line 4, we define the provider class as an Injex module; this will register the class in the Injex container. Line 5 marks this class as a singleton, meaning that any time a module will "require" this provider, he will get the same instance of the mail provider.
In line 6, we tell Injex that each module has the alias name
MailProvider to use the
@injectAlias(NAME, KEY) decorator to inject a dictionary with all the modules with this alias as we will see in a minute.
The mail service
Let's create a service called
MailService. This service will have the
send method, which receives the mail provider type, a message body, and the addressee as arguments and triggers the send method of the selected mail provider.
Create the file
services/mailService.ts inside the
src/ folder and paste the following code.
src/services/mailService.ts
import { define, singleton, injectAlias, AliasMap } from "@injex/core"; import { IMailProvider } from "../interfaces"; @define() @singleton() export class MailService { @injectAlias("MailProvider", "Type") private mailProviders: AliasMap<string, IMailProvider>; public send(provider: string, message: string, email: string) { const mailProvider = this.mailProviders[provider]; mailProvider.send(message, email); } }
Like before, let's go over the important lines (3, 4, 6):
Lines 3 and 4 should be familiar. We define and register the module and mark it as a singleton module.
In line 6, we tell Injex to inject all the modules with the
MailProvider alias name into a dictionary object called
mailProviders which is a member of the
MailService class, the
"Type" in line 7 tells Injex what will be the key for this dictionary (line 8 in our mail providers from before).
Bootstrap
Like every application, we should have an entry point. Injex's entry point is the Bootstrap class
run method.
Create the file
bootstrap.ts inside our
src/ folder and paste the following.
src/bootstrap.ts
import { bootstrap, inject } from "@injex/core"; import { MailService } from "./services/mailService"; @bootstrap() export class Bootstrap { @inject() private mailService: MailService; public run() { this.mailService.send("google", "Hello from Injex!", "udi.talias@gmail.com"); } }
Line 1 defines this module as the bootstrap class. You should have only 1 class in your container with the
@bootstrap() decorator.
In line 6, we tell Injex that we want to
@inject() the
mailService singleton module we created earlier to use it to send our so important email 😅.
Note
You probably asking yourself, how is the
mailServiceon line 6 received the singleton instance of
MailService(with the capital 'M')? The answer is that Injex takes the name of the module class (MailService) and converts it to its camelCased version. You can read more about it on the @inject() decorator docs.
The Injex container
The container is the central part of the Injex framework. It's where all your application module definitions, instances, factories, and configurations will live for later injection.
We're going to use the Injex Node container, the one we installed earlier via the
npm install @injex/node command.
Open the
src/index.ts file in your favorite editor and paste the following code.
src/index.ts
import { Injex } from "@injex/node"; Injex.create({ rootDirs: [__dirname] }).bootstrap();
Here we import Injex from
@injex/node and creates the container using the
Injex.create() method. We pass the
__dirname as the only root directory of our project, so Injex can scan all the files inside this directory and look for Injex modules for auto registration.
This is one of the significant parts of the Injex framework. You need to create a module inside the root directory, and Injex will find it automatically and wire everything for you. No need to add each module manually.
Note
3, 2, 1... lift off!
Ok, we came so far, let's start the engine and watch the magic.
Open your terminal and run the build command to transpile our TypeScript.
Please make sure you're inside the project root folder and run the following commands.
npm run build && npm start
You should see the following output:
GMAIL: Sending message to udi.talias@gmail.com... GMAIL: Hello from Injex!
Summary
We created a simple Node application to show the basic parts of the Injex framework. We created a service and some classes with an alias name and injected them into the service using the
@injectAlias() decorator.
We then created the bootstrap class, and we used the MailService singleton instance, which we injected into it.
Where to go next?
Injex has a lot more to offer. If you want to use Injex and learn more about the framework, it's features and options, Check-out
Happy coding!
daily.dev delivers the best programming news every new tab. We will rank hundreds of qualified sources for you so that you can hack the future.
Discussion
Hey Guys!
If you have any questions about Injex and how it can power up your TypeScript applications, you're invited to the Discord channel at discord.gg/Kf5h8C
Cheers! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/dailydotdev/introducing-injex-4hjd | CC-MAIN-2021-04 | refinedweb | 1,554 | 56.76 |
If we put trace statements in the API stubs, the user could get some
output that tells what is missing. Or scan the JS for some jsdoc key we
leave in the code. But if it compiles, the user will know also what is
missing because the App will not look right or operate correctly.
Harbs, back when you migrated your app, there was no
spark.components.Button and we weren't sure we would ever build one, since
a complete implementation requires over 100 methods and properties. I'm
still not clear on exactly what steps you took to migrate, so, I am
imagining that for every s:Button or "import spark.components.Button" you
had to search and replace it with js:TextButton or "import
org.apache.royale.html.TextButton". So I'm guessing that you created a
spark.components.Button class that was all stubs so the app would compile
without having to convert every using of Spark Button, yet it would help
you find the remaining places to convert.
I am proposing that we actually add an operational spark.components.Button
in a SWC that only implements the 12 out of the 100+ APIs that Alina
needs. Then there are fewer places to change in her code. The final
output will be bigger because we have this additional emulation code, but
it should get it up and running. So, the migration experience will be
quite different for Alina. You had to do a lot of search and replace with
bundles of beads. We are going to encapsulate that work under the API
surface. Maybe we don't need to have every migration user do that much
searching and replacing. Our goal is to try to encapsulate and eliminate
repetitive work where we can.
As other users try to migrate, we'll get their API reports, see what is
missing and hopefully only need to add a few more APIs.
Thoughts?
-Alex
On 2/27/18, 10:42 AM, "Piotr Zarzycki" <piotrzarzycki21@gmail.com> wrote:
>What would be the results for Alina if you will have that swc ? She simply
>will be able to launch application without the error - That's the idea ?
>
>2018-02-27 19:38 GMT+01:00 Harbs <harbs.lists@gmail.com>:
>
>> Maybe. Not sure.
>>
>> How does the client know what needs to be implemented and how do they go
>> about implementing that?
>>
>> > On Feb 27, 2018, at 8:32 PM, Alex Harui <aharui@adobe.com.INVALID>
>> wrote:
>> >
>> > Hmm, maybe I'm not understanding you. If we decide to create a SWC
>>with
>> a
>> > spark.components.Button and Alina needs 12 APIs and we only have time
>> > right now to implement 6 of them, how would you handle the missing 6?
>> >
>> > I would just implement those APIs but they wouldn't do anything. They
>> > would contain a comment or trace statement or todo. I don't think I
>> would
>> > create a dummy/stub spark.components.Button class, just dummy/stub
>> methods
>> > and properties.
>> >
>> > Maybe we are saying the same thing?
>> > -Alex
>> >
>> > On 2/27/18, 10:15 AM, "Harbs" <harbs.lists@gmail.com> wrote:
>> >
>> >> If things are no-op or to-dos wouldn’t “stubs” or “dummy” classes
be
>> >> better?
>> >>
>> >> What’s the advantage of having partially functional SWCs? It seems
>>to me
>> >> like it would mask the issues?
>> >>
>> >> Harbs
>> >>
>> >>> On Feb 27, 2018, at 7:48 PM, Alex Harui <aharui@adobe.com.INVALID>
>> >>> wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> On the users list, Alina has provided the API report for the main
>> >>> portion
>> >>> of her application. We are still waiting to get a report on her SWC
>> >>> library. She might have a pile of modules to report on as well.
>> >>>
>> >>> Based just on the main application, and her saying that she has 500
>> MXML
>> >>> files to port, I'm leaning towards creating migration SWCs that
>>reduce
>> >>> the
>> >>> amount of copy/paste. In her data, we see that only 12 out of more
>> than
>> >>> 100 APIs on s:Button are being used, and we have 6 of them
>>implemented
>> >>> already. The plan would be to write the remaining six. Some, like
>> >>> useHandCursor might be temporary no-ops or to-dos.
>> >>>
>> >>> I've been pondering what to name these libraries. I've been using
>> >>> MXish.SWC and Sparkish.SWC, but maybe we want a better name like
>> >>> MXMigration.SWC/SparkMigration.SWC or MXRoyale.SWC/SparkRoyale.SWC
>>or
>> >>> RoyaleMX/RoyaleSpark.SWC. I want to imply that it isn't fully
>>backward
>> >>> compatible in the name of the SWC if possible.
>> >>>
>> >>> We could leave the namespace URI as
>> >>>
>> >>> xmlns:> >>> xmlns:> >>>
>> >>>
>> >>> just to have one less thing to change in each MXML file, although it
>> >>> might
>> >>> be better to use a different namespace URI to get "adobe.com" out of
>> >>> there
>> >>> which might help imply that it isn't fully backward compatible and
>>go
>> >>> with:
>> >>>
>> >>> xmlns:> >>> xmlns:> >>>
>> >>> I don't think we'd bother to fully re-create the Flex class
>>hierarchy
>> at
>> >>> this time, but I think we will need to create a UIComponent that
>> >>> subclasses UIBase and have all migration components extend that
>>instead
>> >>> of
>> >>> extending Express or Basic components because we need to change the
>>way
>> >>> percentWidth/Height work in the migration components. UIBase sets
>>the
>> >>> style.width to a % value, but we don't want that in the migration
>> >>> components. The Flex layout classes use percentage differently.
>>The
>> >>> cool
>> >>> thing is that if we wrote our beads correctly, we can re-compose the
>> >>> functionality from Basic and Express onto this migration library
>>and it
>> >>> will "just work". This illustrates the value of composition over
>> >>> subclassing.
>> >>>
>> >>>
>> >>> I think it will be a few more days before we have all of the data
>>from
>> >>> Alina and know how big this task will be so now is a good time to
>> >>> discuss
>> >>> some of the details on how this would work.
>> >>>
>> >>> Thoughts?
>> >>> -Alex
>> >>>
>> >>
>> >
>>
>>
>
>
>--
>
>Piotr Zarzycki
>
>Patreon:
>*
>* | http://mail-archives.apache.org/mod_mbox/royale-dev/201802.mbox/%3CD6BAE80E.BA287%25aharui@adobe.com%3E | CC-MAIN-2018-47 | refinedweb | 976 | 73.58 |
Please send coments to www-amaya@w3.org - archived in public
Screenshot of 6.4 (120k), screenshot of 7.1 (30k)
These instructions explain how to install Amaya with fink, which makes it a lot easier. They should provide enough information to install it even if you don't use fink, but comments and improvements are always welcome.
Fink is a packaging system for OS X. It provides tools to install software originally developed as open-source Unix software, and most importantly it manages dependencies (i.e. if you want to install Amaya it will make sure libpng is installed so Amaya can run) and updates (if there is a new version of Amaya packaged, you can update to it with a one-line command or using a piece of software with a relatively simple interface).
To install fink, follow the instructions at fink's download page. Note that you will need the developer tools (in OS 10.3 they are called XCode, and are on a separate disc, not installed by default). You can also download them (after registering) from Apple's developer site. Be warned - it is around 200MB.
There are a number of things you need in order to install Amaya. The most important is X11 - the X Window system that Amaya runs under. This can be installed from Apple, or using fink.
If you are using fink you only need to decide whether to install Apple's X11 or install it as provided by fink. Once you have installed one or the other, fink will install everything else you need.
Download and install both the x11 package and the Developer's kit package from Apple's X11 site
(It seems that if you use the current beta - apple's second version - you no longer need to download and run the fink repair script for Apple's X11 libraries.)
You then need to run the command:
sudo apt-get install system-xfree86
Note: You can get away with not doing this step - fink or apt-get will normally install this automatically because Amaya lists it as a dependency.
sudo apt-get xfree86-base
will install X11. You should also install a local rootless X server (if you don't know what this means, you need to do it - if you do then you have the option of running a remote X server without installing the package)
sudo apt-get xfree86-rootless
run the command
sudo apt-get install amaya
That's it. Apt-get will install several libraries that Amaya needs if you don't already have them, as well as the Amaya code itself. If you agree to everything it suggests, there should be no problems - if you have some reason for doing something differently you need to understand what you are doing. When it has done its work, you will have Amaya installed.
Alternatively, version 8.5 as a fink debian package is now available from the Amaya site. You can download that package and install it with the command
sudo dpkg -i amaya_8.5-1_darwin-powerpc.deb
Make sure you have the current package list.
fink selfupdate-cvs
Once you have set up your system requirements, run the following command to install amaya for the first time:
fink install amaya
(This will install the GTK version of Amaya, which is the standard version now. If you have installed an older version it this method should replace it.)
Fink will then download Amaya (and anything else it needs that is not already installed) and compile it
If you have installed Amaya 5.3 using fink or apt-get, then you can upgrade. Use the same process as for installing, but change the command install to upgrade - for example
fink selfupdate-cvs
fink update amaya
or
sudo apt-get update
sudo apt-get upgrade amaya
Until version 6.1 there were two versions of Amaya - one relied on Motif and one relied on GTK. The GTK version was recommended, but both were available as packages. From version 7.1 the GTK version is the only one supported, so there is only one package, called amaya, and using GTK.
If you want to upgrade from amaya-gtk to a new version of amaya you should follow the instructions for installing amaya (recommended is to install amaya via apt-get, because it is quicker). This will replace your existing amaya-gtk cleanly, and in future you can upgrade amaya as usual.
If you are planing to use the fink unstable to get the latest version the instructions given below include this step.
Often you will find there is a later version of Amaya in the unstable tree. To check if this is the case, look at the entry for amaya in the fink package database.
If there is a newer version in the unstable tree and you are not generally using the unstable version of fink, you can still use fink to install the later version.as follows
First you need to update the data fink has about packages:
fink selfupdate-cvs
then tweak your local fink setup so it compiles the newer version (copy the package information, then update the index that fink uses):
sudo cp /sw/fink/dists/unstable/main/finkinfo/web/amaya* /sw/fink/dists/local/main/finkinfo/ fink index
if you have a previous version of Amaya installed with fink or apt-get, use the upgrade command
fink update amaya
if you do not have amaya installed, or if you have a package called amaya-gtk installed, use the install command
fink install amaya
(Note that from version 7.1 onwards the -gtk extension has been dropped. This is because the GTK-based package is now the default, and is the only one currently supported in fink).
Sometimes fink will refuse to install or upgrade an unstable version, giving a message like
Dependency failed (imlib-shlib >= 1.9.14)
This means that fink couldn't find a package, or a recent enough version of a package, that Amaya relies on. Usually the required package will be in the list of unstable packages (just like Amaya itself).
What you need to do is find the package, move it to the local list, rebuild the index, and then do the install or update.
As an example, on 7 May 2003 the version of Amaya in the stable package list was 7.1 and latest package available was version 7.2. Installing this version fails with the error message above.
The best way is to search the fink package database for the name of the package that was not found. in the example above searching for imlib-shlib gives the fink imlib-shlib package's page. From this you can see three important pieces of information:
This is the same process as with the amaya package. For our example, the following two commands do it:
sudo cp /sw/fink/dists/unstable/main/finkinfo/graphics/imlib-shlib-1.9.14* /sw/fink/dists/local/main/finkinfo/
fink index
(graphics in this example is the section the package is in. Other packages may be in other sections, most likely lib or X11. imlib-shlib is the name of the package, and 1.9.14 is the version number, without the revision number. The asterisk/star "*" is there to ensure that all the necessary files are copied.)
You are now ready to install or upgrade Amaya in the "normal" way with:
fink update amaya
or
fink install amaya
Joseph Myers reports successfully compiling version 8.1b without necessarily installing fink.
The following notes are for compiling version 8.3.
As usual, I recommand that you read the instruction for compilation at. The build process is unchanged, but you may need some tricks to complete a successful build.
Go to the "How to build" section and follow steps 1 and 2 :
Step 3 is the "configure" script:
../configure --host=ppc --build=ppc --with-gtk --without-i18n
If it fails to generate Makefile, config.h and so on, you should install GNU sed from fink and retry configure.sudo fink install sed
(Choose distfiles.opendarwin.org as mirror if needed)
Then Step 4:
In Options:
GTK_INCLUDES=-I/sw/include/gtk-1.2 -I/sw/include/glib-1.2 -I/sw/lib/glib/include -I/usr/X11R6/include -I/sw/include
GTK_LIBRARIES=-L/sw/lib -L/usr/X11R6/lib -lgtk -lgdk -lgmodule -lglib -ldl -lintl -lX11 -lXext -lm -lgdk-imlib
And a special trick to avoid conflicts with some Apple frameworks:
In config.h:
It is important to undefine something related to APPKIT because AppKit in MacOSX is not what is expected by amaya.
So comment the following line if present:
/* #define HAVE_APPKIT_APPKIT_H 1 */
You could have to do the same in libwww/wwwconf.h
Here is Step 5:
Just gnumake it!
Additional information for Amaya >= 7.x and MacOSX > 10.0
THOT_OPTIONS = -L ../
ln -s /usr/include/malloc/malloc.h /usr/include/malloc.h
$$libwwwdir/configure \ --disable-shared \add the following:
--host=ppc \
# Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag="-flat_namespace -undefined suppress"
#include "stdint.h"There was a more serious bug due to system constants (FILENAME_MAX), described in www-amaya-dev list archive. It's corrected in 7.2 so I strongly advise you to compile 7.2 instead of 7.1.
If you really want i18n support, then be prepared to work a little more.
Let's start again from step 3. Replace the configure command by this one:
../configure --host=ppc --with-gtk
Then go through Step 4 but now -D_I18N_ must be present for Thotlib and Amaya in Options file. The following instructions work for Amaya 6.4 and 7.x
Since Darwin, like BSD systems, does not have wchar support in libc, you have to use a workaround for it:
ustring.haround line 55 to get:
#ifdef _I18N_ /*#include <wchar.h> DOES NOT EXIST IN MACOSX */ #include <stddef.h> /* defines BSD_WCHAR_T */ typedef wchar_t CHAR_T; typedef wchar_t *STRING; typedef long int wint_t; /* was in wchar.h in other systems */ #else /* _I18N_ */ /* don't touch anything else */
sudo cp /sw/fink/dists/unstable/main/finkinfo/lib/libwcs-1.0-2.info /sw/fink/dists/local/main/finkinfofink index fink install libwcs
Or you can download the libwcs source in
your /tmp. Unpack it, compile it and install it:
tar xfvz libwcs.tar.gz cd wcs make libwcs.a mv libwcs.a /sw/lib
Optionsto complete
EXTRA_LIBSwith:
EXTRA_LIBS = -L/sw/lib -lwcs
And add an option to the cc:
CC = cc -no-cpp-precomp
Makefileto use these
EXTRA_LIBS
LIBS = ../thotlib/libThotKernel.a $(EXTRA_LIBS)
Now you should be able to build it:
cd ..
gmake
carine@w3.org, charles@sidar.org. Thanks to Joseph Myers and Damian Steer for compiling/packaging and reporting it. This draft $Id: amaya-darwin.html,v 1.33 2004/03/31 14:15:56 cbournez Exp $ | http://www.w3.org/2002/09/amaya-darwin.html | crawl-001 | refinedweb | 1,814 | 64.61 |
08 February 2013 18:00 [Source: ICIS news]
HOUSTON (ICIS)--Here is Friday’s midday ?xml:namespace>
CRUDE: Mar WTI: $96.13/bbl, up 30 cents; Mar Brent: $118.92/bbl, up $1.68
NYMEX WTI crude futures rose, tracking a rally in global stock markets in response to a string of upbeat economic indicators. Strong Chinese export data and a rise in oil imports signalled a steady rise in economic activity. Brent continued to outperform its American counterpart as the market remained sensitive to the risk of wider supply disruptions. WTI topped out at $96.57/bbl before retreating,
RBOB: Mar: $3.0608/gal, up 6.09 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures prices rebounded after falling on Thursday. Higher crude futures gave support.
NATURAL GAS Mar: $3.290/MMBtu, up 0.5 cent
NYMEX natural gas futures edged upward through Friday’s morning session, boosted by improving near-term demand prospects as winter storms sweep through the
ETHANE: higher at 25 cents/gal
Ethane spot prices traded slightly higher as it continued to track energy commodities.
AROMATICS: benzene wider at $4.75-4.81/gal
Prompt benzene spot prices moved to a slightly wider range early in the day, sources said. Bids were flat, while offers were up by a penny compared with $4.75-4.80/gal FOB (free on board) the previous session.
OLEFINS: ethylene steady at 62.5-63.0, RGP higher at 73.5 cents/lb
February ethylene bids and offers were absent in the market, while bids for March ethylene were heard at 61.0 cents/lb, lower than deals done the previous day at 62.5 | http://www.icis.com/Articles/2013/02/08/9639567/noon-snapshot-americas-market-summary.html | CC-MAIN-2014-35 | refinedweb | 278 | 69.28 |
C++ Tutorial - API Testing
From API design for C++ by Martin Reddy
The trend toward applications that depend on third-party API s is particularly popular in the field of cloud computing. Web applications rely more and more on Web services (APIs) to provide core functionality. In this case of Web mashups, the application itself is sometimes simply a repackaging of multiple existing services to provide a new service, such as combining the Google Maps API with a local crimes statistics database to provide map-based interface to the crime data.
In fact, it's worth taking a few moments to highlight the importance of C++ API design in Web development. A superficial analysis might conclude that server-side Web development is confined to scripting languages, such as PHP, Perl, or Python, or .NET languages based on Microsoft's ASP (Active Server Pages) technology. This may be true for small-scale Web development. However, it is noteworthy that many large-scale Web services use a C++ backend to deliver optimal performance.
In fact, Facebook developed a product called HipHop to convert their PHP code into C++ to improve the performance of their social networking site. C++ API design therefore does have a role to play in scalable Web service development. Additionaly, if we develop our core APIs in C++, not only can they form a high-performance Web service, but our code can also be reused to deliver our product in other forms, such as desktop or mobile phone versions.
An article on the Facebook's HipHop project:
How Three Guys Rebuilt the Foundation of Facebook
White box testing
Tests are developed by the programmers who know the sources code.
Black box testing
Tests are developed based on specifications and without any knowledge of the code. These kinds of tests are often done by manually using an end-user apps.
Unit tests are usually written by programmers who know the implementation details. So, unit test is a white box test.
#include <iostream> #include <sstream> #include <cassert> double StringToDouble(const std::string &s;) { std::stringstream ss(s); double d; if(ss >> d) { return d; } else return 0; } void Test_StringToDouble() { // simple case: assert( 0.12345 == StringToDouble( "0.12345")); // blank space: assert( 0.12345 == StringToDouble( "0.12345 ")); assert( 0.12345 == StringToDouble( " 0.12345")); // trailing non digit characters: assert(0.12345 == StringToDouble( "0.12345a")); assert(0 == StringToDouble( "0" )); assert(0 == StringToDouble( "0." )); assert(0 == StringToDouble( "0.0" )); assert(0 == StringToDouble( "0.00" )); assert(0 == StringToDouble( "0.0e0" )); assert(0 == StringToDouble( "0.0e-0" )); assert(0 == StringToDouble( "0.0e+0" )); std::cout << "Passed the test" << std::endl; } int main() { Test_StringToDouble(); return 0; }
In the above example, we want to test the function which converts string to a double:
double StringToDouble(const std::string &s;)
This function accepts a string parameter and returns a double. If not, it returns 0. Given that function, void Test_StringToDouble() unit test function performs a series of checks to ensure that it works as expected.
An assert() evaluates its argument and calls abort() if the result if zero(false). For example:
void foo(int *ptr) { assert(!ptr != 0); // assert that ptr != 0; abort() if ptr is zero }Before aborting, assert() spits out the name of its source file and the number of the line on which it appears, and this makes assert() a useful debugging aid. After abort() is called, the program execution will be terminated.
As long as NDEBUG is not defined, the assert() macro evaluates the condition. Note that the NDEBUG is not defined by default. So in the initial stage of developing code, we let the debugging statements are executed by leaving NDEBUG undefined. Also note that NDEBUG is not automatically defined by Visual Studio when compiling in Debug Configuration let alone Release Configuration.
Because the popularity of JUnit, the testing framework for JUnit has been ported to many other languages, and it is known as xUnit. For Python, it is PyUnit, CUnit for C, and CppUnit for C++.
In real world, unlike the example given above, the object (or method) under test often depends on other objects in the system or on external resources such as database or objects on a remote server. This leads to two versions of views of unit testing:
Fixture Setup
The classic approach to unit testing is to initialize a consistent environment before each unit test is run. For example, to ensure that dependent objects are initialized, we copy a specific set of files to a known place, or we load a database with a prepared set of initial data. This is usually done in a SetUp() function associated with each test to differentiate test setup steps from the actual test operations. Once the test finishes, a related tearDown() function is used to clean up environment. In this way, the same fixture can often be resued for several times.
Stub/mock objects
With this approach, the code under test is isolated from the rest of the system by creating stub (or mock) objects that represnent for any dependencies outside of the unit. For instance, if a unit test needs to communicate with a database, a stub database object an be created that accepts the subset of queries that the unit will generate and then return data in response. Note that we are not making any connection to the database. The outcome is completely isolated test that will not be affected by database problems, or other issues such as network issues, or file system permissions. However, the drawbacks of this approach is that the creation of these stub objects can be tedious and often they cannot be reused by other unit tests.
So, when our code depends on an unreliable resources, such as database, file system, or network, we should consider using stub or mock objects to get more robust unit tests.
It's a port of JUnit to C++, and it supports various helper macros to simplify the declaration of tests, capturing exceptions, and a range of output formats including XML. It also provides a number of different test runners such as Qt- and MFC-based GUI runners. CppUnit 2 is under development, and there is also an extremely lightweight version called CppUnitLite.
// file: cmplx.cpp #include <CppUnit/TestCase.h> #include <CppUnit/extensions/TestFactoryRegistry.h> #include <CppUnit/CompilerOutputter.h> #include <CppUnit/TestResult.h> #include <CppUnit/TestResultCollector.h> #include <CppUnit/TestRunner.h> #include <CppUnit/TextTestProgressListener.h> #include <CppUnit/TestCaller.h> class Complex { public: Complex( double r, double i = 0 ) : real(r), imaginary(i) {} Complex& operator+(const Complex&); private: friend bool operator ==(const Complex& a, const Complex& b); double real, imaginary; }; bool operator ==( const Complex &a;, const Complex &b; ) { return a.real == b.real && a.imaginary == b.imaginary; } Complex& Complex::operator+( const Complex &a; ) { real += a.real; imaginary += a.imaginary; return ); } }; int main () { CppUnit::TestCaller
test( "testEquality", &ComplexNumberTest;::testEquality ); CppUnit::TestResult result; test.run( &result; ); return 0; }
Makefile is:
CPPUNIT_PATH=/cppunit cmplxtest: cmplx.o g++ -o cmplxtest cmplx.o -L${CPPUNIT_PATH}/lib -lcppunit -ldl cmplx.o: cmplx.cpp g++ -c cmplx.cpp -I${CPPUNIT_PATH}/include clean: rm -f *.o cmplx
The Framework of Google C++ Testing is based on xUnit architecture. It is a cross platform system that provides automatic test discovery. In other words, we don't have to enumerate all of the test in our test suite manually. It supports a rich set of assertions such as fatal assertions (ASSERT_), non-fatal assertions (EXPECT_), and death test which checks that a program terminates expectedly.
Here is a step by step tutorial how we can setup Google Test using Visual Studio 2012:.
Integration test is to test the interactions of components of the system. Integration test are still needed even though our codes passed unit tests. Integration tests are normally developed against the specification of the API, and not require knowledge of the implemention details. So, it is a black box testing.
Software Development Kit (SDK) is a platform specific package usually comes with the API (header files (.h) and the libraries (.dll, .so, .dylib)). It's a kit for software developer can compile/link against. Usually, an SDK may include other resources to help developer use the APIs: documents, example source code, and tools.
One of the most popular SDKs is Java SDK (JDK) that includes all the libraries, debugging utilities, etc., which would make developer's life much easier since there is no need to look for components/tools that are compatible with each other and all of them are integrated in to a single package that is easy to install.
So, an SDK at its minimum is an API.
| http://www.bogotobogo.com/cplusplus/cpptesting.php | CC-MAIN-2017-34 | refinedweb | 1,429 | 56.76 |
Namespaces (F#)
A namespace lets you organize code into areas of related functionality by enabling you to attach a name to a grouping of program elements.
If you want to put code in a namespace, the first declaration in the file must declare the namespace. The contents of the entire file then become part of the namespace.
Namespaces cannot directly contain values and functions. Instead, values and functions must be included in modules, and modules are included in namespaces. Namespaces can contain types, modules.
Namespaces can be declared explicitly with the namespace keyword, or implicitly when declaring a module. To declare a namespace explicitly, use the namespace keyword followed by the namespace name. The following example shows a code file that declares a namespace Widgets with a type and a module included in that namespace.
If the entire contents of the file are in one module, you can also declare namespaces implicitly by using the module keyword and providing the new namespace name in the fully qualified module name. The following example shows a code file that declares a namespace Widgets and a module WidgetsModule, which contains a function.
The following code is equivalent to the preceding code, but the module is a local module declaration. In that case, the namespace must appear on its own line.
If more than one module is required in the same file in one or more namespaces, you must use local module declarations. When you use local module declarations, you cannot use the qualified namespace in the module declarations. The following code shows a file that has a namespace declaration and two local module declarations. In this case, the modules are contained directly in the namespace; there is no implicitly created module that has the same name as the file. Any other code in the file, such as a do binding, is in the namespace but not in the inner modules, so you need to qualify the module member widgetFunction by using the module name.
The output of this example is as follows.
For more information, see Modules (F#).
When you create a nested namespace, you must fully qualify it. Otherwise, you create a new top-level namespace. Indentation is ignored in namespace declarations.
The following example shows how to declare a nested namespace.
Namespaces can span multiple files in a single project or compilation. The term namespace fragment describes the part of a namespace that is included in one file. Namespaces can also span multiple assemblies. For example, the System namespace includes the whole .NET Framework, which spans many assemblies and contains many nested namespaces.
You use the predefined namespace global to put names in the .NET top-level namespace.
You can also use global to reference the top-level .NET namespace, for example, to resolve name conflicts with other namespaces. | https://msdn.microsoft.com/en-us/library/dd233219.aspx | CC-MAIN-2016-07 | refinedweb | 468 | 57.16 |
...one of the most highly
regarded and expertly designed C++ library projects in the
world. — Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards
If you consider test cases as leaves on the test tree, the test suite can be considered as branch and the master test suite as a root. Unlike real trees though, our tree in many cases consists only of leaves attached directly to the root. This is common for all test cases to reside directly in the master test suite. If you do want to construct a hierarchical test suite structure the Unit Test Framework provides both manual and automated test suite creation and registration facilities:
In addition the Unit Test Framework presents a notion of the Master Test Suite. The most important reason to learn about this component is that it provides an ability to access command line arguments supplied to a test module.
The solution the Unit Test Framework presents for automated test suite creation and registration is designed to facilitate multiple points of definition, arbitrary test suites depth and smooth integration with automated test case creation and registration. This facility should significantly simplify a test tree construction process in comparison with manual explicit registration case.
The implementation is based on the order of file scope variables definitions
within a single compilation unit. The semantic of this facility is very similar
to the namespace feature of C++, including support for test suite extension.
To start test suite use the macro
BOOST_AUTO_TEST_SUITE. To end test
suite use the macro
BOOST_AUTO_TEST_SUITE_END. The same
test suite can be restarted multiple times inside the same test file or in
a different test files. In a result all test units will be part of the same
test suite in a constructed test tree.
BOOST_AUTO_TEST_SUITE(test_suite_name); BOOST_AUTO_TEST_SUITE_END();
Test units defined in between test suite start and end declarations become members of the test suite. A test unit always becomes the member of the closest test suite declared. Test units declared at a test file scope become members of the master test suite. There is no limit on depth of test suite inclusion.
This example creates a test tree that matches exactly the one created in the manual test suite registration example.
As you can see test tree construction in this example is more straightforward and automated.
In the example below, the test suite
test_suite
consists of two parts. Their definition is remote and is separated by another
test case. In fact these parts may even reside in different test files. The
resulting test tree remains the same. As you can see from the output both
test_case1 and
test_case2 reside in the same test suite
test_suite.
To create a test suite manually you need to
boost::unit_test::test_suiteclass,
The Unit Test Framework models the notion of test case
container - test suite - using class
boost::unit_test::test_suite.
For complete class interface reference check advanced section of this documentation.
Here you should only be interested in a single test unit registration interface:
void test_suite::add( test_unit* tc, counter_t expected_failures = 0, int timeout = 0 );
The first parameter is a pointer to a newly created test unit. The second optional parameter - expected_failures - defines the number of test assertions that are expected to fail within the test unit. By default no errors are expected.
The third optional parameter -
timeout
- defines the timeout value for the test unit. As of now the Unit
Test Framework isn't able to set a timeout for the test suite
execution, so this parameter makes sense only for test case registration.
By default no timeout is set. See the method
boost::execution_monitor::execute
for more details about the timeout value.
To register group of test units in one function call, the
test_suite class provides another
add
interface covered in the advanced section of this documentation.
To create a test suite instance manually, employ the macro
BOOST_TEST_SUITE. It hides all implementation
details and you only required to specify the test suite name:
BOOST_TEST_SUITE(test_suite_name);
BOOST_TEST_SUITE creates an instance
of the class
boost::unit_test::test_suite and returns a pointer to the
constructed instance. Alternatively you can create an instance of class
boost::unit_test::test_suite yourself.
Newly created test suite has to be registered in a parent one using add interface. Both test suite creation and registration is performed in the test module initialization function.
The example below creates a test tree, which can be represented by the following hierarchy: | https://www.boost.org/doc/libs/1_63_0/libs/test/doc/html/boost_test/tests_organization/test_suite.html | CC-MAIN-2018-26 | refinedweb | 735 | 53.61 |
How can I display timer results with a C++ "putText" command?
I working on several Android OpenCV samples, which use native C++ code to perform operations. I have implemented a timer in the C++ code and I would like to display the results of the timer. I have checked a putText command and it works great but I don' t know how to show the results of the timer with it. Here is the code of native part of the application:
#include <jni.h> #include <opencv2/core/core.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <opencv2/features2d/features2d.hpp> #include <vector> using namespace std; using namespace cv; extern "C" { JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial3_Sample3View_FindFeatures(height + height/2, width, CV_8UC1, (unsigned char *)_yuv); Mat mbgra(height, width, CV_8UC4, (unsigned char *)_bgra); Mat mgray(height, width, CV_8UC1, (unsigned char *)_yuv); //Please make attention about BGRA byte order //ARGB stored in java as int array becomes BGRA at native level cvtColor(myuv, mbgra, CV_YUV420sp2BGR, 4); vector<KeyPoint> v; OrbFeatureDetector detector(1); double t = (double)getTickCount(); detector.detect(mgray, v); t = ((double)getTickCount() - t)/getTickFrequency(); putText(mbgra, t+ " detection time", Point2f(100,100), FONT_HERSHEY_PLAIN, 2, Scalar(0,0,255,255), 2); for( size_t i = 0; i < v.size(); i++ ) circle(mbgra, Point(v[i].pt.x, v[i].pt.y), 10, Scalar(0,0,255,255)); env->ReleaseIntArrayElements(bgra, _bgra, 0); env->ReleaseByteArrayElements(yuv, _yuv, 0); } }
When I compile everything in Eclipse, I get an error: "invalid operands of types 'double' and 'char const [15]' to binary 'operator+'". What am I doing wrong? Is it possible to display the timer results with a "putText" command? How else can I display the measure results?
@Mostafa Sataki Your code is much better but now I get an error: "cannot convert 'double' to 'char const' for argument '2' to 'int sprintf(char, char const*, ...)' ". I don't know how to fix it. Something must be wrong in my timer. Can you help me with it? | https://answers.opencv.org/question/6544/how-can-i-display-timer-results-with-a-c-puttext-command/ | CC-MAIN-2021-39 | refinedweb | 327 | 56.55 |
Hi,
We are trying to use SOA/B2B to transmit cXML files to Ariba. Ariba only provides DTD's (not XSD's). We have converted the DTD to an XSD using XMLSpy but are still having many problems with the generated XSD. I tried to simplify the situation as much as possible. The Ariba spec requires the xml:lang attribute for many elements. SOA does not seem to like this. The XSD below shows as valid within SOA, but I'm unable to assign a variable to the element. I get the following error:
Exception: Invalid reference: ''
I've tried numerous methods, including with and without the "import namespace" tag, nothing seems to work. Any suggestions would be appreciated.
<?xml version="1.0" encoding="UTF-8" ?>
<!--W3C Schema generated by XMLSpy v2013 rel. 2 sp2 ()-->
<!--Please add namespace attributes, a targetNamespace attribute and import elements according to your requirements-->
<xs:schema xmlns:
<xs:import
<xs:complexType
<xs:attribute
</xs:complexType>
<xs:element
</xs:schema>
Another option is to get a sample cXML file with all the possible elements from them and use it to create the XSD.
~Ismail. | https://community.oracle.com/message/11220309?tstart=0 | CC-MAIN-2017-09 | refinedweb | 188 | 55.74 |
22658/how-to-use-binary-tree-search-in-python
# stack_depth is initialised to 0
def find_in_tree(node, find_condition, stack_depth):
assert (stack_depth < max_stack_depth), 'Deeper than max depth'
stack_depth += 1
result = []
if find_condition(node):
result += [node]
for child_node in node.children:
result.extend(find_in_tree(child_node, find_condition, stack_depth))
return result
I need help understanding this piece of code. The question i want to answer is
The Python function above searches the contents of a balanced binary tree. If an upper limit of 1,000,000 nodes is assumed what should the max_stack_depth constant be set to?
From what I understand, this is a trick question. If you think about it, stack_depth is incremented every time the find_in_tree() function is called in the recursion. And we are trying to find a particular node in the tree. And in our case we are accessing every single node every time even if we find the correct node. Because there is no return condition when stops the algorithm when the correct node is found. Hence, max_stack_depth should 1,000,000?
Can someone please try to explain me their thought process.
replace() is a method of <class 'str'> ...READ MORE
In Python programming, pass is a null statement. The ...READ MORE
This is a simple example of a ...READ MORE
You can use Python dictionary it is a built-in type ...READ MORE
suppose you have a string with a ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
Thread is the smallest unit of processing that ...READ MORE
You can use it to raise errors ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/22658/how-to-use-binary-tree-search-in-python | CC-MAIN-2020-05 | refinedweb | 288 | 76.52 |
User Tag List
Results 1 to 5 of 5
Threaded View
- Join Date
- Nov 2004
- Location
- Victoria BC
- 116
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
Passing variables to a form from the controller
In the controller I have an edit and delete (destroy) method. When I have carried out the action I want to pass a set of variables to a basic "success" view that describes what the success was (ie. Success - you have "deleted a record" or Success - you have "saved the edited record") with the things in quotes being the strings (plus others) that I want to pass. Here is the basic "destroy" methodCode:
def destroy EdaOfficer.find(params[:id]).destroy #(variavles to be passed to success form) redirect_to :action => 'success' endRuby, Ruby when will you be mine
Bookmarks | http://www.sitepoint.com/forums/showthread.php?389338-Passing-variables-to-a-form-from-the-controller&p=2803920&mode=threaded | CC-MAIN-2018-05 | refinedweb | 133 | 62.61 |
Generate time series data using Go¶
This tutorial will show you how to generate some mock time series data about the International Space Station (ISS) using Go.
See also
Generate time series data
Table of contents
Prerequisites¶
CrateDB must be installed and running.
Make sure you are running an up-to-date version of Go. We recommend Go 1.11 or higher since you will be making use of modules.
Most of this tutorial is designed to be run as a local project using Go tooling since the compilation unit is the package and not a single line.
To begin, create a project directory and navigate into it:
sh$ mkdir time-series-go sh$ cd time-series-go
Next, choose a module path and create a
go.mod file that declares it. A
module is a collection of Go packages stored in a file hierarchy with a
go.mod file at the root. This file defines the module’s module path, which
is also the import path for the root directory and its dependency requirements.
Without a
go.mod file, your project contains a package, but no module and
the
go command will make up a fake import path based on the directory name.
Make the current directory the root of a module by using the
go mod init command to create a
go.mod file there:
sh$ go mod init example.com/time-series-go
You should see a
go.mod file in the current directory with contents similar
to:
module example.com/time-series-go go 1.14
Next, create a file named
main.go in the same directory:
sh$ touch main.go
Open this file in your favorite code editor.
Get the current position of the ISS¶
Open Notify is a third-party service that provides an API to consume data about the current position, or ground point, of the ISS.
The endpoint for this API is
In the
main.go file, declare the main package at the top (to tell the
compiler that the program is an executable) and import some packages from the
standard library that will be used in this tutorial. Declare a main
function which will be the entry point of the executable program:
package main import ( "encoding/json" "fmt" "io/ioutil" "log" "net/ ) func main() { }
Then, read the current position of the ISS by going to the Open Notify API endpoint at in your browser.
{ "message":"success", "timestamp":1591703638, "iss_position":{ "longitude":"84.9504", "latitude":"41.6582" } }
As shown, the endpoint returns a JSON payload, which contains an
iss_position object with
latitude and
longitude data.
Parse the ISS position¶
To parse the JSON payload, you can create a struct to unmarshal the data into. When you unmarshal JSON into a struct, the function matches incoming object keys to the keys in the struct field name or its tag. By default, object keys which don’t have a corresponding struct field are ignored.
type issInfo struct { IssPosition struct { Longitude string `json:"longitude"` Latitude string `json:"latitude"` } `json:"iss_position"` }
Now, create a function that makes an HTTP GET request to the Open Notify API endpoint and returns longitude and latitude as a Geographic types declaration.
func getISSPosition() (string, error) { var i issInfo response, err := if err != nil { return "", fmt.Errorf("unable to retrieve request: %v", err) } defer response.Body.Close() if response.StatusCode/100 != 2 { return "", fmt.Errorf("bad response status: %s", response.Status) } responseData, err := ioutil.ReadAll(response.Body) if err != nil { return "", fmt.Errorf("unable to read response body: %v", err) } err = json.Unmarshal(responseData, &i) if err != nil { return "", fmt.Errorf("unable to unmarshal response body: %v", err) } s := fmt.Sprintf("(%s, %s)", i.IssPosition.Longitude, i.IssPosition.Latitude) return s, nil }
Above, the
getISSPosition() function:
Uses the net/http package from the Go standard library to issue an HTTP GET request to the API endpoint
Implements some basic error handling and checks to see whether the response code is in the 200 range
Reads the response body and unmarshals the JSON into the defined struct
issInfo
Formats the return string and returns it
Then in the main function, call the
getISSPosition() function and print
out the result:
func main() { pos, err := getISSPosition() if err != nil { log.Fatal(err) } fmt.Println(pos) }
Save your changes and run the code:
sh$ go run main.go
The result should contain your geo_point string:
(104.7298, 5.0335)
You can run this multiple times to get the new position of the ISS each time.
Set up CrateDB¶
First, import the context package from the standard library and the pgx client:
import ( "context" "encoding/json" "flag" "fmt" "io/ioutil" "log" "net/ "github.com/jackc/pgx/v4" )
Then, in your main function, connect to CrateDB using the
PostgreSQL wire protocol port (
5432) and
create a table suitable for writing ISS
position coordinates.
var conn *pgx.Conn func main() { var err error conn, err = pgx.Connect(context.Background(), "postgresql://crate@localhost:5432/doc") if err != nil { log.Fatalf("unable to connect to database: %v\n", err) } else { fmt.Println("CONNECT OK") } defer conn.Close(context.Background()) conn.Exec(context.Background(), "CREATE TABLE [ IF NOT EXISTS ] iss ( timestamp TIMESTAMP GENERATED ALWAYS AS CURRENT_TIMESTAMP, position GEO_POINT )") }
Save your changes and run the code:
sh$ go run main.go
When you run the script this time, the
go command will look up the module
containing the pgx package and add it to
go.mod.
In the The CrateDB Admin UI, you should see the new table when you navigate to the Tables screen using the left-hand navigation menu:
Record the ISS position¶
With the table in place, you can start recording the position of the ISS.
Create some logic that calls your
getISSPosition function and insert the result into the
iss table.
... func main() { ... pos, err := getISSPosition() if err != nil { log.Fatalf("unable to get ISS position: %v\n", err) } else { _, err := conn.Exec(context.Background(), "INSERT INTO iss (position) VALUES ($1)", pos) if err != nil { log.Fatalf("unable to insert data: %v\n", err) } else { fmt.Println("INSERT OK") } } }
Save your changes and run the code:
sh$ go run main.go
Press the up arrow on your keyboard and hit Enter to run the same command a few more times.
When you’re done, you can select that data back out of CrateDB with this query:
SELECT * FROM "doc"."iss"
Tip
You can run ad-hoc SQL queries directly from the Console screen in the Admin UI. You can navigate to the console from the left-hand navigation menu, as before.
Automate the process¶
Now that you have the key components, you can automate the data collection.
In your file
main.go, create a function that encapsulates data insertion:
func insertData(position string) error { _, err := conn.Exec(context.Background(), "INSERT INTO iss (position) VALUES ($1)", position) return err }
Then in the script’s
main function, create an infinite loop that gets the
latest ISS position and inserts the data into the database.
... func main() { ... for { pos, err := getISSPosition() if err != nil { log.Fatalf("unable to get ISS position: %v\n", err) } else { err = insertData(pos) if err != nil { log.Fatalf("unable to insert data: %v\n", err) } else { fmt.Println("INSERT OK") } } fmt.Println("Sleeping for 10 seconds...") time.Tick(time.Second * 10) } }
See also
The completed script source
Above, the
main() function:
Retrieves the latest ISS position through the
getISSPosition()function
Inserts the ISS position into CrateDB through the
insertData()function
Implements some basic error handling, in case either the API query or the CrateDB operation fails
Sleeps for 10 seconds after each sample using the time package
Accordingly, the time series data will have a resolution of 10 seconds. If you wish to change this resolution, you may want to configure your script differently.
Run the script from the command line:
$ go run main.go INSERT OK Sleeping for 10 seconds... INSERT OK Sleeping for 10 seconds... INSERT OK Sleeping for 10 seconds...
As the script runs, you should see the table filling up in. | https://crate.io/docs/crate/howtos/en/latest/getting-started/generate-time-series/go.html | CC-MAIN-2022-21 | refinedweb | 1,340 | 57.06 |
Yesh_02 + 0 comments
A simple solution i would think of is,
STEP-1: Create a array initially with size 10001 with all zeros.
STEP-2: We are going to decrement the value corresponding to the value that we read as input.
for example if the first element of list A is 200, then array[200]--.. Similarly we decrement the values for list A.
STEP-3: Similarly when reading list B increment the values. Since the elements of list A are lost, the resulting values in the array after this operation will be positive.
STEP-4: Print all the index of array with value grater than 0. The numbers will be in asscending order aswell.
This solution can be optimized in a number of ways. Hint is "
Xmax−Xmin<101".
Happy Coding!!
I_love_Pournima + 0 comments
Java users note that if you're using HashMap to represent number frequencies, make sure you use the .equals() method to compare them and not == because Integer is an object while int is a primitive, that was causing me WA
sheldonrong + 0 comments
in Python, if you know Counter from the standard collection library, you can do:
def missingNumbers(arr, brr): a = Counter(arr) b = Counter(brr) return sorted((b - a).keys())
Not sure why this question worth 45 points, its even much eaiser than some of those 15/20 points questions.
coder_aky + 0 comments
Simple in Python 3 :-)
from collections import Counter n,arr=int(input()),list(map(int,input().split())) m,brr=int(input()),list(map(int,input().split())) a,b=Counter(arr),Counter(brr) print(*(sorted((b-a).keys())))
Explanation:
Let us take an example : n=10 arr=11 4 11 7 13 4 12 11 10 14 m=15 brr=11 4 11 7 3 7 10 13 4 8 12 11 10 14 12 a=Counter(arr)=Counter({11: 3, 4: 2, 7: 1, 13: 1, 12: 1, 10: 1, 14: 1}) b=Counter(brr)=Counter({11: 3, 4: 2, 7: 2, 10: 2, 12: 2, 3: 1, 13: 1, 8: 1, 14: 1}) b-a=Counter({7: 1, 3: 1, 10: 1, 8: 1, 12: 1}) (b-a).keys()=dict_keys([7, 3, 10, 8, 12]) sorted((b-a).keys())=3 7 8 10 12
Sort 875 Discussions, By:
Please Login in order to post a comment | https://www.hackerrank.com/challenges/missing-numbers/forum | CC-MAIN-2020-50 | refinedweb | 386 | 61.67 |
I'm wondering how (and in which way it's best to do it) to split a string with a unknown number of spaces as separator in C++/CLI?
Edit: The problem is that the space number is unknown, so when I try to use the split method like this:
String^ line; StreamReader^ SCR = gcnew StreamReader("input.txt"); while ((line = SCR->ReadLine()) != nullptr && line != nullptr) { if (line->IndexOf(' ') != -1) for each (String^ SCS in line->Split(nullptr, 2)) { //Load the lines... } }
And this is a example how Input.txt look:
ThisISSomeTxt<space><space><space><tab>PartNumberTwo<space>PartNumber3
When I then try to run the program the first line that is loaded is "ThisISSomeTxt" the second line that is loaded is "" (nothing), the third line that is loaded is also "" (nothing), the fourth line is also "" nothing, the fifth line that is loaded is " PartNumberTwo" and the sixth line is PartNumber3.
I only want ThisISSomeTxt and PartNumberTwo to be loaded :? How can I do this?
Why not just using System::String::Split(..)?
The following code example taken from , demonstrates how you can tokenize a string with the Split method.
using namespace System; using namespace System::Collections; int main() { String^ words = "this is a list of words, with: a bit of punctuation."; array<Char>^chars = {' ',',','->',':'}; array<String^>^split = words->Split( chars ); IEnumerator^ myEnum = split->GetEnumerator(); while ( myEnum->MoveNext() ) { String^ s = safe_cast<String^>(myEnum->Current); if ( !s->Trim()->Equals( "" ) ) Console::WriteLine( s ); } }
I think you can do what you need to do with the String.Split method.
First, I think you're expecting the 'count' parameter to work differently: You're passing in
2, and expecting the first and second results to be returned, and the third result to be thrown out. What it actually return is the first result, and the second & third results concatenated into one string. If all you want is
ThisISSomeTxt and
PartNumberTwo, you'll want to manually throw away results after the first 2.
As far as I can tell, you don't want any whitespace included in your return strings. If that's the case, I think this is what you want:
String^ line = "ThisISSomeTxt \tPartNumberTwo PartNumber3"; array<String^>^ split = line->Split((array<String^>^)nullptr, StringSplitOptions::RemoveEmptyEntries); for(int i = 0; i < split->Length && i < 2; i++) { Debug::WriteLine("{0}: '{1}'", i, split[i]); }
Results:
0: 'ThisISSomeTxt' 1: 'PartNumberTwo'
Similar Questions | http://ebanshi.cc/questions/4218156/c-cli-split-a-string-with-a-unknown-number-of-spaces-as-separator | CC-MAIN-2017-22 | refinedweb | 395 | 61.46 |
29913/catch-warnings-and-errors-in-rpy2
To get the warnings as an rpy2 object, you can do:
from rpy2.robjects.packages import importr
base = importr('base')
# do things that generate R warnings
base.warnings()
With tryCatch you can handle errors as you want:
an.error.occured ...READ MORE
Use echo=FALSE and fig.cap = "caption in ...READ MORE
Normally to perform supervised learning you need ...READ MORE
For avoiding rowwise(), I prefer to use ...READ MORE
You can use dplyr function arrange() like ...READ MORE
Well it truly depends on your requirement,
If ...READ MORE
The tm package in R provides the stemDocument() function to stem the ...READ MORE
To find the type of an object ...READ MORE
Hey @Ali, as.factor is a wrapper for ...READ MORE
remove import android.R
Then clean and rebuild project. R ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/29913/catch-warnings-and-errors-in-rpy2?show=29914 | CC-MAIN-2020-10 | refinedweb | 148 | 79.97 |
While there are several scenarios that may require you to run .NET code from within Node.js like- programming against a Windows specific interface or running a T-SQL query, there could be possible scenarios where you might have to execute a Node.js code from a .NET application. The most obvious one is where you have to return results from the .NET code to the calling Node script using a callback function, but there could be other possible scenarios like hybrid teams working on processes that run both Node and .NET applications. With Node.js getting a fairly large share of the server side development in recent years, the possibility of such hybrid development could become commonplace.
Edge.js really solves the problem of marshalling between .NET and Node.js (using the V8 engine and .NET CLR) thereby allowing each of these server side platforms to run in-process with one another in Windows, Linux and Mac. Edge can compile the CLR Code (it is primarily C#, but could compile any CLR supported language) and provides an asynchronous mechanism for interoperable scripts. Edge.js allows you to not only marshal data but JS proxies, specifically for .NET to the Func<object, Task<object>> delegate.
To install Edge.js in your .NET application, you can use the NuGet package.
Once you have successfully installed the package, you will see the Edge folder appearing in your solution.
You can then reference the EdgeJs namespace in your class files. The following code illustrates:
Note how the code uses the .NET CLR async await mechanism to support asynchronous callback of a JavaScript function using Node.js and Edge.js. This opens up several possibilities to call server side JavaScript from a .NET application using. | http://www.devx.com/blog/dev_issues/run-node.js-from-inside-a-.net-application-using-edge.js.html | CC-MAIN-2019-35 | refinedweb | 291 | 59.7 |
Let('bucket_name') keys = b.list() return keys
The corresponding test would presumably use some mocks and patching. Here is one way to write a test for the above code:
# Assume the code above is in a module list_keys # in a function list_keys from list_keys import list_keys from mock import patch, Mock def test_list_keys(): mocked_keys = [Mock(key='mykey1'), Mock(key='key2')] mocked_connection = Mock() # Start with patching connect_s3 with patch('boto.connect_s3', Mock(return_value=mocked_connection)): mocked_bucket = Mock() # Mock get_bucket() call mocked_connection.get_bucket = Mock(return_value=mocked_bucket) # Mock the list() call to return the keys you want mocked_bucket.list = Mock(return_value=mocked_keys) keys = list_keys() assert keys == mocked_keys
I thought I really had no other way to get around mocks and patches if I wanted to test this part of my application. But, I discovered moto. Then life became easier.
Using moto's S3 support, I don't need to worry about the mocking and patching the boto calls any more. Here is the same test above, but using moto:
from list_keys import get_s3_conn, list_keys from moto import mock_s3 def test_list_keys(): expected_keys = ['key1', 'key2'] moto = mock_s3() # We enter "moto" mode using this moto.start() # Get the connection object conn = get_s3_conn() # Set up S3 as we expect it to be conn.create_bucket('bucket_name') for name in expected_keys: k = conn.get_bucket('bucket_name').new_key(name) k.set_contents_from_string('abcdedsd') # Now call the actual function keys = list_keys() assert expected_keys == [k.name for k in keys] # get out of moto mode moto.stop()
Unless it is obvious, here are two major differences from the previous test:
We don't mock or patch anything
The point #1 above is the direct reason I would consider using moto for testing S3 interactions rather than setting up mocks. This helps us in the scenario in which this section of the code lies in another package, not the one you are writing tests for currently. You can actually call this section of the code and let the interaction with S3 happen as if it were interacting directly with Amazon S3. I think this allows deeper penetration of your tests and as a result your code's interactions with others.
The test code has to explicitly first setup the expected state
This may seem like more work, but I think it still outweighs the benefits as mentioned previously.
Please checkout moto here.
If you like this post, please follow PythonTestTips on Twitter. | http://echorand.me/replacing-boto-s3-mocks-using-moto-in-python.html | CC-MAIN-2016-40 | refinedweb | 396 | 62.27 |
From: Daniel James (daniel_at_[hidden])
Date: 2004-09-16 08:31:45
Peder Holt wrote:
> On Wed, 15 Sep 2004 09:14:02 -0400, David Abrahams
>>Definitely not. But I'm not sure which parts of the ADL rules you're
>>interested in here.
>
> I am not familiar with the full set of ADL rules, so I'll examplify instead.
I don't think it's the ADL rules that matter. It's whether the compiler
adds friend methods to the namespace containing the class. This appears
to be nonstandard but done by most compilers.
Daniel
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/09/72310.php | CC-MAIN-2020-45 | refinedweb | 118 | 78.35 |
Eden - Event Driven Evaluation Nodes
Project description
Name
Eden - Event Driven Evaluation Nodes.
Licence
QQuick licence, see . N.B. The Eden version on that site is obsolete, the newest version is on
Purpose
Eden is a library that allows rapid declarative construction of applications.
Recent changes
- Kivy version:
- Modules, demo on YouTube
- WinForms version:
- None
How does it work
All program logic and processing is specified by functional dependencies between Nodes. Dependencies can be cyclic and exception handling by rollback is provided. Nodes can be used in any situation, for console apps, batch apps, or apps using any GUI library that has a Python API.
To make life easier, a set of Views is available. Each View is a thin layer on top of a GUI Widget class from the underlying GUI library. Views can be connected to Nodes using Links. Typically a View will be connected to multiple Nodes, but also a Node can be connected to multiple Views. In this way a complete GUI app can be “wired” together. Layout is dynamic. Both data and layout are persistent.
Practical experiences using Eden
Using Eden in everyday practice has proven a pleasure. Eden has been in use for multiple years now by multiple people working on diverse engineering projects. The resulting applications involve dozens of modules, most of them with dozens of nodes, some nodes carrying many megabytes of data. A characteristic of both projects is that requirements rapidly evolved during the project. With Eden it proved remarkably easy to follow the changing requirements. In spite of the fact that requirements changed frequently and deeply, application structure has remained lean and clean. Unfortunately these projects, that otherwise might have very well served as coding and style examples, were all proprietary. One of the people working on a project remarked that with Eden, coding clean, flexible and maintainable program logic was as easy and routinely as drawing up a shopping list.
Learning Eden
Although the tutorial examples are simple for anyone to comprehend, they by far don’t cover all the features. Moreover they are too small to reveal issues of overall program organisation, like the use of the Module mechanism. Using Eden in an effective way for a non-trivial app has a steep (but short) learing curve. It has proven feasible to get a “fresh” developer upto speed in a few days of side by side tutoring. There’s a real need for a freely available elaborated example, though. Currently I concentrate upon the CPython + Kivy version, since mobile- and tablet platforms are where most of the action is. One public domain application that uses the IronPython + WinForms version is Wave (see). It is, however, not yet complete and to specialistic to serve as an example. A killer app would help. As soon as the CPython + Kivy version has some body, I hope to come up with a free multiplatform app that proves the point as well as is suitable as an elaborated example.
Status
Eden for IronPython + WinForms has been used for production programming for multiple years now by several people. Eden for CPython + Kivy is in the early stages of development.
Installation
To prevent name conflicts with future modules, Eden is now imported as follows:
from org.qquick.eden import *
Alternatively, you can import Eden e.g. as follows:
import org.qquick.eden as eden
To make this work, install Eden as follows:
In the ‘site-packages’ or the ‘dist-packages’ directory of the Python version you wish to use, make a subdirectory ‘org’ containing an empty file ‘__init__.py’. Only do so if a directory by that name and with such a file does not already exist.
In that subdirectory ‘org’ make a subdirectory ‘qquick’ containing an empty file ‘__init__.py’. Only do so if a directory by that name and with such a file does not already exist.
In that subdirectory ‘org.qquick’ put the ‘eden’ (lowercase!) subdirectory of Eden. If you want to install Eden under multiple Python versions, e.g. IronPython 2.7 and CPython 2.7, you can use a symlink named ‘eden’ (so NOT a shortcut, google for ‘mklink’).
N.B. 1 Java style URL based unique package names like ‘org.qquick.eden’ are a bit of a hack in Python 2.7, but are de-facto fully supported in Python >= 3.3. This is the result of improvements in the module search mechanism, which will now search multiple ‘org’ subdirectories for ‘qquick’ and not give up after the first one encountered. The decision to use URL based package names is based on the ever growing set of standard modules that are becoming part of the Python distribution. This has already resulted in name clashes with packages that were written before a standard module by the same name was introduced.
N.B. 2 The WinForms (stable) version works with IronPython 2.7. The Kyvi version (under development) works with CPython 2.7 and Kivy 1.8 you can e.g. use the Python 2.7 that comes with Portable Kivy for Windows. The Kyvi version should also run under Linux, although that is only tested infrequently.
Getting started
Using IronPython + WinForms
Tutorial programs are in the tutorialWinForms directory
Using CPython + Kivy
Tutorial programs are in the tutorialKivy directory
Compatibility
The IronPython + WinForms version has been tested and used extensively on Windows from XP to 8.1. It has never been tested on Linux + Mono.
The Views of the CPython + Kivy version will reflect the particularities of Kivy and of the diversity of platforms it should run at. So, although there will be many common elements, there will be no one to one correspondence between Views based on Kivy and Views based on WinForms.
The essence of the matter, the API of the underlying Event Driven Evaluation Nodes pattern, however, is the same. Only the GUI part differs.
Future
Plans are to build out and fully document Eden and stay committed to it for a long time to come. However not any, even implied guarantee is made with respect to its continuity. Time will have to prove whether it acquires mindshare.
There exists a proprietary commercial port of Eden to Qt using PyQt. It runs on Linux and Windows and was made and is owned by a third party. It is not available as open source software, but its existence has proven the portability of Eden.
Some work has been done on a TkInter version, but it has been abandoned in favor of Kivy.
Co-Development
The code of the Eden project is hosted by GitHub. The plan is to involve more developers as soon as the Kivy version is well underway. Completing the TkInter version e.g. would be great… Coding for Eden requires thorough understanding of the Node/Link/View concepts, including rollback and cyclic dependencies. The essence is in the Node library module. Although it is a small, extensively commented module, it is quite hard grab the nifty details. A very concise description is what I’ll have to come up. | https://pypi.org/project/Eden/2.1.13.Modules-.demo.on.YouTube/ | CC-MAIN-2018-26 | refinedweb | 1,173 | 65.32 |
Member since 04-07-2017
3
0
Kudos Received
0
Solutions
04-07-2017 04:50 AM
04-07-2017 04:50 AM
No...we don't have any issues so far with HBASE. I am just trying to understand the concept of it. Thanks for your assistance 🙂 ... View more
04-07-2017 04:41 AM
04-07-2017 04:41 AM
so when the above command delete everything i will lost all HBASE tables/WAL's. Is the above steps recommened in Production environment? ... View more
04-07-2017 01:28 AM
04-07-2017 01:28 AM
I have same doubt what happens when i delete "hdfs dfs -rm -r /hbase/*" will lost data/namespaces in HBASE or HDFS? ... View more | https://community.cloudera.com/t5/user/viewprofilepage/user-id/21557 | CC-MAIN-2020-34 | refinedweb | 123 | 91.11 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.