I had to put together a small web app the other day, using SQLAlchemy and Flask. Because I hate writing code multiple times, when I can do things using a better way, I wanted to be able to serialise SQLAlchemy ORM objects straight to JSON.
I decided on an approach where, taking a leaf out of JavaScript, I would optionally implement a tojson() method on a class, which I would attempt to call from my JSONEncoder.
It turns out to be relatively simple to extend SQLAlchemy's declarative base class to add additional methods (we can also use this as an excuse to implement a general __repr__()).
from sqlalchemy.ext.declarative import declarative_base as real_declarative_base

# Let's make this a class decorator
declarative_base = lambda cls: real_declarative_base(cls=cls)

@declarative_base
class Base(object):
    """
    Add some default properties and methods to the SQLAlchemy declarative base.
    """

    @property
    def columns(self):
        return [ c.name for c in self.__table__.columns ]

    @property
    def columnitems(self):
        return dict([ (c, getattr(self, c)) for c in self.columns ])

    def __repr__(self):
        return '{}({})'.format(self.__class__.__name__, self.columnitems)

    def tojson(self):
        return self.columnitems
We can then define our tables in the usual way:
class Client(Base):
    __tablename__ = 'client'
    ...
You can obviously replace any of the methods in your subclass, if you don’t want to serialise the whole thing. Bonus points for anyone who wants to extend this to serialise one-to-many relationships.
And what about calling the tojson() method? That's easy: we can just provide our own JSONEncoder.
import datetime
import json

class JSONEncoder(json.JSONEncoder):
    """
    Wrapper class to try calling an object's tojson() method. This allows
    us to JSONify objects coming from the ORM. Also handles dates and
    datetimes.
    """

    def default(self, obj):
        if isinstance(obj, datetime.date):
            return obj.isoformat()
        try:
            return obj.tojson()
        except AttributeError:
            return json.JSONEncoder.default(self, obj)
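The encoder pattern above can be tried without any ORM at all: any object that exposes a tojson() method becomes JSON-serialisable. The Point class below is purely illustrative (it stands in for an ORM object and is not part of the original post):

```python
import datetime
import json

class JSONEncoder(json.JSONEncoder):
    """Same idea as above: try the object's tojson() method first."""
    def default(self, obj):
        if isinstance(obj, datetime.date):
            return obj.isoformat()
        try:
            return obj.tojson()
        except AttributeError:
            return json.JSONEncoder.default(self, obj)

class Point:
    """Stand-in for an ORM object; anything with a tojson() method works."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def tojson(self):
        return {'x': self.x, 'y': self.y}

print(json.dumps({'p': Point(1, 2), 'd': datetime.date(2013, 3, 7)},
                 cls=JSONEncoder, sort_keys=True))
# {"d": "2013-03-07", "p": {"x": 1, "y": 2}}
```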
Cutting-edge Flask provides a way to replace the default JSON encoder, but the version I got out of pip does not. This is relatively easy to work around, though, by replacing jsonify with our own version.
from flask import Flask

app = Flask(__name__)

def jsonify(*args, **kwargs):
    """
    Workaround for Flask's jsonify not allowing replacement of the
    JSONEncoder in my version of Flask.
    """
    return app.response_class(json.dumps(dict(*args, **kwargs), cls=JSONEncoder),
                              mimetype='application/json')
If you do have a newer Flask, where you don’t have to replace jsonify, you can also inherit from Flask’s JSONEncoder, which already handles things like datetimes for you.
3 thoughts on “Generating JSON from SQLAlchemy objects”
My experience with Django over the years is that a direct ORM-to-JSON serializer is much less useful than you would imagine, because you usually get either too much (wasting bytes and processing power on both ends), or the wrong format, or something less useful than it could be. What I do is something like this in the view:
data = json.dumps([dict(foo=o.foo, bar=massage(o.bar), …) for o in some_queryset])
Simple, flexible and explicit, using the power of Python. Of course, YMMV.
BTW, in your columnitems you don't actually need to use a list comprehension; instead you can just use a generator expression, i.e.:
return dict((c, getattr(self, c)) for c in self.columns)
Generator expressions return a lazily evaluated iterable. They arrived later than list comprehensions, so somehow people still don't think of them. I personally try to think of a list comprehension as a generator expression materialized into a list. Often you don't need the materialization, and then it's just more code and more work for the interpreter.
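The commenter's point is easy to demonstrate with plain built-ins; the names below (columns, Row) are illustrative only. dict() happily consumes the generator expression, and the pairs are produced one at a time rather than materialised into a list first:

```python
columns = ['id', 'name', 'height']

class Row:
    """Illustrative stand-in for an ORM row."""
    id, name, height = 1, 'fir', 20

row = Row()

# List comprehension: builds the whole list of pairs first.
aslist = dict([(c, getattr(row, c)) for c in columns])

# Generator expression: pairs are produced lazily as dict() asks for them.
gen = ((c, getattr(row, c)) for c in columns)
print(type(gen).__name__)  # generator
asgen = dict(gen)

print(aslist == asgen)  # True
```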
Thanks for a nice article.
I think it’s useful sometimes that models have default serialization methods. Of course that does not fit in every usecase, it helps to make some tasks easy, e.g. scaffolding admin controller/views if models are not so complex.
Indeed, the point here is to provide a default case that is overridden when needs/security/speed require it, either by implementing your own columns property, your own tojson() method, or by wrapping the object in a "view" (which provides its own tojson()) that prepares the JSON in a particular way.
I'm at SHDH26 right now. We now have a fancy login system that prints out name tags for you!
Tom Harrison, Ernesto S., and David S. had most of the system working last night, with the exception of the automatic label printer. This morning rndmcnlly and I got the printer working in about 2 hours!
Here is how we did it:
First we had to get the printer working on a Mac; this was pretty easy. We downloaded the drivers and set up the printer through the normal means.
After that we wrote the following two files:
"~/.enscriptrc":
Media: Label 162 288 10 10 152 278
- Use the command enscript --list-media to make sure that your .enscriptrc is being parsed.
- Use man enscript for more info
- The numbers are in Postscript points!
"~/printer_server.py":
from twisted.web.resource import Resource
from twisted.web import server
from twisted.internet import reactor
from subprocess import *

class Launcher(Resource):
    isLeaf = True

    def render_POST(self, request):
        names = ['first_name', 'last_name']
        items = map((lambda l: request.args[l][0]), names)
        page = "\n".join(items)
        p = Popen(["enscript", "-r", "-B", "-f", "Helvetica35", "-MLabel"],
                  stdin=PIPE)
        p.stdin.write(page)
        p.stdin.close()
        return page

reactor.listenTCP(1080, server.Site(Launcher()))
reactor.run()
Once the files were written, we ran the printer server using the "python printer_server.py" command.
The printer server above listens on TCP port 1080; we had to change the port number because Comcast blocks the port we originally used.
The server listens on port 1080 for a HTTP POST and prints the value "first_name" on the first line and "last_name" on the second line.
Labels are printed by the web application sending an HTTP POST to the port of the computer connected to the printer - we had to set up port forwarding to get this working.
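In outline, the client side of that POST is tiny. The sketch below is not from the original post: the host name is a placeholder for wherever the port forwarding points, and it is written for Python 3 (the server above is Python 2-era Twisted). It only builds the request; calling urlopen(req) would actually send it:

```python
from urllib.parse import urlencode
from urllib.request import Request

def label_request(first_name, last_name,
                  host='printer-host.example', port=1080):
    """Build the POST that printer_server.py expects (first_name, last_name)."""
    body = urlencode({'first_name': first_name,
                      'last_name': last_name}).encode('ascii')
    return Request('http://%s:%d/' % (host, port), data=body)

req = label_request('Ada', 'Lovelace')
print(req.get_method())  # POST
print(req.data)          # b'first_name=Ada&last_name=Lovelace'
```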
Unfortunately we didn't have time to implement Lee Felsenstein's suggestion to let people "tag" themselves. We'll have that working by the next SHDH - maybe even just let people go freeform on their tags.
Tom's end goal for this is to eventually have a "DevHouse Appliance" that other people could easily set up to do the same thing!
August 31 2008, 01:27:17 UTC
Glad you got it working.
November 16 2008, 18:04:53 UTC
#include <wx/richtooltip.h>
Allows showing a tool tip with more customizations than wxToolTip.
Using this class is very simple: to give a standard warning for a password text control if the password was entered incorrectly, you could simply do:
Currently this class has a generic implementation that can be used with any window and implements all the functionality, but doesn't exactly match the appearance of the native tooltips (even though it makes some effort to use the style most appropriate for the current platform), and a native MSW version which can only be used with text controls and doesn't provide as much in the way of customization. Because of this, it's inadvisable to customize the tooltips unnecessarily, as doing so turns off auto-detection of the native style in the generic version and may prevent the native MSW version from being used at all.
Notice that this class is not derived from wxWindow and hence doesn't represent a window, even if its ShowFor() method does create one internally to show the tooltip.
The images below show some examples of rich tooltips on different platforms, with various customizations applied.
Constructor must specify the tooltip title and main message.
The main message can contain embedded new lines. Both the title and message must be non-empty.
Additional attributes can be set later.
Destructor.
Notice that destroying this object does not hide the tooltip if it's currently shown, it will be hidden and destroyed when the user dismisses it or the timeout expires.
The destructor is non-virtual as this class is not supposed to be derived from.
Set the background colour.
If two colours are specified, the background is drawn using a gradient from top to bottom, otherwise a single solid colour is used.
By default the colour or colours most appropriate for the current platform are used. If a colour is explicitly set, the native MSW version won't be used, as it doesn't support setting the colour.
Set the timeout after which the tooltip should disappear, and optionally set a delay before the tooltip is shown, in milliseconds.
By default the tooltip is shown immediately and hidden after a system-dependent interval of time elapses. This method can be used to change this or also disable hiding the tooltip automatically entirely by passing 0 in this parameter (but doing this will prevent the native MSW version from being used).
Notice that the tooltip will always be hidden if the user presses a key or clicks a mouse button.
Parameter millisecondsDelay is new since wxWidgets 2.9.5.
Choose the tip kind, possibly none.
See wxTipKind documentation for the possible choices here.
By default the tip is positioned automatically, as if wxTipKind_Auto was used. Native MSW implementation doesn't support setting the tip kind explicitly and won't be used if this method is called with any value other than wxTipKind_Auto.
Notice that using non automatic tooltip kind may result in the tooltip being positioned partially off screen and it's the callers responsibility to ensure that this doesn't happen in this case.
Set the title text font.
By default it's emphasized using the font style or colour appropriate for the current platform. Calling this method prevents the native MSW implementation from being used as it doesn't support changing the font.
Show the tooltip for the given window and optionally specify where to show the tooltip.
By default the tooltip tip points to the (middle of the) specified window which must be non-NULL or, if rect is non-NULL, the middle of the specified wxRect.
The coordinates of the rect parameter are relative to the given window.
Currently the native MSW implementation is used only if win is a wxTextCtrl and rect is NULL. This limitation may be removed in the future.
Parameter rect is new since wxWidgets 2.9.5. | https://docs.wxwidgets.org/trunk/classwx_rich_tool_tip.html | CC-MAIN-2021-17 | refinedweb | 647 | 52.9 |
Eric Blake <ebb9 <at> byu.net> writes:

> When using both stackoverflow_install_handler and segv_handler_missing, a
> SIGSEGV from dereferencing NULL will be wrongly treated as a stack
> overflow on platforms that use mincore to check if the fault is near the
> stack. In stackvma-mincore.c, mincore_is_near_this recognizes that
> computation of a target address in between the fault and the stack causes
> overflow, but then it calls is_unmapped(0,0) anyway. Since the page
> containing 0 is unmapped, this results in claiming that a fault on NULL is
> treated as a fault near the stack, and the stack overflow handler is
> incorrectly invoked.

On the other hand, it looks like the following patch is better (at any rate, it matches the comments in the file). The libsigsegv testsuite is immune (it can get away with calling whatever it wants in the signal handler, since the stack overflow is not called by a non-async function), but it is much harder to make those same guarantees for a real-life program. Technically, it might be possible to determine the worst-case stack depth of any non-recursive call chain that uses non-async-safe functions, then write the recursive functions to intentionally probe the stack out larger than that depth prior to invoking non-async functions, so that the stack overflow is then guaranteed to occur without interrupting a non-async function, but this is not trivial.

2008-07-17  Eric Blake  <address@hidden>

        Fix is_near_this logic to match comments.
        * src/stackvma-mincore.c (mincore_is_near_this): Use correct
        bounds to is_unmapped.
        * NEWS: Document the fix.

Index: NEWS
===================================================================
RCS file: /sources/libsigsegv/libsigsegv/NEWS,v
retrieving revision 1.16
diff -u -p -r1.16 NEWS
--- NEWS        28 May 2008 01:02:13 -0000      1.16
+++ NEWS        17 Jul 2008 14:43:15 -0000
@@ -1,6 +1,7 @@
 New in 2.6:
 * Support for 64-bit ABI on MacOS X 10.5.
+* Fix false positives in determining stack overflow when using mincore.

 New in 2.5:

Index: src/stackvma-mincore.c
===================================================================
RCS file: /sources/libsigsegv/libsigsegv/src/stackvma-mincore.c,v
retrieving revision 1.1
diff -u -p -r1.1 stackvma-mincore.c
--- src/stackvma-mincore.c      15 May 2006 12:01:12 -0000      1.1
+++ src/stackvma-mincore.c      17 Jul 2008 14:35:12 -0000
@@ -1,5 +1,5 @@
 /* Determine the virtual memory area of a given address.
-   Copyright (C) 2006 Bruno Haible <address@hidden>
+   Copyright (C) 2006, 2008 Bruno Haible <address@hidden>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
@@ -227,7 +227,7 @@ mincore_is_near_this (unsigned long addr
     unsigned long testaddr = addr - (vma->start - addr);
     if (testaddr > addr) /* overflow? */
       testaddr = 0;
-    return is_unmapped (testaddr, addr);
+    return is_unmapped (testaddr, vma->start - 1);
   }
 #endif
@@ -246,7 +246,7 @@ mincore_is_near_this (unsigned long addr
     unsigned long testaddr = addr + (addr - vma->end);
     if (testaddr < addr) /* overflow? */
       testaddr = ~0UL;
-    return is_unmapped (addr, testaddr);
+    return is_unmapped (vma->end, testaddr);
   }
 #endif
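The effect of the one-line change can be modelled with ordinary code. The following is a toy model, not libsigsegv code: is_unmapped here just checks a set of "mapped" pages, and the addresses are made up, but the arithmetic mirrors the patch. With the old bounds, a fault at NULL probes only the range (0, 0), which is unmapped, so it looks like a stack overflow; with the new bounds the probe spans the whole gap up to vma->start - 1 and finds the mapped text segment:

```python
PAGE = 0x1000
mapped_pages = {0x400000 // PAGE, 0xbffff000 // PAGE}  # say: program text, stack

def is_unmapped(lo, hi):
    """Toy stand-in: True iff no page in [lo, hi] is mapped."""
    return all(p not in mapped_pages for p in range(lo // PAGE, hi // PAGE + 1))

vma_start = 0xbffff000                          # bottom of the stack VMA
fault = 0                                       # SIGSEGV from dereferencing NULL
testaddr = max(0, fault - (vma_start - fault))  # the overflow, clamped to 0

old_result = is_unmapped(testaddr, fault)          # old bounds: is_unmapped(0, 0)
new_result = is_unmapped(testaddr, vma_start - 1)  # new bounds: sees mapped text

print(old_result, new_result)  # True False
```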
I'm having an issue with Office 2013 Pro Plus whereby accessing files on a shared drive hangs with a 'Contacting \\domain\shared\drive' screen for anywhere from 5 seconds to 30 seconds.
It's really weird behaviour: if you open Word (PowerPoint, Excel and Publisher too) and attempt to open a file, the prompt will open up fine and let you get to My Computer, but as soon as you open the shared drive it hangs on this screen. It will then list the directory, and if you click on the next subfolder it hangs, and the same for the remaining subfolders. Once you've got to the file you want to open, it then 9/10 times opens without the progress bar. On the other side of that, if I open and traverse a user's My Documents/Pictures, which is redirected to the same file server, it doesn't hang at all, so I don't think it's the connection.
All of the computers here are Windows 7 Professional x64 and all have the latest security and critical updates. We use Avast AV (I temporarily disabled this completely and the issue still occurs). We use a software called Impero which can be heavy on the network, however I've disabled that too and it still occurs.
I had the 'remove the run command from start' GPO enabled, so I thought it might be an issue with users not being able to traverse the UNC path, but disabling that didn't help.
The file server runs on a 2012 R2 box, no deduplication, but does run DFS namespace, which points our shared to \\domain.local\shared\staff.
I've disabled hardware acceleration, I've disabled Office File Validation. I don't see it being an issue with the Word default template as it's across all of the office suite.
Would a permission really cause this to hang each time? I mean once the hang stops they can open the folders so they do have permissions to the drive.
The only thing I could think is the permissions are set so :
\\domain.local\shared
Sorry for the mass of information, but the more troubleshooting the merrier. Any help on this would be truly appreciated!
9 Replies
Here is a link to another Spiceworks thread regarding this issue that may be helpful to you:...
Has this just started happening? Could this be related to the Microsoft security update that was released a couple of days ago that affects GPO permissions? Check if you have update KB3159398 installed. If you do, then in WSUS approve KB3159398 for removal, and the next time updates run it should clear the problem.
This problem started happening for us around the beginning of April 2016. Our users would get it at random times when they were opening or saving an Office file to their file share. When it did happen the users saw an "Opening" or "Saving" window pop-up on the screen for 10 - 25 seconds, during which time the software was frozen. I researched this issue and I too found the SpiceWorks article that chrissmith35 mentioned in a previous post. The cause of the problem for their Office slowness was the use of eset and we don't use it. So I kept looking and found lots of different solutions that fixed this problem for people and I tried most of them. Unfortunately it did not solve our slowness with Office files on a file share. The one common factor I found though was that everyone was using Office 2010, so I told my users to wait until Office 2013 when hopefully the problem would be solved. I guess that hope has been dashed since lpollard is having the same problem with Office 2013. UGH!!!
If anyone does figure this out I too would be glad to hear about it.
chrissmith36: I've had a look through this article and attempted the various suggestions to no avail, likewise I have Avast not ESET and despite disabling the AV to check this, it still occurs.
Craig (Cook Trading): this is been happening for several weeks now roughly around April as scls-Brian had suggested. However I will be checking WSUS and removing the relevant KB's thanks for the info!
scls-Brian: as you mention it, I too found it happening around April ish. The odd thing I've found is that some staff users seem fine and others get the issue. I'm really struggling to pin down the issue. I came into work one day and it was happening, the next day the user came in saying thanks for fixing it (which I hadn't touched anything) and then a day or two later it was happening again!
So after some more digging, i found an issue with the shared drive that they're trying to access. At the moment it's mapped to \\domain\shared\drive. If i try to access this in an explorer window then I get a login prompt. However if I access this through Office it does the loading screen and then eventually opens up the files.
However if I remap the drive as \\server\shared\drive rather than \\domain\shared\drive I don't get the issue.
It only occurs when I access it via the DFS namespace. | https://community.spiceworks.com/topic/1670088-microsoft-office-2013-hangs-when-opening-files-from-shared-drive | CC-MAIN-2019-51 | refinedweb | 895 | 69.11 |
Suppose we have two arrays containing integers. One array contains the heights of some unit-width boxes, and the other contains the heights of the rooms in the godown. The rooms are numbered 0...n, and the height of each room is provided at its respective index in the array godown. We have to find out the number of boxes that can be pushed into the godown. A few things have to be kept in mind:
The boxes can’t be put one on another.
The order of the boxes can be changed.
The boxes are put into the godown from left to right only.
If a box is taller than the height of the room, then the box along with all the boxes to its right cannot be pushed into the godown.
So, if the input is like boxes = [4,5,6], godown = [4, 5, 6, 7], then the output will be 1 Only one box can be inserted. The first room is of size 4 and the rest cannot be pushed into the godown because the boxes have to be pushed through the first room and its length is smaller than the other boxes.
To solve this, we will follow these steps −
sort the list boxes
curmin := a new list containing the first element of godown
cm := curmin[0]
for i in range 1 to size of godown, do
cur := godown[i]
if cur < cm, then
cm := cur
insert cm at the end of curmin
i := 0
j := size of godown -1
r := 0
while j >= 0 and i < size of boxes, do
if curmin[j] >= boxes[i], then
i := i + 1
r := r + 1
j := j - 1
return r
Let us see the following implementation to get better understanding −
def solve(boxes, godown):
    boxes.sort()
    curmin = [godown[0]]
    cm = curmin[0]
    for i in range(1, len(godown)):
        cur = godown[i]
        if cur < cm:
            cm = cur
        curmin.append(cm)
    i, j = 0, len(godown) - 1
    r = 0
    while j >= 0 and i < len(boxes):
        if curmin[j] >= boxes[i]:
            i += 1
            r += 1
        j -= 1
    return r

print(solve([4, 5, 6], [4, 5, 6, 7]))
Input:
[4,5,6], [4, 5, 6, 7]
Output:
1
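As a sanity check (not part of the original solution), the greedy answer can be compared with a brute force that tries every ordering of the boxes and simulates pushing each one as far right as it can go; this is only feasible for tiny inputs:

```python
from itertools import permutations

def brute(boxes, godown):
    # curmin[p]: the height a box must fit under to reach room p from the left.
    curmin, m = [], float('inf')
    for h in godown:
        m = min(m, h)
        curmin.append(m)
    best = 0
    for order in permutations(boxes):
        deepest, count = len(godown) - 1, 0
        for box in order:
            p = deepest  # rightmost room the box can reach, at most `deepest`
            while p >= 0 and curmin[p] < box:
                p -= 1
            if p < 0:
                break  # this box is stuck, blocking everything behind it
            deepest, count = p - 1, count + 1
        best = max(best, count)
    return best

print(brute([4, 5, 6], [4, 5, 6, 7]))  # 1, agreeing with the greedy solve()
```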
User talk:Juandev
Test and Quiz[edit]
Hello Juan, I have replied to your question here. --Erkan Yilmaz 18:51, 17 December 2006 (UTC)
Spanish speakers[edit]
To find other teachers, please look at the lists here and here. You can also find potential students on both of those pages. The Jade Knight 22:08, 15 March 2007 (UTC)
- Ou, thanks and also for copyediting. I will have a look next week.--Juan 14:36, 16 March 2007 (UTC)
- No problem. I've hit some of the major things that will cause comprehension problems, but many of them still could use further proofreading. I highly recommend you check out the English Language Department and the Writing Center and check out some of the resources there to brush up on your English. If you're not sure what a word is in English, Wiktionary can be helpful, as well as the English Wikipedia (I use this method all the time for French words I don't know). The Jade Knight 10:02, 22 March 2007 (UTC)
- OK, next time Ill try to do better job. I will use Microsoft Word to corect my texts. I promise.--Juan 12:11, 22 March 2007 (UTC)
ayudar[edit]
I can help, but coordinate with Spanish One and make it very clear what can be done; it is not at all obvious for those of us who come in here
- The thing is that here we have two different teaching systems, but I will try to do something with this. Perhaps next week.--Juan 18:34, 23 March 2007 (UTC)
- Although I have not edited on Wikiversity for some time, I have not cut my ties with it. Right now I am preparing a survey on the use of wikis for learning vocabulary. I am sorry that I cannot help you at the moment. In any case, I will keep an eye on your contributions. Also, in case you did not know, the extension for making knowledge quizzes has just been implemented on Wikiversity. [1] I hope you can make good use of it. Regards, --Javier Carro 14:00, 30 March 2007 (UTC)
Thank you very much for your answer and for the link. Here on Wikiversity there are "scripts" for testing, but they are very difficult for me. I need something easier. I hope the one in MediaWiki will be better. As for vocabulary, that is what we need for the language courses here for the students: a useful dictionary. The thing is, Wiktionary is useless; we use external dictionaries. Perhaps the Sandbox server will give us a new platform. For now I can only prepare the word database with others, as shown in the discussion here.--Juan 14:13, 30 March 2007 (UTC)
Topic:Spanish[edit]
Hi Juan - I just saw this comment you made a while back. I'm just interested - what do you mean by "an unfriendly environment" at the old Topic:Spanish page? It'd be good to know how to make things a little friendlier. :-) Salud. Cormaggio talk 00:10, 23 April 2007 (UTC)
- Well. "Unfriendly environment" was a html script there. Template, which I uploaded there is friendlier to edits, but of course it will be still... Anyway someone may say that the future appereance was friendlier than this. So it is a question. I had to change its source code, cus I coudlnt support that code by new data (lot of time looking what fits where.--Juan 13:47, 23 April 2007 (UTC)
fr wikiversity[edit]
I have just seen your comment and I'm sorry to answer your question so late. If you still need a poster, contact me on the French Wikiversity
fr:Utilisateur:Vivelefrat
spanish course[edit]
It says to register or sign in. How do I do that. I am very interested in learning Spanish for personel use as well as for work. Please an suggestion as to improve my spanish and to join different classes would be greatly appriciated
- Well, it sais. But I am not actually having a computer at home so, I can support the spanish courses so much as I would like. Anyway, you allready now a little bit of Spanish. I so it woudl be better to meet e.g via skype?--Juan 11:47, 27 June 2007 (UTC)
Hola[edit]
Hi Juan. Your invitation made my day. The previous work on this page was something too cold and naive. It irritated me so I left. I've checked your proposal and is very good. I think it's very important, to make it more faster, to create that IRC channel. I would like to colaborate, as a teacher (I'm native speaker) and with recordings. I'd like to meet you also to coordinate better --Elnole 01:01, 26 July 2007 (UTC)
- Nice to here that. I think our students are a little bit pissed off, but they dont understand that it is not a fun, to make a lively course. Anyway, recently a few people started to do lessons - so thats good. I think we need to apply slogan of Wikiversity of "Studying by doing" teachers will just coordinate students and practice with them. Students on other hand will study and slowly, make a teaching content wich will be controlled by us. Anyway, have you seen my concept on Spanish: An Introduction? There are still some of the technical difficulties, thats why it slowed down its development. I hope people from service and support will help us more in the future, that we can do our work more effective.--Juan 13:09, 26 July 2007 (UTC)
OK. When be back here on wikiversity lets record some words and during that time I might prepare some text. Now I am ready to buy a computer and internet to home, so everything will hopefully go faster.--Juan 10:47, 20 August 2007 (UTC)
Note[edit]
I left a reply for you at Wikiversity talk:Support staff. McCormack 12:53, 27 July 2007 (UTC)
Hola[edit]
Hello Juan! I'm a university student studying Spanish with the intent of becoming a professor and I'd love to help with the Spanish division project. Just drop me a line and let me know what I can do to help.
Thanks, klmv
- Are you able to record pronunciation? If so, could you record the orange words from these pages: Spanish: An Introduction/Pronunciation#Pronunciation and Spanish: An Introduction/Pronunciation#Stress. If not, there is also a need to build up a second lesson here: Spanish: An Introduction/Hola. Anyway, we might have a Skype or IRC chat with other people and coordinate our work here. There is a lot to do. And we have one advantage: there is continuous interest in Spanish courses.--Juan 10:44, 20 August 2007 (UTC)
Regarding private foundation[edit]
Hi Juan, I think you are referring to an idea to establish a foundation dedicated to just Wikiversity. At the time the Wikimedia Foundation had been stalling over whether to authorize Wikiversity on its servers for several years. The idea generated no interest and a lot of hostility so I have not pursued it. At this time support from Wikimedia Foundation seems adequate so I see no need for a competing Foundation. Thanks for the inquiry. user:mirwin
Bloom Clock stuff[edit]
Hi Juan,
Interesting talking to you on IRC and your talk at the same time :). Here's the scoop:
When there is a log page available, add your signature there, rather than on the text list (otherwise it just has to be moved later). The easiest way is to just hit the little edit link above the recent logs list.
Then add templates to the BCP profile. I only set them up for this account (Juan) so far:
- add {{bcp/prag/10}} to categorize for "blooming in October in Prague"
- if you want to add the status as well, add {{bcp/prag/np}} for native plants, {{bcp/prag/ip}} for invasive plants, and/or {{bcp/prag/gp}} for cultivated plants (plants can of course be both invasive and cultivated, or native and cultivated).
More on the keys later, but the unsorted key for Prague in October is Bloom Clock/Keys/Prague/October/All. --SB_Johnny | talk 13:11, 14 October 2007 (UTC)
solar system[edit]
hi, and thanks for your work on the solar system pages. I noticed that Solar System/Earth/Solar System overwiev has an incorrect spelling of "overview" in the title. The pages in Category:Solar System should probably be renamed. Let me know if you need any help with this by leaving a message at my talk page.--mikeu 00:23, 10 December 2007 (UTC)
{{Wmdgs-wikipedia-survey2}}[edit]
Hi Juan. I was looking at this template, and noticed a number of issues with it (sorry) :). First, you're not really using #switch correctly, so the box and category displays won't work... I started a replacement template at {{Wmdgs-wikipedia-survey2a}}, where you should be able to see the switching language used better. {{Wmdgs-makesurveyquestion-2}} can help you design future questions that have 2 possible answer (I'm trying to catch hold of darkcode to create on that allows any number of answers).
Otherwise, I'm not sure about a few of the actual topics.
- "IP/registered" and "number of accounts" might be more for a wikimedia survey rather than just wikipedia. Also maybe use a scaled response (always registered, usually registered, sometimes ..., rarely ..., never registered)
- "Home project" is already covered in the general survey
- The group of questions on "Why someone made their first edit" could probably be collapsed into one single question.
- Language questions should be a different survey altogether, since this affects how someone approaches all the projects (not just WP).
- How someone first discovered wikimedia and their first project are probably better in a general survey of all wikimedians (since it can ask instead just "what was your first project?", "which project did you first edit?", "which project did you first read?")
I'm not really sure what you mean by "left to other project". --SB_Johnny | talk 12:42, 13 January 2008 (UTC)
- Well, you are right, that I was not thinking a lot when creating this template. I have just copyed one of yourse and overpasting some data. And now answers to your points:
- 1) Well, some users, are editing regullary and they are not register (I know this from cs), so I wanted to ask: Are you registered or not? The second part: "scalede responsee" - I should tell you that I dont know what does it mean e.g. "rarely registere"? On Wikiversity we have more accounts on Wikipedia it is not so much common, so you are right that it would be better to leave this answer for future, I would like to also know, why people on wikipedia has more accounts.
- 2) I thout, each survey is unique. Now I see, I was wrong. But, it is possible that users, will not respond all surveys - then well be missing answers.
- 3) well, yes.
- 4) here would be the same reply as for no. 2
- 5) hmmm.--Juan 20:51, 14 January 2008 (UTC)
- I hope you don't mind, but I copied this to Template talk:Wmdgs-wikipedia-survey2 (I was relinking stuff on the Wikipedia discussion page). --SB_Johnny | talk 12:04, 16 January 2008 (UTC)
Wikimedian Demographics update[edit]
Hello Juan.:58, 5 February 2008 (UTC)
- Oh, thanks for this. Right now I am completely without free time.--Juan 22:47, 5 February 2008 (UTC)
Czech content[edit]
ping, ----Erkan Yilmaz Wikiversity:Chat 21:23, 7 March 2008 (UTC)
- Could you also have a look here please? Lately there have been many Czech pages here - is there an invasion? :-) ----Erkan Yilmaz Wikiversity:Chat 18:58, 11 March 2008 (UTC)
Genus cats[edit]
Why are you removing them? It helps to have those for navigating sometimes, etc. I'm thinking about adding a field for species as well in the next template version, since I'm doing a bunch of cultivars too, so that will also make it easier to keep track. --SB_Johnny | talk 14:10, 23 March 2008 (UTC)
- It's really just to keep things sorted... genera in particular need to have included species linked and tracked to keep data up-to-date. See changes to BCP/Salix to see what I mean :).
- BTW, see also Talk:Bloom Clock/Global Key. I'm switching categories on that after the cue updates, but it looks like it will show you matching up to "Early Spring" for March (it matches "Late Winter" here). More on that later :).
Featured content[edit]
"On other existing Wikiversities, I havent found something simillar to so called Featured Content." ([2])
- on de.WV, for example, there is de:Wikiversity:Kandidaten für empfehlenswerte Kurse (candidates for recommendable courses), but there is only one entry in it yet. I guess people are not so interested in it, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:51, 31 March 2008 (UTC)
- Well, let's add this link there, not here.--Juan 15:30, 31 March 2008 (UTC)
Spanish lessons[edit]
Hi, Juan. I read some of the Spanish lessons and I find them very interesting, like the wikibook. If you need any help, I would like to help you. Ah, I nearly forgot to say it: I am Spanish. Akhram 23:15, 9 April 2008 (UTC).
- Well, recently many people said it's a mess. So I would say the best way will be for people not to write their answers on the appropriate page, but to do it on their own user subpage, something like User:name/Spanish. There are two of them. Do you have any idea, I mean your own idea, of how to organise the courses? --Juan 20:06, 10 April 2008 (UTC)
- That option seems very good to me. We can use a template like {{student|Spanish}}, or similar, which interested users can put on their user page. With this template, the student is automatically included in a control category, so administration will be easier. Akhram 00:57, 12 April 2008 (UTC).
On the other hand, I have been looking at the Spanish Language Division, and it seems too quiet and a little confusing. Do you know how many people are really active (as teachers, as students)? Right now the division has a lot of secondary pages with long lists of participants which may be outdated. I am not a foreign language teacher, so I have no idea about the organization of courses and contents... but I can help with:
- Grammar, spelling or exercises correction.
- Resources addition (from Spain, it is very easy to find varied useful material).
- Wiki-encoding support.
- Clerical assistance in general.
Akhram 00:57, 12 April 2008 (UTC).
- Well, I would say no one is active. There were some teachers offering help, but nothing was done, not even at the level of the courses. So it's inactive. Ah, thank you for your support. The biggest problem I see is that I am the only one interested in creating courses, but I don't have enough time to prepare a whole course. It's a lot of work. So, as I said somewhere, I am thinking of making a whole raw course, just the basics for each lesson, and then students/participants will extend it in various ways. Because it is better to have a whole course at 20% than to have the first two lessons perfect and then nothing. I think the design could be similar to this: betawikiversity:Práce na Wikipedii. Anyway, if you like, come to the wikiversity-en channel; I am there quite often.--Juan 21:29, 14 April 2008 (UTC)
Spanish Language Division... again![edit]
Hello again, Juan. After spending a few hours reviewing the existing material, I am starting to get a general idea of the project and its needs. First, I would like to revise the structure of the portal and organise the directory structure, the categories and the duplicated pages. On the other hand, I am checking the participant lists to verify who is still interested in the project, so that they can all be grouped on a single page to make the work easier. So, I would like to meet with you on Skype or IRC to find out what plans you have on the subject. Akhram 03:10, 13 April 2008 (UTC).
Kostival lékařský[edit]
Hi, could a page be created for common comfrey (Symphytum officinale)? Thanks. --Chemgym 14:42, 28 April 2008 (UTC)
syntax error[edit]
Should be working now... not sure why it wasn't before (it used the exact same phrase as the New Region 4 field). --SB_Johnny | talk 08:40, 15)
Plants in central Europe[edit]
Hi Juan, since you regularly request plants from the Czech Republic, I want to invite you to the project "Atlas of Central European Plants" (de:Projekt:Atlas der Blütenpflanzen), where the places plants are found shall be collected. It is in German, but I can add an English introduction, and with the scientific names it is international anyway. It would be really great if you could provide the places where you took your photos for this project. Regards, -- Turnvater Jahn 15:04, 20 May 2008 (UTC)
- Sounds good. But my knowledge of German is zero. So could you place an introduction there, please? Maybe I will study a little bit :-)--Juan 11:01, 23 May 2008 (UTC)
msg @ German Wikiversity[edit]
ping, ----Erkan Yilmaz uses the Wikiversity:Chat (try) PS: Tag a learning project with completion status !! 22:49, 28 May 2008 (UTC)
Juan, did you mean "sending"?[edit]
"I though I am selling somewhere my opinion." or "I though I am sending somewhere my opinion." Robert Elliott 21:20, 6 June 2008 (UTC)
Wikiversity:Nominations for checkuser/Erkan_Yilmaz#Community discussion[edit]
Hello Juan, thanks for the feedback. Would you like to go a step further and provide some more info ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 19:28, 28 June 2008 (UTC)
re:old plants[edit]
Yeah, I do it all the time (especially in winter when I go through my photos from the past year). Easiest way I've found is to just alter the signature for one of your accounts (using Special:Preferences), adding both the signature of the appropriate account and the date you're logging from, make sure raw signatures is selected, and then sign using three tildes rather than four (since the date is already in the signature, and of course is not the current date). Just remember to change it again when you want to log from another account or date, and of course if you're signing on someone's talk page or something :). --SB_Johnny | talk 19:46, 30 June 2008 (UTC)
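The preference tweak described above can be sketched concretely; the account name and date below are made-up placeholders, not anyone's real settings:

```wikitext
<!-- "Signature" field in Special:Preferences, with "raw signatures" ticked -->
--[[User:ExampleAccount|ExampleAccount]] 19:46, 30 June 2008 (UTC)
```

Signing with three tildes (~~~) then inserts exactly that text; four tildes (~~~~) would append the current date on top of it, which is why three are used here.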
Uh oh[edit]
Hi Juan. You're using an old template version for new page creates (noticed it on BCP/Nepeta cataria). The new one doesn't include the color, calendar season, type, or pollination prompts (see diff). Careful please when altering templates, since the 'bot updates won't work if they're not standard. --SB_Johnny | talk 08:25, 3 July 2008 (UTC)
- Well, I am using the templates of other existing profiles. I thought that when you change the basic templates, the change then applies to the other profiles too.--Juan 06:09, 6 July 2008 (UTC)
answer[edit]
Hi! This is to answer your last question I hadn't noticed earlier. I don't think there is a notable difference between WV custodian and WP admin in rights or duties. The difference in naming is (according to me) to emphasize the differences between these projects (openness to original research) --Gbaor 10:46, 4 July 2008 (UTC)
- Why are you answering here and not there, where I asked? Anyway, do you have an idea how custodians can emphasize the openness and the difference? Can you give a particular example?--Juan 06:16, 6 July 2008 (UTC)
- Sorry for the late answer (again) :) I answered here because of the advice of Erkan Yilmaz here. About your next question: what I meant is also written here, probably better. The change in the naming of sysops (e.g. custodian, not admin) should also emphasize that something may be forbidden in other projects (e.g. original research on WP) but allowed here at WV --Gbaor 10:58, 24 July 2008 (UTC)
Czech Wikiversity[edit]
ping, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:16, 20 July 2008 (UTC)
Wikiversity Day[edit]
Hi Juan. If you wish to argue about the date of Wikiversity Day, please feel welcome, and please conduct the debate where other Wikiversitarians can join in. I have created a page at Wikiversity:Wikiversity Day/controversy which summarises your arguments as well as the counter-arguments. --McCormack 04:41, 7 August 2008 (UTC)
- You haven't understood. I am talking about a "global" Wikiversity Day for all Wikiversitians (such as those from the Spanish Wikiversity).--Juan 18:29, 7 August 2008 (UTC)
task list[edit]
I've added a couple of items for you to take a look at. Let me know if you have any questions or need any advice. --mikeu talk 14:11, 5 January 2009 (UTC)
Custodian task list[edit]
Place here some tasks for this student:
- How to be a Wikimedia sysop
- Category:Candidates for speedy deletion
Done
- Category:Copy to Wikimedia Commons
IP address[edit]
Juan, please don't indefinitely block ip addresses since one may be shared by many people. Hillgentleman | //\\ |Talk 00:10, 9 January 2009 (UTC)
- You mean this: 207.245.247.196? I don't know what the problem is. Maybe it is the template on its user page saying that it is blocked. If someone would like to use it, they can register (look: "with an expiry time of infinite (anonymous users only, cannot edit own talk page)"). Well, if you browse Google, you can find out that there are multiple problems with this IP: [3], such as e.g. this: [4], where it is believed to be an open proxy. So how should we protect Wikiversity against such vandalism?--Juandev 07:34, 9 January 2009 (UTC)
- So finally, after some discussion on the cs.wp IRC, I have reblocked this IP for 3 days.--Juandev 08:10, 9 January 2009 (UTC)
AC[edit]
Hi Juan, thanks for cleaning up my work a little. Is there a reason it still isn't showing on the main topic page? It only shows when I am logged in, for some reason. Graeme E. Smith 01:10, 11 January 2009 (UTC)
- Now I don't understand you. What is the main topic page? What should be shown there?--Juandev 08:35, 11 January 2009 (UTC)
- Maybe this will help:
Ok, I did that, but it still doesn't change the fact that the whole page I have been editing doesn't show on the Wikiversity Artificial Consciousness topic page. Whenever I jump to that page outside my account, the page reverts back to the original page with nothing but the Welcome message. It's probably something I did wrong, being a bit naive about wiki technology, but it's confusing me because I am never sure which version of my main page is being edited. Does this make more sense? (The preceding unsigned comment was added by Graeme E. Smith (talk • contribs) )
- I am sorry, my friend, for not replying to you. I haven't been here for a couple of days, as I started a new position as a scientist at the Agricultural Museum. Let me show you this:
- You can see there are two pages. While there is nothing on the first one, the other one is full of your edits. That is not a problem; here you can see how I repaired it: [5]. Does it make sense to you now? There are basically two pages. Nothing is automatic on Wikiversity. I hope we have solved your problem. --Juandev 23:54, 24 January 2009 (UTC)
Oh, sorry about not signing that message... I keep forgetting. I seem to have access to my work through the Wikiversity page now, which might mean that it was a problem with my browser. Sorry for the confusion. Uh, I just started experimenting with sub-pages, and I have a problem that I don't know how to get around. When I anchor a subpage, it types the whole page/subpage name in my document. Is there some way to suppress the extra information in order to keep the meaning more clear? --Graeme E. Smith 01:32, 24 January 2009 (UTC)
- Well, sometimes it helps to clear your cache; in Internet Explorer, press CTRL+F5. For other browsers I can look up how to do it if you need. But in this case I don't understand you. I don't know, for example, what "to anchor a subpage" means. In a wiki environment, you can, let's say, anchor a page in two ways:
- place a link to the page (e.g. [[User talk:Juandev/Not existing page|This page doesnt exist]], which will give you This page doesnt exist; after clicking it you will get the page, which doesn't exist yet and is placed at User talk:Juandev/Not existing page)
- or you can place a redirect there (e.g. #REDIRECT [[User talk:Juandev/Not existing page]]); but a redirect only works when it is placed alone on the page, and in fact it is not designed to create subpages.
- So, to answer you: if you would like to link a subpage from your document without the extra information, you would place there something like [[Artificial Consciousness/Documents|Documents]], which results in: Documents. It's like [[w:cs:Pes|pes]] resulting in pes :-) --Juandev 23:54, 24 January 2009 (UTC)
- Ok, I have rebuilt my contributions a bit, using that technique and it works great! the problem I have now is that I am trying to figure out how to embed a file from wiki commons the file involved is File:Consciousness phenomenal-functional(en).png I'd like to edit it a bit, and reference it from my file but I haven't figured out how to borrow it to edit it, or how to display it once I have an edited copy.--Graeme E. Smith 04:38, 28 January 2009 (UTC)
Study this: w:Wikipedia:Picture tutorial. In short: you find a picture on Commons, and this [[File:Consciousness phenomenal-functional (en).png|thumb|This is a [[w:Picture|picture]]]] results in this:
But there are also other parameters you may use, such as px, frame, border and so on. Have fun studying.--Juan de Vojníkov 06:21, 28 January 2009 (UTC)
- Sorry to bother you again but I am building a portal at Portal: GreySmith Institute and I want to implement 4 forums and a repository, I have been informed that I should build the repository under the CC.whatever copyright instead of GFDL. So far looking at the Research Forum and the WikiCollequium I see that they are very similar to normal wiki pages. However I have a question, how do I place the Table of Contents? None of my pages seem to have it visible, yet I haven't designated a NOTOC line except as part of the Portal template. Further, is it possible to designate under what copyright a page should be opened?--Graeme E. Smith 22:40, 1 February 2009 (UTC)
- No, that's fine. Those comments were mine, of course. Well, as for the licence: if something is placed on Wikiversity, it is GFDL by default, but the authors of the edits can grant additional licences. I was recommending the other licence because I probably misunderstood what you want to do. But I think the licence is not a problem for now. The only thing I can recommend is to double-license your contributions. It is usually done via a template, like this Dual license: {{Dual license: GFDL with CC-BY-SA 3.0 }} (or {{self}}), or via a simple phrase placed on your user page. Any other associated documents could be just CC. Next time I can explain why, but back to your questions. On the forums: yes, forums are normal wiki pages, so nothing special. They could be better (e.g. a new namespace (example), or a new extension (example)), but that needs time and community agreement. Table of contents? Well, it only works if there are sections made via = Heading 1 =, == Heading 2 ==, === Heading 3 ===. So for the portals you are creating it won't work; you would probably have to create one manually via a new template. --Juan de Vojníkov 23:26, 1 February 2009 (UTC)
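As a side note on the table-of-contents point above: MediaWiki builds the TOC only from section headings, so a sketch of a page that does get one automatically might look like:

```wikitext
== First section ==
Some text.

== Second section ==
=== A subsection ===
More text.
```

A portal assembled from templates and tables has no such headings, so no TOC is generated, and a hand-made list of links (e.g. in its own template) is the usual workaround.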
Ok, I'll hold off on the paper storage location for now, until I understand the license options better, I think I can play with the idea of a forum now, and see where it gets me.--Graeme E. Smith 00:04, 2 February 2009 (UTC)
- Well, the explanation is easy. Both licence families are free. The GFDL was originally designed for software documentation, and when Wikipedia was launched as the first project of the WMF (Wikiversity is the last), all content started to be distributed under the GFDL. At that time there were no other free licences besides PD. PD seemed too open to the founding people, and moreover it varies country by country, while the GFDL is more uniform. Then, a few years later, the CC licences were created, and people noticed that Wikimedia content would be better off under CC licences. Why? The GFDL was not designed for this kind of information sharing. The major problem is that when you want to distribute, and/or modify and distribute, data formerly licensed as GFDL, about 5 A4 pages of licence text have to accompany every copy, together with the full history. That's why we keep this kind of database history, and that's why we import from other projects instead of simply copy-pasting. But imagine you have a picture licensed under the GFDL and you would like to distribute it in a printed version. It is nearly impossible, because you have to attach the full licence and its history. Look at this cartoon:
- But if you use a CC licence, you can distribute, and/or modify and distribute, the data just by writing down the major author(s) and linking the source and the licence itself. It doesn't matter whether it is an on-line or a printed version. So right now Wikimedians are mostly double-licensing their files (which go to Wikimedia Commons) and their contributions. The official policy of the WMF is then to migrate from GFDL to CC. My personal recommendation would be to license all separate files just CC-BY-SA 3.0, and to double-license all contributions to the project as GFDL (you have to, as it is the default licence of the project) and CC-BY-SA 3.0. Any questions?--Juan de Vojníkov 01:35, 2 February 2009 (UTC)
OK, let me get this straight. As an original content supplier, as long as I am creating a project out of thin air, I am OK. The problem with the GFDL is that it requires the whole project to be kept together, plus a history and a referral to the license in any copies. CC etc. gives creative control, which allows others to borrow my work, break it up into useful chunks, and make use of the chunks as long as the file lists the authors of the chunks, who was first, etc. So if I double license, I can expect my picture of Abe Lincoln to show up with a mustache? But at least it will list that I wasn't the person who added the mustache?--Graeme E. Smith 15:49, 2 February 2009 (UTC)
- Yes, those are free/open licenses (GFDL, CC, PD). It doesn't matter which you choose; all of them allow other people to modify your work (so e.g. to add a mustache to your Abe Lincoln picture). Well, of course, for pictures they usually say: Abe Lincoln mustache.jpg (Author Pete Grant), a compilation from abe lincoln.jpg (original author Graeme E. Smith). Look:
- When you double-license, people may choose under which conditions they will use the image. That's why projects in general have one license; several licenses for one project would make a big chaos. Here is a nice example of a modification: [11]. Within the Wikimedia community, you needn't be afraid that someone will break the rules. Sometimes Wikimedians also call on the rest of the world not to break the rules when using data from the projects.--Juan de Vojníkov 19:44, 2 February 2009 (UTC)
Ok, I have some Forums implemented now. Take a look: GreySmith Institute. I decided to implement them under CC-BY-SA 3.0 so that they can be used outside the Institute. Essentially, by posting on them you agree that the content is licensed under CC rules. I used the long form for the names, but "nicknamed" them inside the forum, so AC Forum becomes Artificial Consciousness Forum etc. (just so it doesn't get mixed up with AC current)--Graeme E. Smith 21:03, 2 February 2009 (UTC)
- Well, I have just spoken to a license specialist, and he told me that you can exclude some pages from the official license of the project. So your forums will now be double-licensed. Licenses are not fun. If you are interested in more details about this, I can arrange a meeting with him on the IRC channel.--Juan de Vojníkov 22:31, 2 February 2009 (UTC)
Better hold off on that, until I load an IRC client, I haven't installed one since I was forced to upgrade in order to connect to my highspeed-light modem. I'll log off. Load one and get back to you once I know it works.--Graeme E. Smith 22:42, 2 February 2009 (UTC)
OK, I have an evaluation copy of an IRC client running in the background let me know what channel to join, and who to ask for--Graeme E. Smith 23:05, 2 February 2009 (UTC)
- OK, we are on freenode.net and you can reach both channels, i.e. #wikiversity and #wikiversity-en. Sorry for the delay, I am answering some questions for JWS :-) --Juan de Vojníkov 23:16, 2 February 2009 (UTC)
My IRC client logged me onto Quake.net and I am not getting a response, I logged on as GreyBeard for this session only.
- Are you skilled in IRC? You should change from quake.net to freenode.net.--Juan de Vojníkov 23:54, 2 February 2009 (UTC)
So, as I understand it, Moulton, like me, comes from a dynamic connection? I keep getting welcomed when I forget to log on. It's nothing I have any control over; it's the router at the phone company, which resets my IP address constantly. It's supposed to reduce the chance of my account being hacked by constantly changing the IP address around. I don't know how effective it is, but most routers have the capability to reset IP addresses periodically. It's part of DHCP, I think.
Did you think I was Moulton for some reason?
Are you sure I am not now, and that is why you don't drop by as often? I've begun to think I have bad breath or something.--Graeme E. Smith 00:46, 5 February 2009 (UTC)
- Uh, I think this is a time zone issue... I don't think Juan thinks that :-). --SB_Johnny talk 00:58, 5 February 2009 (UTC)
- Errr, I don't know why you think so. I don't think you are Moulton. Of course, it happens that people forget to log in and then edit from different IP addresses. That's not a problem. By the way, I don't understand why SB Johnny is talking about a time-zone difference, but yes: you guys in the US could be UTC -5, -6, -7 or -8, and I am UTC +1. That means a 6 to 9 hour difference, so when it's night in the US I have day, and vice versa.--Juan de Vojníkov 09:03, 5 February 2009 (UTC)
Ok, Now I have a problem, I just put a short page into the standard namespace describing GreySmith Institute, and someone slapped a category redirect template on it somehow. It claims that Artificial Consciousness is a Computer Science topic, and while some of the pages are listed in the Computing Science category, There are enough pages that fall outside the category for me to be a bit miffed. I think that someone is toying with me. Especially since the redirect template is supposed to be used only in a category directory and GreySmith Institute, as far as I know is not a category yet.--Graeme E. Smith 03:05, 5 February 2009 (UTC)
- Heh, I don't see anything like this. Ask in the Colloquium; that is the place for questions.--Juan de Vojníkov 09:03, 5 February 2009 (UTC)
Ok, I think I figured it out: they put the redirect on GreySmith Institute itself, and I used the wrong type of link on the page, so it showed on the page instead of just in the category system. I'm getting too used to having to write curly brackets, I guess.--Graeme E. Smith 14:50, 5 February 2009 (UTC)
- Yes. This is a template {{something}} (the template page is in the Template: namespace, and it can have several functions).--Juan de Vojníkov 18:54, 5 February 2009 (UTC)
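To illustrate the remark above, a hypothetical template and its transclusion (the template name "Hello" and its parameter are invented for the example):

```wikitext
<!-- Content of the page Template:Hello -->
Hello, {{{name}}}!

<!-- On any other page -->
{{Hello|name=Juan}}
```

The transcluding page then renders as "Hello, Juan!"; the same {{...}} syntax is what produced the category-redirect box described above.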
Juan, sorry to bother you again, but is there any way to export the TOC to a separate file? or to suppress the text, and just show the TOC until someone selects a link to a specific range? Where would I look to find out?--Graeme E. Smith 14:45, 25 March 2009 (UTC)
recommendation for full custodianship[edit]
I have opened a nomination for full custodianship. This begins the five day request for the community to comment. --mikeu talk 11:43, 2 February 2009 (UTC)
- aaa, thx.--Juan de Vojníkov 13:45, 2 February 2009 (UTC)
Category Question[edit]
I can't find documentation about categories. I tried to add some categories and one I wanted to add was "Ancient Egypt". The automated message I received suggested looking at (among others) the wikipedia category. Does this mean that we can file wikiversity pages under wikipedia categories? Or does the suggestion mean that a category should be set up along similar lines? Any help/suggestion would be appreciated. I have been working on the Ancient Egyptian Monuments Project in case you wanted to know in what setting the question came up. Barta 18:18, 5 February 2009 (UTC)
- Yes, it means that categories here can have a similar system to the one on Wikipedia.--Juan de Vojníkov 18:57, 5 February 2009 (UTC)
Congratulations![edit]
Hi Juan, and congratulations about being a full custodian! I'm very happy for you. --AFriedman 15:34, 7 February 2009 (UTC)
Wikiversity:Mentors[edit]
Hi Juan,
I think Wikiversity needs much better resources for recognizing and coordinating mentorship, e.g. research and content development mentorship for User:Graeme E. Smith and course mentorship for User:AdaptiveCampus. I'm trying to fix that by taking the Wikiversity:Mentors program to the next level. Perhaps if good mentorship is recognized, it's more likely to be provided. I've seen you do some very good mentoring, and would appreciate your feedback on the new version of the program. --AFriedman 17:53, 6 March 2009 (UTC)
- Course mentorship could be a difficult issue, as we are still working out ways to do it. But let me see.--Juan de Vojníkov 00:06, 8 March 2009 (UTC)
Hi Juan, Have you gotten around to looking at the mentorship page yet? I've also been communicating with User:Graeme E. Smith about how to provide better automated feedback in courses, and he's been revising the Fundamentals of Neuroscience/Electrical Currents lesson just to let you know, and in case you want to look at that for ideas. --AFriedman 00:11, 26 March 2009 (UTC)
- Yes, I have seen that page. But I don't have much free time at the moment, as I am writing my diploma thesis.--Juan de Vojníkov 19:35, 2 April 2009 (UTC)
Bounce from Wikibooks[edit]
I tried to start a book on Wikibooks, but as some know, they are prejudiced against original research or anything that smacks of it, so they are banning my book. They suggested transwiki-ing it back here and publishing it from here. Any idea how this works? And can you suggest where to find the documentation on publishing from here?--Graeme E. Smith 20:56, 22 April 2009 (UTC)
- Well, yeah. Most of the projects don't allow original research; just Wikiversity does. But logically, all books on Wikibooks have some amount of original research. Well, I suggest writing the book here and then we will see. It can stay here, and if someone else needs it, they can call it via v:Book name, where "v" is the Wikiversity prefix for calling pages from Wikiversity. Then, when the book is ready here, we may move it to Wikibooks if they agree, or we can change it to a different format and serve it from Wikimedia Commons, or it might stay here. There is no problem on WV with original research :-)
Nice to hear that you are still around. I am preparing for state exams at the moment, so I don't have much time to come here.--Juan de Vojníkov 01:29, 30 April 2009 (UTC)
Design/Tigey[edit]
Good evening! I thought I would submit my User page design for your consideration. You can find it at User:Juan de Vojníkov/Design Tigey. I added a note to the bottom with a link to wikEd, an editing tool for Wiki sites that eases viewing the markup. I removed a couple of subpages that I have on my own page for the design since one I have not yet implemented and the other would require a bit of work to port over (the travel template). Let me know if you like it. Tigey 02:16, 5 June 2009 (UTC)
- Wow nice. What about to change colors?--Juan de Vojníkov 04:45, 10 June 2009 (UTC)
- Changing colors is pretty easy. If you have wikEd, it highlights color specifications in the wikitext using the color specified, so if you have color=ff0000, that text would be highlighted red in the edit window. All of the colors in that design are either named colors (i.e. red) or in RGB format (i.e. ff0000, which equals red). If you would like, I can redo the color scheme with the colors you ask for. Tigey 04:54, 13 June 2009 (UTC)
- Heh, I should try that then.--Juan de Vojníkov 08:28, 29 March 2010 (UTC)
sandbox server[edit]
Since the sandboxserver.org went offline I've been discussing a new server with darkcode and SB_Johnny. We're probably going to go ahead and create a limited version soon. BTW, why are you banned from freenode services? --mikeu talk 14:50, 29 December 2009 (UTC)
- Thats a long story.--Juan de Vojníkov 19:17, 5 January 2010 (UTC)
Jimbo's talk page[edit]
Hi Juan! I have seen your entry on Jimbo's talk page. I know that the situation is not easy for anybody here, but I would like to ask you to cool down. Threats with blocking just don't lead anywhere, and I am pretty sure it would make the whole thing a lot worse. --Gbaor 08:17, 18 March 2010 (UTC)
- That is not a threat. That is my notice, which I normally use when someone is violating the project's rules.--Juan de Vojníkov 21:29, 18 March 2010 (UTC)
Mr Wales isn't just anyone. I would like you to resign your administrative powers until the dispute is solved. --Histo 22:17, 18 March 2010 (UTC)
- Yes, Mr. Wales is a Wikimedian like us. I can accept the founder flag (I have now seen it for the first time), but I disagree with the way it was used. It is disrupting the community, and I was elected to protect it. Mr. Wales and/or the WMF could use a better way to inform us that they disagree with the content and practices hosted on the project.--Juan de Vojníkov 22:36, 18 March 2010 (UTC)
Hello![edit]
Hello, Juan de Vojníkov, and welcome to Wikiversity! Need help? Don't go to the famous Wikiversity:Sandbox
User:Geoff Plourde probationary custodianship[edit]
Thanks for mentoring - Geoff is now a probationary custodian. -- Jtneill - Talk - c 07:31, 2 August 2010 (UTC)
- OK. Thx!--Juan de Vojníkov 08:45, 2 August 2010 (UTC)
- Would you please add yourself to Wikiversity:List of custodian mentors? -- Jtneill - Talk - c 11:01, 3 August 2010 (UTC)
- Done.--Juan de Vojníkov 11:39, 3 August 2010 (UTC)
Category[edit]
Hi! Thank you for your message. You can add the category, because I have a lot of work today... Thanks... --Bermanya 20:23, 2 August 2010 (UTC)
New tests[edit]
My Name is Yellow and Should be Juan.? What are you testing? –SJ+> 02:39, 12 August 2010 (UTC)
- Hi, as I commented here: User talk:Geoff Plourde/mentoring 2010#NS Special is really special, I am testing whether triple and multiple redirects will be displayed in special:DoubleRedirects, and whether we will have a tool to find them. The same with cyclic redirects. When the experiment ends, these pages will be deleted.--Juan de Vojníkov 04:17, 12 August 2010 (UTC)
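For anyone following along, a minimal double redirect of the kind being tested might look like this (the page titles are placeholders):

```wikitext
<!-- Content of the page "Page A" -->
#REDIRECT [[Page B]]

<!-- Content of the page "Page B" -->
#REDIRECT [[Page C]]
```

MediaWiki follows only the first hop, so a reader opening Page A lands on Page B, not Page C; Special:DoubleRedirects exists to list such chains so they can be repointed.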
Activity[edit]
Juan,
I'm still around. A majority of my time has been consumed by college, so I'm not as active as I was during the summer.
Geoff
Are you recommending?[edit]
Heya Juan,
Noticed this this morning... are you going to add a recommendation? He's actually a bit overdue(!), but you need to actually make a recommendation (pro or con) to start the discussion period. --SB_Johnny talk 19:53, 15 October 2010 (UTC)
- Well, I don't know. I spoke with him and asked him to find another custodian, but he seems not to have time. And he was not active enough for me to recommend or not recommend him.--Juan de Vojníkov 12:54, 25 October 2010 (UTC)
Good luck for your presentation[edit]
Hi Juan - I hope your Wikiversity presentation goes really well - and maybe some new contributors will be inspired! Maybe it can be recorded? Sincerely, James. 08:36, 6 August 2011 (UTC)
PLE feedback[edit]
Hi Juan, since you were involved here from the beginning with regard to PLEs, would you mind sharing your experiences? Thanks, ----Erkan Yilmaz uses the Wikiversity:Chat + Identi.ca 05:16, 13 February 2012 (UTC)
Deleted Content - Unable to log in Account[edit]
Hey, I was writing my wikipistoarticle yesterday and clicked the "Save" button when something really bad happened. I don't know why, but the system showed me an error and logged me out. I tried to log in with my password and failed. I thought that I might have forgotten it, so I sent the "remember" mail, got it, entered the password from the mail, and was then asked to set up a new one. I tried, but it didn't work at all. I'm still unable to log in, no matter how hard I try. All my writing on our Wikiversity page seems to be gone, which really upsets me. I tried again today. One time the system told me that I can't get a reminder mail because I have not registered with a mail address; the next time it worked and I got a mail with a password, but this doesn't work either. What can I do? I'm quite desperate, because a month's worth of work might be gone. Please help me.
My username was "Wintertagtraum" my address is "nora.steinbach@edu.uni-graz.at"
Thank you in advance 130.232.214.10 (discuss) 07:10, 15 March 2013 (UTC)Wintertagtraum
Custodian mentoring[edit]
Hello Juan, can you please mentor me for custodianship. Thank you! --Draubb (discuss • contribs) 19:30, 29 May 2013 (UTC)
Account renamed[edit]
Hi Juandev. In case you missed it, see here. Trijnstel (discuss • contribs) 14:04, 29 Septemberuan. To make sure that both of you can use all Wikimedia projects in future, we have reserved the name Juan:40, 17 March 2015 (UTC)
Thank you[edit]
Thanks for the "welcome back" message. --JWSchmidt (discuss • contribs) 01:08, 12 May 2015 (UTC)
Draft namespace[edit]
Hi Juandev!
Welcome back!
If you like, you can record an opinion and vote on the Draft namespace. The discussion is here. --Marshallsumter (discuss • contribs) 19:47, 12 March 2018 (UTC) | https://en.wikiversity.org/wiki/User_talk:Juandev | CC-MAIN-2020-29 | refinedweb | 8,121 | 72.87 |
xmlns namespace for outlook
I'm trying to add xmlns:v and xmlns:o to my template with the following code.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns:
However, when you look at the actual page, you can only see
<html xmlns="">. Is
there any way I can add these to my newsletter template?
Cascade version is v7.12.5 and I tried
#protect and
<o:root/> trick.
Thanks,
Han
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
1 Posted by Ryan Griffith on 29 Jun, 2015 06:24 PM
Hi Han,
It looks like there is perhaps one of the routines during page rendering is stripping out the additional namespaces, since they appear to be left alone when adding them to the Template.
When you have a moment, I believe I was able to get things to work using the following:
To summarize, the first
<html>tag will be removed on render due to the
#cascade-skiptags and the contents of the
#protect-topcode section will be promoted to the top of the rendered source.
Please let me know if you have any questions.
Thanks!
2 Posted by hohan on 29 Jun, 2015 06:55 PM
Hi Ryan,
I tried your code, but it still gives me the same stripped out code <html
xmlns=">.
Han
3 Posted by Ryan Griffith on 30 Jun, 2015 02:02 PM
Hi Han,
Definitely interesting, I was able to get it this to work in my local 7.12.5 instance. I am attaching the contents of my Template so you can see the snippet I provided in action.
Also, are you using a page, config set or template level XSLT Format?
Please let me know if you have any questions.
Thanks!
4 Posted by hohan on 30 Jun, 2015 03:31 PM
Hi Ryan,
It magically works.
Thanks,
Han
5 Posted by Ryan Griffith on 30 Jun, 2015 04:06 PM
Thank you for following up, Han, I am glad to hear the proposed tweak to your Template did the trick.
I'm going to go ahead and close this discussion, please feel free to comment or reply to re-open if you have any additional questions.
Have a great day!
Ryan Griffith closed this discussion on 30 Jun, 2015 04:06 PM. | https://help-archives.hannonhill.com/discussions/general/21868-xmlns-namespace-for-outlook | CC-MAIN-2021-49 | refinedweb | 406 | 78.28 |
NAMEgethostname, sethostname - get/set hostname
SYNOPSIS#include <unistd.h>
int gethostname(char *name, size_t
len);
int sethostname(const char *name, size_t len);
gethostname():
|| /*These system calls are used to access or to change the system hostname. More precisely, they operate on the hostname associated with the calling process's UTS namespace. says that if such truncation occurs, then it is unspecified whether the returned buffer includes a terminating null byte.
RETURN VALUEOn success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS
- EFAULT
- name is an invalid address.
- EINVAL
- len is negative or, for sethostname(), len is larger than the maximum allowed size.
- ENAMETOOLONG
- (glibc gethostname()) len is smaller than the actual size. (Before version 2.1, glibc uses EINVAL for this case.)
- EPERM
- For sethostname(), the caller did not have the CAP_SYS_ADMIN capability differencesT. | https://man.archlinux.org/man/sethostname.2.en | CC-MAIN-2021-10 | refinedweb | 140 | 51.75 |
Wednesday, October 10, 2012
.
The guide can be downloaded from:
Ive also included this on the technet wiki page for Windows Azure Service Bus resources:
Posted On Wednesday, October 10, 2012 11:55 AM | Comments (0) |
Filed Under [
BizTalk
Azure Service Bus
]
Recently I've been working with the WCF routing service and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router.
One of the things which was a pain was the number of routing rules I needed to create because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file rather than have to type this by hand.
To show how this works download the spreadsheet from the following url:
In the spreadsheet you will see that the squares in green are the ones which you need to amend. In the below picture you can see that you specify a prefix and suffix for the filter name. The core namespace from the web service your generating routing rules for and the WCF endpoint name which you want to route to.
In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will workout what the full SOAP Action would be then the name you will use for that filter in your WCF Routing filters.
In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file.
In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint to forward the message to the required destination.
Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF Routing configuration if you had a large number of routing rules. If you had additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.
Posted On Wednesday, October 10, 2012 11:01 AM | Comments (0) |
Filed Under [
BizTalk
Azure Service Bus
] | http://geekswithblogs.net/michaelstephenson/archive/2012/10/10.aspx | CC-MAIN-2014-15 | refinedweb | 472 | 62.11 |
So we currently have two offices, one in TN and one in FL. Both have a DC installed.
We recently started deploying Trend Micro AV through GP, which is a 200 MB file. We placed the file on DC1 in our TN office and it works great locally.
The problem is remotely, it takes forever for the machine to login, I assume because it is pulling the 200MB file across the WAN.
Is there a better way to do this in the GP vs setting the install path to \dc1\msifile.msi? If I add it it to a replicated folder on the DC (Netlogon folder), and then use the install path \domain.local\netlogon will each client pc be smart enough to know which DC to hit?
Hope this makes sense.
I don't like using netlogon/sysvol for large files like this. It likely will be using the older, less efficient NTFRS method of replication. On top of that, it's where your GPO templates and other scripts live. I like to have a separate share for software deployment, so that it doesn't spiral out of control in size.
You should have sites defined in Active Directory Sites and Services for each office anyway, so that your clients are (almost) always guaranteed to be authenticating against the local DC. After you do this, you can set up DFS in a way that it will force clients to use the file server located in their site.
What I do in this case is create a DFS namespace called "Deployment" or something similar. It will be accessed via \\yourdomain\deployment. Then you can use DFS-R to replicate anything in one Deployment share to the other. This will allow you to have mirrored deployment shares at both sites while obscuring the absolute path. In the DFS namespace settings, you can tell client machines to either connect to whatever server responds fastest or whatever server is available in the site that the client is in. In this case, you want the latter.
My suggestion would be to use a site level GPO to target the install source locally.
Site 1 GPO source would be:
\\site1server\share
Site 2 GPO source would be:
\\site2server\share
By targeting the source to be a local share via a site level GPO you only need to copy the file once to each site level share, it doesn't need to be replicated, and clients will install it from the local share.
This answer assumes that you've got Active Directory Sites and Services configured appropriately. If you don't, you should... for more reasons than just software
active | http://serverfault.com/questions/310862/deploying-large-applications-across-multiple-sites-via-gp | crawl-003 | refinedweb | 444 | 71.65 |
14 March 2006 16:50 [Source: ICIS news]
HOUSTON (ICIS news)--April nitration-grade toluene spot prices in the US surged to a five-month high on Tuesday morning due to strong premium gasoline cash prices in the US Gulf (USG), aromatics traders said.
April n-grade toluene spot business was done Tuesday morning at $2.47/gal free on board (FOB) USG ($750/tonne or Euro630/tonne), twice at $2.50/gal FOB USG and $2.52/gal FOB USG, according to global chemical market intelligence service ICIS pricing.
Prices of the four spot deals were the highest since the week ended ?xml:namespace>
US Gulf premium, or V-grade, gasoline cash prices surged past $2/gal on Monday for the first time since early October 2005, when Hurricane Rita caused many US Gulf refineries to stop production.
Higher V-grade gasoline prices, as well as the regular-to-premium differential holding above 20 cents/gal, caused gasoline toluene spot prices to hit a 5-month high. Toluene can be used a gasoline octane enhancer.
If the jump in toluene spot prices is sustained into the second half of March, toluene truck/railcar prices could see increases. No increases had been announced by any suppliers as of early Tuesday.
ExxonMobil, Flint Hills Resources, and Citgo are major suppliers of toluene for the truck and railcar | http://www.icis.com/Articles/2006/03/14/1048921/us-toluene-spot-prices-at-5-month-high.html | CC-MAIN-2014-41 | refinedweb | 226 | 71.14 |
Docker is an open source platform for development and operation teams to transport, build and run distributed applications in an effective manner. It can combine an application along with its dependencies in a virtual container that can run on a Linux server by providing abstraction and automation.
The concept behind using containers is to isolate resources from physical hosts, limit the services and provision them to have a private view of the operating system with their own file system structure, process ID space and network interfaces. A single kernel can have multiple containers but each container can use only a defined amount of I/O, CPU and memory.
This leads to an increase in portability and flexibility for the applications that can run on cloud deployment models such as private and public clouds, and also on bare metal and so on. Docker consists of:
- Docker Engine: This is a portable packaging tool which has lightweight application runtime.
- Docker Hub: This is a cloud service for application sharing and workflow automation.
Docker uses resource isolation features of the kernel such as kernel namespaces and cgroups to allow independent containers to run within a single instance, thereby avoiding the overhead of starting virtual machines.
The Linux kernels namespace provides isolation to the applications view of the operating environment, which includes the network, process trees, mounted file systems and user IDs. The kernels cgroups provides isolation from the network, memory, CPU and block I/O. In some versions of Docker, the libcontainer library directly uses the facilities for virtualisation provided by the kernel. It also uses abstracted virtualisation interfaces via LXC, system–nspawn and libvirt.
Docker enables applications to be assembled from components and eliminates the friction between the production environment, QA and development. By using Docker, we can create and manage containers; it enables us to create highly distributed systems by allowing workers tasks, multiple applications and other processes. Essentially, it provides a Platform as a Service (PaaS) style of deployment and, hence, enables IT to ship faster and run the same application unchanged on the data centre, VMS, the cloud and on laptops.
Docker can be integrated into various infrastructure automation or configuration management tools (such as Chef, CFEngine, Puppet and Salt), in continuous integration tools, in cloud service providers such as Amazon Web Services, private cloud products like OpenStack Nova, and so on.
Docker installation
Docker can be installed on various operating systems such as Microsoft Windows, CentOS, Fedora, SUSE, Amazon EC2, Ubuntu, and so on. We will be covering installation for three operating systemsMicrosoft Windows, CentOS and Amazon EC2.
On Windows
As the Docker engine uses Linux kernel features, to execute it on Windows, you need to use a virtual machine. A Windows Docker client is used to control the virtualised Docker engine to execute, manage and build Docker containers.
Docker has been tested on Windows 7.1 and 8, apart from other versions. The processor needs to support hardware virtualisation. To make the process easier, an application called Boot2Docker has been designed that installs the virtual machine and runs the Docker.
1. First, download the latest version of Docker for the Windows operating system.
2. Follow the steps to complete the installation, which will result in the installation of Boot2Docker, Boot2Docker Linux ISO, VirtualBox, Boot2Docker management tool and MSYS-git.
3. After this, run Boot2Docker to start the shell script or execute Program Files > Boot2Docker. Start script will ask you to enter an ssh key passphrase or simply press [Enter]. This script will connect to a shell session in the virtual machine. According to requirements, it will initialise a new VM virtual machine and start it.
To upgrade, download the latest version of Docker for the Windows operating system, and run the installer to update the Boot2Docker management tool. To upgrade the existing virtual machine, open a terminal and perform the following steps:
boot2docker stop boot2docker download boot2docker start
On CentOS
Docker is available by default in the CentOS-Extras repository on CentOS 7. Hence, to install it, run sudo yum install docker. In Centos-6.5, the Docker package is part of Extra Packages for the Enterprise Linux (EPEL) repository.
On Amazon EC2
Create an AWS account. To install Docker on AWS EC2, use Amazon Linux that includes the Docker packages in its repository, or Standard Ubuntu Installation.
Choose an Amazon-provided AMI and launch the Create Instance Wizard menu on AWS Console. In the Quick Start menu, select the Amazon-provided AMI, use the default t2.micro instance, configure the Instance Details button, select standard choices where default values can be kept, and wait for the Amazon Linux instance to run the SSH instance to install Docker: ssh -i <path to your private key> ec2-user@<your public IP address>. Connect to the instance, type sudo yum install -y docker; sudo service docker start to install and start Docker, and then set up Security Group to allow SSH.
Why Docker is considered hot in IT
Docker is widely used by developer teams, systems administrators and QA teams in different environments such as development, testing, pre-production and production, for various reasons. Some of these are:
- Docker can run everywhere regardless of its kernel version.
- It can run on the host or in a container.
- It consists of its own process space, a network interface and can run resources as the root.
- It can share the kernel with the host.
- Docker containers, and their workflow, help developers, sysadmins, QA teams and release engineers to work together to get code into production.
- It is easy to create new containers, enable rapid iteration of applications and increase the visibility of changes.
- Containers are light in weight and quick; they also consist of sub-second launch times, which reduces the time of the development phase, testing and production.
- It can run almost everywhere such as on desktops, virtual machines, physical servers, data centres, and in cloud deployment models such as a private or public cloud.
- It can run on any platform; it can be easily moved from one application environment to another, and vice versa.
- As containers are light in weight, scaling is very easy, i.e., as per needs, they can be launched anytime and can be shut down when not in use.
- Docker containers do not require a hypervisor, so we can get more value out of every server and can potentially reduce the expenditure on equipment and licences.
- As Dockers speed is high, we can make small changes as and when required; moreover, small changes reduce risks and enable more uptime.
Comparison between Docker and virtual machines (VMs)
The significant difference between containers such as Docker and VMs is that the hypervisor abstracts an entire device while containers abstract the operating system kernel. One thing hypervisors can do that containers cant is use different operating systems or kernels. So, for example, you can use Amazons public cloud or the VMware private cloud to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server in parallel. With Docker, containers must use the same operating system and kernel. On the other hand, if all you want to do is get the most server application instances running on the least amount of hardware, you dont need to worry about running multiple operating system VMs. If different copies of the same application are what you want, then its better to consider containers.
Containers are smaller in size than VMs, the starting process is much faster and they have enhanced performance. However, this comes at the expense of less isolation and greater compatibility requirements due to sharing the hosts kernel. Virtual machines have a full OS with their own device drivers, memory management, etc. Containers share the hosts OS and are therefore lighter in weight.
References
[1]
[2]
[3]
[4]
[5] so-darn-popular/
[6]
[7]
[8]
[9]
[10]
[11]
[12]
Connect With Us | http://opensourceforu.com/2015/06/an-introduction-to-docker/ | CC-MAIN-2016-44 | refinedweb | 1,314 | 51.28 |
How to test methods in Go
Radliński Ignacy
・1 min read
Hello, last week I've started learning Golang. Following the Tour of Go I've implemented a function to calculate square root of a given float64 number.
import "fmt" type ErrNegativeSqrt float64 func (e ErrNegativeSqrt) Error() string { return fmt.Sprintf("cannot Sqrt negative number: %g\n", float64(e)) } // Sqrt calculates the square root of a number. // If given negative number it returns an error. func Sqrt(x float64) (float64, error) { if x < 0 { return 0, ErrNegativeSqrt(x) } else { z := float64(x) for i := 0; i < 100; i++ { z -= (z*z - x) / (2 * z) } return z, nil } }
To test if it's working as expected I've wrote this test:
import ( "math" "testing" ) func TestSqrt(t *testing.T) { var tests = map[float64]float64{ 3: math.Sqrt(3), 2: math.Sqrt(2), 1: math.Sqrt(1), 0: math.Sqrt(0), 4: math.Sqrt(4), 5: math.Sqrt(5), -5: 0, -1: 0, } precision := 0.00000001 for key, expectedVal := range tests { val, _ := Sqrt(key) if val < expectedVal-precision || val > expectedVal+precision { t.Error( "For", key, "expected", expectedVal, "got", val, ) } } }
My question is How do I write a test for the
Error method of
ErrNegativeSqrt type which I wrote so that
ErrNegativeSqrt can implement the
error interface?
Thanks in advance. ❤️
PS. This is my first dev.to post! 🙌
PPS. Feel free to checkout my repo
radlinskii / go-playground
Repository created to have fun with Golang.
go-playground
Repository created to have fun with Golang.
Classic DEV Post from Nov 1 '19
Do a type assertion to check the error matches the type you hope for.
If you are doing table driven tests in Go it's better to run each subtest separately with t.Run. Check it here - dev.to/plutov/table-driven-tests-i... | https://dev.to/radlinskii/how-to-test-methods-in-go-4n8b | CC-MAIN-2020-16 | refinedweb | 303 | 67.96 |
When this series was initially proposed, my thoughts immediately moved to “what have we not heard about” or at least not too much about? There are so many positive aspects of Windows Server that we dwell on, I wanted to pick something we don’t hear enough about: Distributed File System and BranchCache
Although these are not completely related topics, they are both technologies designed to support end user access to file resources in a distributed organization; an organization with more than one physical office. A few years back, we used to cite that an organization with more than 50 employees had greater than a 52% probability of more than one physical office. How are we supporting access to network resources for those employees that are working out of the “remote offices?”
What is a Distributed File System?
According to the TechNet documentation on Windows Server 2012 R2 Distributed File System (DFS) found at, DFS is broken into two different components: DFS Namespace and DFS Replication.
The DFS Namespace “Enables you to group shared folders that are located on different servers into one or more logically structured namespaces.” While the DFS Replication “Enables you to efficiently replicate folders across multiple servers and sites.”
If DFS Replication is being used, an initial, seeded copy of the data can be done. From that point on, any changes made to the replica or the main office will result in a sync’ing of only the changed blocks using an optimized Remote Differential Compression transfer.
This powerful pair of services allows a network administrator to set up a virtualized file system (DFS) that appears the same regardless of the data location or end user location. One advantage is that documents can be kept on a local server, to improve access performance, while having changes replicated to the other instances of the same file found on other geographically distant servers. Another benefit is the path they use to access one of the resources (UNC) is the same regardless of where the physical resources are actually hosted. this makes access to network resources easier to share. And lastly, there is some native fault tolerance; if a local resource is unavailable because of an unforeseen issue, the DFS reference will fail over to the next best hosting location.
What is a BranchCache?
According to the TechNet documentation found at, “BranchCache is a wide area network (WAN) bandwidth optimization technology”. Accessed files are cached locally to improve bandwidth optimization and performance. There are two modes of BranchCache: distributed cache mode and hosted cache mode.
In distributed cache mode, the local cache is kept on a workstation running Windows 7 or sooner. While hosted cache mode the local cache is kept on a server running Windows Server 2008 R2 or sooner. Obviously the distributed cache mode is more economical because it uses a workstation. This makes distributed cache mode a desirable solution for very small branches where a server ROI would be difficult to justify. However, the cache in distributed cache mode will disappear if the caching workstation is turned off or removed from the local network.
Once configured, typical operation of BrachCache is initiated by a client computer attempting to access a data file found at the main office, this is the “request” operation. A scan is performed of the local cache server to “locate” the document. If found, that cached copy is used. If not found, the document is “retrieved” from the main office and stored in the local “cache”.
DFS vs. BranchCache
So, what is the difference between DFS and BranchCache if I’m looking at BranchCache hosted cache mode?
In DFS, you define specific locations for the data sources and where the replicas are kept. In addition, you can specify exactly how much data will be pre-cached. When the user attempts to access locally cached data, the information is already there. If view this as being a more structured solution.
In BranchCache hosted cache mode, the data cached is often just cached on demand the first time a particular data file is accessed. I view this as being a more organic solution.
In Summary
Remember, even if you are starting the shift to the Cloud, that doesn’t negate the need for on-premises technologies. Parts of DFS and Branch Cache are ideally suited for a hybrid deployment model supporting a highly mobile user community. There is additional information provided in each of the previous TechNet articles about things to watch out for when integrating with Azure.
Hopefully these thoughts have inspired you to think of other ways you can leverage Windows Server in your customer solutions.
Please check out our whole series on Windows Server 2012 R2 at
Welcome! To close out the fantastic year of 2015, we are going to be doing a quick blog series on Windows
Rob Waggoner When we talk about Windows Clustering, we are always talking about the need for High Availability | https://blogs.technet.microsoft.com/uspartner_ts2team/2015/12/15/leading-your-customers-to-modern-it-with-windows-server-2012-r2-supporting-remote-offices/ | CC-MAIN-2018-17 | refinedweb | 824 | 51.18 |
15 June 2012 14:31 [Source: ICIS news]
LONDON (ICIS)--Producers and consumers negotiating the European methanol third quarter contract have indicated that the price is likely to settle between a rollover and a slight increase.
Speaking at the sidelines of the International Methanol Producers and Consumers Association (IMPCA) meeting in ?xml:namespace>
Recent widespread commodity and energy price falls have changed the market picture somewhat from around a month ago, when there was little doubt that the contract price would increase.
Now, these predictions have been revised down, and although there remains much uncertainty, most players expect to see either a rollover or an increase of lesser magnitude than in the second quarter (€20/tonne).
“They [producers] are listening, at least, to the idea of a rollover. If things had remained as they were a month ago, I don’t think this would happen,” a buyer said.
Producers believe than an increase is still justified, citing the high European spot price throughout the second quarter, the weaker euro, the need to maintain global price parity and the lost Iranian supplies resulting from US and EU trade sanctions.
Yet buyers, while accepting the factors above, point to the dire state of not just the European but the global economy, and the effect this will have of eroding demand.
One key area of disagreement is Chinese demand. Buyers routinely point out that the Chinese methanol market is sluggish, by normal standards, as prices there have fallen even with a sharp reduction in imports from
A producer said reports of lower Chinese demand need to be taken in the context of its extraordinarily high demand previously.
“
Nevertheless, prices globally have decreased significantly over the past few weeks (although the European market has remained more or less stable), and buyers insist this is grounds for a rollover at least.
Some have even suggested that if spot prices continue falling, a contract decrease could be on the cards.
The second quarter methanol contract was settled at €340/tonne ($430/tonne) | http://www.icis.com/Articles/2012/06/15/9570196/rollover-slight-increase-likely-outcome-for-q3-europe-methanol.html | CC-MAIN-2015-11 | refinedweb | 336 | 54.26 |
can someone tell me what is wrong with my code?? i cannot make this face. thank you! using graphics.win
from graphics import * import time def moveAll(shapeList, dx, dy): for shape in shapeList: shape.move(dx, dy) def moveAllOnLine(shapeList, dx, dy, repetitions, delay): for i in range(repetitions): moveAll(shapeList, dx, dy) time.sleep(delay) def makeFace(center, win): head = Circle(center, 25) head.setFill("yellow") head.draw(win) eye1Center = center.clone() eye1Center.move(-10, 5)Width = 300 winHeight = 300 win = GraphWin('Back and Forth', winWidth, winHeight) win.setCoords(0, 0, winWidth, winHeight) # make right side up coordinates! rect = Rectangle(Point(200, 90), Point(220, 100)) rect.setFill("blue") rect.draw(win) faceList = makeFace(Point(40, 100), win) #NEW faceList2 = makeFace(Point(150,125), win) #NEW stepsAcross = 46 #NEW section dx = 5 dy = 3 wait = .05 offScreenJump = winWidth*2 for i in range(3): moveAllOnLine(faceList, dx, 0, stepsAcross, wait) moveAll(faceList2, offScreenJump, 0) # face 2 jumps off the screen moveAllOnLine(faceList, -dx, dy, stepsAcross/2, wait) moveAll(faceList2, -offScreenJump, 0) # face 2 jumps back on screen moveAllOnLine(faceList, -dx, -dy, stepsAcross/2, wait) Text(Point(winWidth/2, 20), 'Click anywhere to quit.').draw(win) # wait, click mouse to go on/exit # win.getMouse() # win.close() # main() | https://www.daniweb.com/programming/software-development/threads/269220/draw-a-face | CC-MAIN-2017-34 | refinedweb | 209 | 58.08 |
Have you ever wished that you had a sweet little API to generate HTML in Python? Dominate is probably what you are looking for.
Dominate is a Python library for creating and manipulating HTML documents using an elegant DOM API.
Now, I’m a self admitted HTML purist, but look at how the dominate API works.
from dominate.tags import ul, li list = ul() for item in range(4): list += li('Item #', item)
If done correctly HTML generators can blend in with your code nicely.
Checkout Dominate the next time you’re looking for a nice native HTML generator API for python. | http://thechangelog.com/tagged/templating/ | CC-MAIN-2014-52 | refinedweb | 102 | 63.49 |
Chris Oliver's Weblog
First steps with the JavaFX Compiler
Thanks to the efforts of Robert Field, Lubo Litchev, and Jonathan Gibbons of the Javac team, as well as Per Bothner and Brian Goetz (and also thanks to the organizational efforts of Bob Brewin, James Gosling, and Tom Ball) we have the beginnings of a JavaFX to JVM-byte-code compiler built on the same infrastructure as Javac.
Of course, the compiler is still incomplete, but it turns out to be far enough along to try a first performance benchmark (Takeuchi function):
import java.lang.System;

public class Tak {
    operation tak(x:Number, y: Number, z:Number):Number;
}

operation Tak.tak(x, y, z) {
    return if (y >= x) then z
           else tak(tak(x-1, y, z),
                    tak(y-1, z, x),
                    tak(z-1, x, y));
}

var tak = new Tak();
System.out.println("tak(24,16,8)={tak.tak(24, 16, 8)}");

$ time java -cp ".;dist/JavaFX.jar" TakMod
tak(24,16,8)=9.0

real    0m1.333s
user    0m0.010s
sys     0m0.020s
Here's the interpreter:
$ time bin/javafx.sh TakMod.fx
compile thread: Thread[main,5,main]
compile 0.04
tak(24,16,8)=9.0
init: 69.48

real    1m10.422s
user    0m0.190s
sys     0m0.130s

Speed improvement for this particular example is a pretty awesome 54x.
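For readers who don't know JavaFX Script syntax, the same Takeuchi recursion can be written in plain Java. This is my own sketch for comparison, not code from the post (the class name and timing scaffolding are mine); JavaFX's `Number` is mapped to `double` here, which is why the result prints as 9.0:

```java
// Takeuchi benchmark, plain-Java sketch (not from the original post).
public class TakJava {
    public static double tak(double x, double y, double z) {
        if (y >= x) {
            return z;
        }
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y));
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        double r = tak(24, 16, 8);
        long ms = (System.nanoTime() - t0) / 1000000;
        // Same result the post reports: tak(24,16,8)=9.0
        System.out.println("tak(24,16,8)=" + r + " in " + ms + " ms");
    }
}
```

This also happens to provide the "pure java implementation as reference" that a commenter below asks for.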
Posted at 03:25PM Jul 14, 2007 by Christopher Oliver in JavaFX | Comments[13]
There is a figure for you.
JavaFX Interpreter (Class Loader) => Run-Time
JavaFX Compiler => Compiler-Time
Comparing compiler-time performance with run-time performance is meaningless.
Posted by Jerry Tsai on July 17, 2007 at 07:15 PM PDT #
Posted by peter on July 18, 2007 at 04:33 AM PDT #
Posted by Tom Palmer on July 19, 2007 at 12:33 PM PDT #
Posted by 61.8.226.148 on July 19, 2007 at 12:33 PM PDT #
Posted by Tom Ball's Blog on July 20, 2007 at 06:50 AM PDT #
Posted by Fabrizio Giudici on July 20, 2007 at 04:28 PM PDT #
That's great to hear. I am really looking forward to JavaFX Script's release, and some visual design tools to be used along side a code editor.
I've read most of the tutorials and had a couple of questions:
1) One of JavaFX Script's uses marketed at JavaOne was for RIA development, competing with Adobe Flex. I would imagine most of the app would be written in JavaFX and run client-side. How will it access EJBs, web services, databases, etc?
2) Will there one day be a visual designer such as NetBeans' Matisse and Adobe Flex Builder for designing screens? I realize that JavaFX script is supposed to simplify Swing development but I still prefer to use a visual designer for initial screen design rather than typing code and setting properties manually.
3) The JSF guys talk about how the renderer doesn't have to be HTML and could be anything. Is there a way for JavaFX Script RIAs to take advantage of JSF's features? I can't really see how, but you may know.Thanks,
Ryan
Posted by Ryan de Laplante on July 20, 2007 at 04:58 PM PDT #
At least loop it a 1000 times to give some meaningful results.
Currently you are mostly measuring pure VM startup times vs JavaFX startup times, taking absolutely no advantage of HotSpot at all.
You also should include the timings for a pure java implementation as reference.
I just checked a C implementation, at about 12s/1000 loops on a 2Ghz core2.
Posted by Eike Dierks on July 21, 2007 at 08:50 PM PDT #
Posted by Felipe Gaucho on July 22, 2007 at 03:42 AM PDT # | http://blogs.sun.com/chrisoliver/entry/first_steps_with_the_javafx | crawl-001 | refinedweb | 680 | 70.73 |
# Example of How New Diagnostics Appear in PVS-Studio

Users sometimes ask how new diagnostics appear in the PVS-Studio static analyzer. We answer that we draw inspiration from a variety of sources: books, coding standards, our own mistakes, our users' emails, and others. Recently we came up with an interesting idea of a new diagnostic. Today we decided to tell the story of how it happened.
It all started with a review of the COVID-19 CovidSim Model project and an [article](https://habr.com/en/company/pvs-studio/blog/541034/) about an uninitialized variable. The project turned out to be small and written using the modern C++ language standard. This means it can perfectly add to the base of test projects for regression testing of the PVS-Studio analyzer core.
Before supplementing the base, we find it useful to look through warnings to search out patterns of false positives and highlight them to improve the analyzer in future. This is also an additional opportunity to notice that something else is wrong. For example, a message fails to describe an error for a particular code construct.
Luckily, the developer who was assigned to add the project to the test base approached the task thoroughly and decided to look into the MISRA diagnostics section. This wasn't an indispensable step. MISRA diagnostics are generally specific. They can be safely disabled for such projects, as CovidSim.
MISRA C and MISRA C++ diagnostics are intended for developers of embedded systems, and their point is to limit the use of unsafe programming constructs. For example, it is not recommended to use the *goto* operator ([V2502](https://www.viva64.com/en/w/v2502/)), since it provokes the creation of complex code, where it is easy to make a logical error. Read more about the philosophy of the MISRA coding standard in the article "[What Is MISRA and how to Cook It](https://habr.com/en/company/pvs-studio/blog/482486/)".
As for application software development, it doesn't make sense to enable them. The CovidSim project could do without them. Otherwise, a user will simply drown in a huge number of messages that are of little use in this case. For example, when experimenting with this set of diagnostics, we received more than a million warnings for some medium-sized open projects. Roughly speaking, every third line of code might be faulty in the view of MISRA. No one will scrape through all warnings, much less fix them. The project is either developed immediately taking into account MISRA recommendations, or this coding standard is irrelevant for it.
Anyway, let's get back to the topic. So, while skimming through the MISRA warnings, a colleague caught a glimpse of the [V2507](https://www.viva64.com/en/w/v2507/) warning issued for this code snippet.
```
if (radiusSquared > StateT[tn].maxRad2) StateT[tn].maxRad2 = radiusSquared;
{
SusceptibleToLatent(a->pcell);
if (a->listpos < Cells[a->pcell].S)
{
UpdateCell(Cells[a->pcell].susceptible, a->listpos, Cells[a->pcell].S);
a->listpos = Cells[a->pcell].S;
Cells[a->pcell].latent[0] = ai;
}
}
StateT[tn].cumI_keyworker[a->keyworker]++;
```
The V2507 rule forces us to wrap the bodies of conditional statements in curly braces.
At first, our meticulous colleague thought that the analyzer had failed. After all, there is a block of text in curly braces! Is this a false positive?
Let's take a closer look. The code only seems to be correct, but it is not! The curly braces are not attached to the *if* statement.
Let's tweak the code for clarity:
```
if (radiusSquared > StateT[tn].maxRad2)
StateT[tn].maxRad2 = radiusSquared;
{
SusceptibleToLatent(a->pcell);
....
}
```
Agree, this is a nice bug. It will surely be one of the Top10 C++ bugs we found in 2021.
What follows from this? The MISRA standard approach works! Yes, it forces you to write curly braces everywhere. Yes, it's tedious. Though this is a reasonable price to pay for improving the reliability of embedded applications used in medical devices, automobiles, and other high-responsibility systems.
I'm glad developers who use the MISRA standard are doing fine. However, recommending that everyone use curly braces is a bad idea. With this approach it is very easy to bring the analyzer to the state where it becomes impossible to use it. There will be so many warnings that no one will care about them.
Finally we got to the idea of a new General Analysis diagnostic and the following rule.
The analyzer will issue a warning in case the following conditions are met for the *if* statement:
* the entire conditional *if* statement is written in one line and has only a *then* branch;
* the next statement after *if* is a compound statement, and it is on different lines with *if*.
We look forward to getting a decent rule that gives few false positives.
This is how this idea is now described in our task tracker. Perhaps something will be done differently in the implementation process, but it doesn't really matter at this point. The main thing is that a decent diagnostic rule will appear, which will begin to identify a new error pattern. Next, we will extend it to the C# and Java cores of the PVS-Studio analyzer.
We just looked at the unique example of how a new diagnostic rule came up, which we will implement in PVS-Studio. Kudos to the CovidSim project, the MISRA coding standard, and our colleague's observation skills.
Thank you for your attention and follow me into the world of C++ and bugs :). [Twitter](https://twitter.com/Code_Analysis). [Facebook](https://www.facebook.com/andrey.karpov.98/).
**Additional links:**
1. [Technologies used in the PVS-Studio code analyzer for finding bugs and potential vulnerabilities](https://www.viva64.com/en/b/0592/).
2. [Under the Hood of PVS-Studio for Java: How We Develop Diagnostics](https://www.viva64.com/en/b/0752/).
3. [Machine Learning in Static Analysis of Program Source Code](https://www.viva64.com/en/b/0706/). | https://habr.com/ru/post/548080/ | null | null | 1,016 | 58.69 |
Bug #5480open
Saxon EE Adding Extraneous namespaces to each ancestor node, i.e. xmlns=""
0%
Description
Hi There - we are using Saxon EE 10.6.0 for .NET framework and having an issue with extraneous namespaces in the result-document output when transforming xml.
We have several XSLTs that transform XML which worked properly with Saxon EE 9.6, but with 10.6 we're getting extraneous href attributes added to each node, i.e. <.... When processed using 9.6, the results look correct. We are using a namespace-aware DOM to load the XML, so the issue is unrlated to that.
If anyone knows of a configuration setting or other change that would prevent those hrefs from appearing in each ancestor node it would be most appreciated.
Attached is an exammple of the xsl we are using.
Files
Updated by O'Neil Delpratt 6 days ago
- Project changed from SaxonC to Saxon
- Category set to .NET API
- Found in version deleted (
10.6)
Updated by Martin Honnen 5 days ago
Can you add a small but representative XML input sample you process with the stylesheet you have already attached? It would also help if you show the relevant .NET (e.g. C#) code you use to run the transformation.
Updated by John Crane 5 days ago
- File SaxonSampleCode.cs SaxonSampleCode.cs added
Hi All,
Thanks for looking at this issue so quickly. I think we have discovered the error we were making, and now have transforms working correctly.
When doing the transform, we were previously using a DomDestination object for the results. We changed that to an XdmDestination type, which seems to have resolved our issue.
Attached is a sample of the code we are using - this is from a test application, the actual code is a bit more complicated - but this shows essence of the transform we are doing. Unfortunately the XML is quite long and not easily modified for sharing.
You can see the older DomDestination references are commented out - the uncommented code is what is now working. We've done preliminary tests that look good, and will continue to do more.
I think we have the issue resolved - but if you have any feedback on the code or suggestions in general we'd certainly appreciate them.
Many thanks again for looking so quickly. Incidentally, I meant to enter this as 'Support' - once I hit submit it was too late...
John C
Updated by Michael Kay 5 days ago
@John, for your reference Martin Honnen is a friendly user who solves a lot of bugs before we get to them (and also raises quite a few). Your thanks go to him and not to Saxonica!
You should definitely avoid using the DOM with Saxon unless you really need it, for performance reasons. If you want serialized output, use a Serializer as the destination.
But we'll keep the bug open, because we need to see why it isn't working properly with a DOM destination. We should be eliminating redundant namespace declarations when writing to the DOM tree.
Please register to edit this issue
Also available in: Atom PDF | https://saxonica.plan.io/issues/5480 | CC-MAIN-2022-21 | refinedweb | 522 | 65.32 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Dear. MAXON's SDK Team
I use R21 on microsoft windows with python.
I have threads that how to get object's position and material color on all time line.
below my simple source code.
import c4d # import Cinema 4D module
doc = c4d.documents.GetActiveDocument() # Cinema 4D active document
number_of_sphere = 10 # [Set] sphere number
frames_count = 2600 # [Set] frames counter
for i in range(0, number_of_sphere):
for f in range(0, frames_count):
doc.SetTime(c4d.BaseTime(f, doc[c4d.DOCUMENT_FPS]))
c4d.EventAdd()
obj = doc.SearchObject('Sphere_' + str(i))
#(obj.GetName() + ", Frame = " + str(f) + ", (X,Y,Z) = " + str(x) + "," + str(y) + "," + str(z) + ", RGB = " + str(r) + "," + str(g) + "," + str(b))
Please, guide to me for method of this.
Cheers,
MAXON's SDK Team
Hi jhpark, thanks for reaching out us.
Aside from the notes left by @blastframe - thanks dude for the remarks - I think it's worthy, thinking of a more generic scene, to mention the need BaseDocument::ExecutePasses() to be sure that everything is actually evaluated before querying the scene rather than the EventAdd() which serves a different scope.
This function is responsible to execute the scene evaluation and, consequently to be sure that, moving from a frame to another, all the items in the scene reflect the changes imposed by the frame switch.
The approach used by @blastframe actually operates on CTracks and key but, although this approach works fine for your specific case, when more evaluation dependencies are created in the scene you could easily end up in unexpected results.
The code could then look like
frames_count = 10 # [Set] frames counter
for f in range(0, frames_count):
doc.SetTime(c4d.BaseTime(f, doc.GetFps()))
# evaluate the scene
doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_NONE)
obj = doc.SearchObject('Cube')
#("Frame = " + str(f) + ", (X,Y,Z) = " + str(x) + "," + str(y) + "," + str(z) + ", RGB = " + str(r) + "," + str(g) + "," + str(b))
Hi @jhpark!
I don't work for the SDK team, but I believe the script below will do what you want.
Some quick notes about posting:
When entering your code into a post on this forum, make sure you hit this button first
It creates code tags in your post. Put your code in between those and then it will format your code automatically.
Also, after submitting, hit the button Topic Tools at the bottom right of your post to Ask as Question.
When someone has answered your question correctly, click this button at the bottom of their post.
This makes it clear to the moderators when the question has been correctly answered.
Here's the code. Because you were using ID_BASEOBJECT_COLOR, I was unsure if you wanted the object's display color or the material color (they are two different things), but I wrote this for the sphere's texture tags' material's color. Also, the code is for the spheres' relative position. More work would need to be done to get the animating position track values into global space.
ID_BASEOBJECT_COLOR
import c4d
from c4d import gui
def GetNextObject(op):
#function for navigating the hierarchy
if op==None: return None
if op.GetDown(): return op.GetDown()
while not op.GetNext() and op.GetUp():
op = op.GetUp()
return op.GetNext()
c4d.EventAdd()
def getPreviewRange(doc,fps):
#returns the active preview range
fps = doc.GetFps()
fromTime = doc.GetLoopMinTime().GetFrame(fps)
toTime = doc.GetLoopMaxTime().GetFrame(fps)+1
return [fromTime,toTime]
def convertVecToRgb(vector):
#converts vector to rgb list
return [vector[0]*255,vector[1]*255,vector[2]*255]
def main(doc):
fps = doc.GetFps()
previewRange = getPreviewRange(doc,fps) #rather than needing to set frames manually, you can simply resize your preview range.
frame_count = previewRange[1]-previewRange[0]
# this section navigates the hierarchy and saves all of the spheres to a list called 'output'
# it's better to do this than to use doc.SearchObject in the case you have multiple spheres with the same name
obj = doc.GetFirstObject()
if obj==None:
gui.MessageDialog('There are no objects in the scene.')
return
output = []
while obj and obj!=None:
if obj.GetType() == c4d.Osphere:
output.append(obj)
obj = GetNextObject(obj)
if len(output) == 0:
gui.MessageDialog('There are no spheres in the scene.')
# loops through spheres in the scene
for sphere in output:
#prints a separating line to the console
print '#' * 80
for f in range(previewRange[0], previewRange[1]):
doc.SetTime(c4d.BaseTime(0, doc[c4d.DOCUMENT_FPS]))
keyTime = c4d.BaseTime(f,fps) #get the current frame
# POSITION
pTracks = sphere.GetCTracks() #get the sphere's animating tracks
pos = [sphere.GetMl().off.x,sphere.GetMl().off.y,sphere.GetMl().off.z] #get the sphere's default relative position
#replace those values with the animating ones.
for t in pTracks:
descid = t.GetDescriptionID() #get the track's id
if descid[0].id == c4d.ID_BASEOBJECT_REL_POSITION: #see if it matches the object's position track
curve = t.GetCurve() #get the track's animation curve
keyvalue = curve.GetValue(keyTime, fps) #get the animation curve's value at the current frame
if descid[1].id == c4d.VECTOR_X:
pos[0] = keyvalue #add to x
elif descid[1].id == c4d.VECTOR_Y:
pos[1] = keyvalue #add to x
elif descid[1].id == c4d.VECTOR_Z:
pos[2] = keyvalue #add to z
# MATERIAL COLOR
tags = sphere.GetTags() #get sphere's tags
matColor = [] #create material color list
for tag in tags: #loop through sphere's tags
if tag.GetType() == c4d.Ttexture: #check if tag is a texture tag
mat = tag.GetMaterial() #if yes, get the tag's material
tracks = mat.GetCTracks() #get the material's animating tracks
for t in tracks:
descid = t.GetDescriptionID() #get the track's id
if descid[0].id == c4d.MATERIAL_COLOR_COLOR: #see if it matches the material color track
curve = t.GetCurve() #get the track's animation curve
keyvalue = curve.GetValue(keyTime, fps) #get the animation curve's value at the current frame
matColor.append(keyvalue*255) #add r,g,b to matColor
if len(tracks) == 0: #in case it's not animating, use general Color
matColor = convertVecToRgb(mat[c4d.MATERIAL_COLOR_COLOR])
# I prefer using string formatting with the placeholder %s for strings, %d for numbers,
# and the % as the replacement operator
print("Name: %s, Frame: %d, Position (x,y,z): %d,%d,%d, Material Color (r,g,b): %d,%d,%d"%(
sphere.GetName(),f,pos[0],pos[1],pos[2],
matColor[0],matColor[1],matColor[2]))
if __name__=='__main__':
# rather than using documents.GetActiveDocument, I found that you can pass a reference to the document using this method
main(doc)
Here's a scene file where the object's display colors and material colors are different. The display colors are visible in the viewport, but you will see the material color if you render.
Spheres.c4d | https://plugincafe.maxon.net/topic/12142/how-to-get-object-s-position-and-material-color-on-all-time-line | CC-MAIN-2021-49 | refinedweb | 1,146 | 58.89 |
lex Peshkov escreveu:
>>;
>
We remove this typedef.
BTYACC will generate a union based on the %union construction.
> ?
Example grammar:
------------
%type <expr> value datetime_value_expression
%union
{
ExprNode* expr;
int intConstant;
}
%%
value : datetime_value_expression
;
datetime_value_expression : CURRENT_DATE
{
$$ = new CurrentDateExprNode();
}
;
------------
"value" and "datetime_value_expression" is declared as "expr".
Every time a $n or $$ is a "value" or "datetime_value_expression", the
code generated will be yyvsp[i].expr.
The lexical symbols could/should also be declared.
Adriano
ISQL crash when converted-from-double string longer than 23 bytes
-----------------------------------------------------------------
Key: CORE-1363
URL:
Project: Firebird Core
Issue Type: Bug
Components: ISQL
Affects Versions: 2.1 Beta 1, 2.1 Alpha 1, 2.0.1, 1.5.4
Environment: Windows XP, Intel 32
Reporter: Bill Oliver
This has been around since dirt.
Try this in ISQL:
-- this did crash
select -2.488355210669293e+39 from rdb$database;
Output is this, followed by crash
> -- this did crash
> select -2.488355210669293e+39 from rdb$database;
> =======================
> -2488355210669293000000000000000000000000.000000
Dmitry said that the crash is ISQL-specific, it just doesn't expect a converted-from-double string to be longer than 23 bytes. Otherwise, the allocated buffer is trashed and the heap corruption happens.
Originally reported in CORE-1362, Claudio asks this be entered as a separate ticket against ISQL.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
For more information on JIRA, see:
On Monday 16 July 2007 18:31, Adriano dos Santos Fernandes wrote:
> >> Being contrary of both your opinions :-), I prefer to have the common
> >> part as suffix and not prefix.
> >
> > Do we really need a SQL prefix/suffix in the SQL parser? :-) If you care
> > about ambiguities with other kind of nodes, I'd rather introduce the
> > Dsql namespace with classes Node etc.
I assumed that all it will be in Dsql namespace. Need in it is obvious. As
long as we do not have common jrd/dsql node with no BLR between them :)
> Node is a very generic name, that could be used in others places, so I
> like SQLNode.
>
> But for the derived classes, we can use only Node.
> It sounds better, but is less consistent name convention.
I do bo understand why should we invite rules for prefixes/suffixes as long as
there is standard for C++ way to group related objects together, called
namespace?
Let it be namespace Dsql::Node. What's wrong with it?
On Monday 16 July 2007 17:31, Adriano dos Santos Fernandes wrote:
> My suggestion is almost identical, but instead of deriving everything
> from one base class, I suggest we create an Expr and a Stmt class, both
> derived from SQLNode:
>
> class SQLNode
> {
> };
>
> class ExprSQLNode : public SQLNode
> {
> };
>
> class StmtSQLNode : public SQLNode
> {
> };
>
> class ConcatenateExprSQLNode : public ExprSQLNode
> {
> };
>
> class InsertStmtSQLNode : public StmtSQLNode
> {
> };
>
> Operations from Expr and Stmt is different.
> Expr should have many functions that doesn't apply to Stmt.
No objections as long as we need not to do:
(ExprNode*)node
or you suggest to do it only as intermediate step? In that case OK for me.
> ?
Alex.
Hi there
This nice new feature (CREATE VIEW without explicit column list) seems
not to be 100% documented yet, so I'm not sure if this is a bug or
implementation limitation.
As of FB 2.1 beta 1, the CREATE VIEW requires the column list when the
VIEW is defined as UNION:
create view X as
select rdb$relation_name from rdb$relations where rdb$system_flag = 0
union all
select rdb$relation_name from rdb$relations where rdb$system_flag = 1
This operation is not defined for system tables.
Dynamic SQL Error.
SQL error code = -607.
Invalid command.
must specify column name for view select expression.
It works fine with "create view X (name) as"
Any comments?
Regards
Emil
Hi,
fix me, if I'm wrong. When I grant some privilege on table or view the
value of rdb$object_type is same (rdb$object_type = 0).
There's no direct way how to recognize table from view and vice versa.
I have to have to look into other i.e. rdb$relations to determine
this.
Am I right?
--
Jiri {x2} Cincura (Microsoft Student Partner) | | https://sourceforge.net/p/firebird/mailman/firebird-devel/?viewmonth=200707&viewday=17 | CC-MAIN-2017-43 | refinedweb | 684 | 57.16 |
How to list the package repository URL in SUSE? - linux
I tried zypper repos but it does not list the URL of the repository. I need this URL as I need the same package repository in another machine and this package repository is private to our organization. Is there a way I can get the URL?
I suppose you can use
zypper repos -u
or
zypper lr -u
which will also give you the URI for the repository.
Related
Creating a full replica/offline copy of the public pypi repository
Nexus Repository Manager OSS 3.9.0-01. I wish to create a 'proxy' Nexus repository that will a replica of the public pypi repository. The other machines can then be configured to point to this Nexus repo. so that a 'pip install' on these machines works even if there is no Internet connection. Accordingly, I created a proxy repository of type 'pypi(proxy)'. When I browse this repo, there aren't any components/assets but whenever someone does a 'pip install' by pointing to this repo, the package shows up in the interface e.g: pip install --user pyspark --verbose What I am looking for is to clone/copy all the packages in the PyPI repository at once so that the future 'pip install' refers to this local copy and doesn't go to the Internet every time. Once a day, the local copy should be updated. Is it possible to do so in Nexus OSS?
What you are trying to achieve is a PyPI mirror repository, not a proxy. The PyPI proxy repository behaviour you described is correct, because it is a proxy, not a mirror. Nexus Repository Manager does not provide functionality to to create a mirror of another repository. However, you could try to use a PyPI mirror client (e.g. bandersnatch) to obtain a copy of all packages, then move those files over to your PyPI hosted repository and ask Nexus to reindex the files. Later you would have to periodically repeat the process to keep your mirror up to date.
Install Python Package From Private Bitbucket Repo
I created a Python 3.5 package for work, which is in a private Bitbucket repo and I can easily pull the code and do a "python .\setup.py install" to have it install, but I want to try to eliminate the step of having to pull the code and have multiple copies on my machine and at the same time make it easier for my coworkers to install/update the package. Is it possible to use git bash or cmd (we are all on Windows) to install the package and ask for credentials in the process?
You can use the https option listed in pip_install. Sample Code: pip install git+ You can use the url Bitbucket gives you when you request the clone url. Just remember to add the git+ to it.
Import external repositories to Gitlab with sshfs
Is that possible to have all repositories on some local server and also browse it with Gitlab (hosted on other local server)? I use Gitlab v8.3.3. and I have a following situation: - I have all of my repositories stored at local server, say: 192.168.5.5 at /git - I also have a local virtual machine that hosts Gitlab, at: 192.168.5.6 - I mounted my local git server at git-data directory (that's where repositories are being kept) by running: sshfs my.user#192.168.5.5:/git /var/opt/gitlab/git-data/repositories/server-group server-group is an empty directory created by Gitlab when I created a new group with the same name. Now I would like to be able to browse repositories mounted this way via Gitlab. Is that possible? I believe it should be but it needs some extra configuration? Of course simple: gitlab-ctl reconfigure or gitlab-ctl restart doesn't help and Gitlab group server-group has 0 projects even though in it's directory I have valid "repos.git".
You need to create the projects in GitLab before they will show up. But there are a couple considerations, GitLab is expecting the repos to be bare repos, and also will be expected a HEAD file to be set. My advice would be to create the projects by importing each repo from your git server. This will create the bare repos with all your commits and branches, and create the projects in the GitLab database. Then if you still want the repos on a seperate server, you can move the GitLab created folders to the other server, and then mount them as you were trying. Alternatively, if you still want to try to get your current repos to show up, and hope they work. (Not sure if they will if they aren't bare repos) You can try: unmounting your git repo that you have setup Creating empty projects for each of your repos, in a way that their path will match your current repos. remount your git repos into the place where the empty repos where created.
I resolved this issue by: 1. creating empty repositories with the same name using GitLab API curl -k --header "PRIVATE-TOKEN: <your_private_token>" -H "Content-Type: application/json" -d '{"name":"<name_here>","path":"<the_same_name_here>","visibility_level":"10","namespace_id":"<id_of_my_group>"}' "" 2. mounting our repositories as described in How can I use GitLab on one server and store all of the repositories on another? (including changing permissions - see answer)
Import the entire android repositories to GitLab
I want to create a copy of the entire android repository (which uses the repo tool) Is there a simple way to duplicate the source into my own GitLab server?
Even if the android repo uses the repo tool, you end up with a regular git repo, since a command like repo sync is like a git clone. Simply create an empty repo on your GitLab server, then go to your local repo, and: git remote add gitlab /url/of/your/gitlab/repo git push --mirror gitlab
I want to fix this problem for long time. I think the man who has the same problem is less. because you should be a android system developer not just a simple git user can met this problem. There is a good way to do this. by default, unless you changed it in the /etc/gitlab/gitlab.rb file. For installations from source, it is usually located at: /home/git/repositories or you can see where your repositories are located by looking at config/gitlab.yml under the gitlab_shell => repos_path entry.) [...]
Cannot install a specific git branch on github with pip - Permission denied (publickey)
I'm trying to install a forked repo () on Github with pip but without success. When I use pip install -e git+git://github.com/theatlantic/django-ckeditor.git#egg=django-ckeditor It does install the repo's content, but an older version of it, without the new changes I'm interested in. So I tried to force pip to get the most updated branch, which is apparently atl/4.3.x but I get this weird error, like if the branch's name would be incorrect or something like that : $ pip install -e git+git://github.com/theatlantic/django-ckeditor.git#"atl/4.3.x"#egg=django-ckeditor Obtaining django-ckeditor from git+git://github.com/theatlantic/django-ckeditor.git#atl/4.3.x#egg=django-ckeditor Updating /home/mathx/.virtualenvs/goblets/src/django-ckeditor clone (to atl/4.3.x) Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. Clone of 'git#github.com:theatlantic/ckeditor-dev.git' into submodule path 'ckeditor/static/ckeditor/ckeditor-dev' failed Am I making a mistake somewhere ? Thanks.
A user in IRC came in asking about this similar situation, and I think the answer we found applies here as well. (The user linked to this question saying "the same thing is happening", that's how I came across it.) Consider the output from the OP: Obtaining django-ckeditor from git+git://github.com/theatlantic/django-ckeditor.git#atl/4.3.x#egg=django-ckeditor The OP was attempting to pip install django-ckeditor via anonymous git (a git:// URL). The error was: Clone of 'git#github.com:theatlantic/ckeditor-dev.git' into submodule path 'ckeditor/static/ckeditor/ckeditor-dev' failed If you look at, django-ckeditor pulls in ckeditor-dev, and does so with an SSH URL. GitHub does not allow anonymous clones via SSH. Any use of git via SSH must use a registered SSH key. A user would have to sign up for GitHub, register their public key, and configure the private key appropriately to be used when this installation is happening. The repository owner (theatlantic) should change their submodule URL to an https:// URL, or anonymous git://.
The error message you posted: Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. ...indicates you don't have access to the repo. You might have luck using GitHub's HTTP URL instead: pip install -e git+ | https://jquery.developreference.com/article/10000586/How+to+list+the+package+repository+URL+in+SUSE%3F | CC-MAIN-2020-40 | refinedweb | 1,537 | 55.03 |
How can I pass output of a Python script to gets function of a c program ? My c program code is below :
#include <stdio.h>
int main()
{
char name[64];
printf("%p\n", name);
fflush(stdout);
puts("What's your name?");
fflush(stdout);
gets(name);
printf("Hello, %s!\n", name);
return 0;
}
$./a.out "$(python -c 'print "A"*1000')"
To send data from the stdout of one command into the stdin of another command, you need a "pipe":
python -c 'print "A"*1000' | ./a.out
I assume that the buffer overrun here is deliberate, so I'll leave out the lecture about the unsafety of gets.
Normally, a command-line utility will acquire its input from the argument array (argv in the parameters to main), which usually avoids the need for copying the data and thus any risk of a buffer overrun.
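The same pipe can also be driven from Python itself with subprocess. A minimal sketch, using cat as a stand-in for ./a.out (the binary name and a POSIX environment are assumptions here):

```python
import subprocess

# Pipe generated bytes into another process's stdin, the way
# `python -c '...' | ./a.out` feeds gets() via stdin.
payload = b"A" * 1000 + b"\n"
result = subprocess.run(["cat"], input=payload, capture_output=True)
print(len(result.stdout))  # 1001
```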
Thanks Christopher for valuable insight.
Right now we don't have scenario which it needs to query data from multiple
customers at once. Perhaps some time in the future, and that 'future' seems
could be years from now (or perhaps never), so I think I am inclined to
implement them as separate tables for now.
Though they are in separate tables, I will still apply visibility column for
each row in the table. The visibility string could be something like
customer id. The caller will be another app of ours, so we can trust it
(still need to pass that customer id as authz string).
In terms of scan performance, is it true that whether we shard by column family or by separate tables won't matter much, since I'd think we can also create a separate locality group for each column family?
Thanks for the tips on using namespaces; originally I thought of prefixing the table names with the customer id. I guess there's no difference, right?
Thanks,
Z
--
Sent from the Developers mailing list archive at Nabble.com.
A tiny image classification library.
Project description
Neuron

A tiny and very high-level transfer learning library for image classification 📚
What is Neuron exactly?
Neuron is a tiny library that aims to simplify image classification (if you don't know what image classification is, it's the process of telling from an image what object / thing / feature is on it).
Using Neuron, you'll be able to build a production-grade model in under 5 lines of code. Yes, you read that correctly: 5 lines. Whereas in common machine learning libraries like TensorFlow, PyTorch or Keras, you would do it in hundreds of lines.
Of course, these libraries are much more complex and versatile than Neuron. Neuron isn't replacing these libraries if you need to build your own graph, but if you're doing so, you probably already know this.
Install
Copy-paste this into a terminal:
pip install neuron-ml
Demo
Here is an example of what Neuron can do:
import neuron_ml as n  # TensorFlow

data = n.load("./dataset/")  # formats the data
model = n.train(data)  # train the data
n.export(model, ["./Model.pb", "./Labels.txt"])  # Exports everything
n.clean(model)  # Clean temporary files
And it can also load files and classify them (before using it for production, make sure you have good hardware, as the model can take up to 5 seconds to load and run).
import neuron_ml as n

model = n.model(["./Model.pb", "./Labels.txt"])  # Load the model
graph = n.graph(model)  # Generate the graph
labels = n.labels(model)  # Get the labels
n.classify(graph, labels, "./dataset/Celery/celery-1.jpg")  # Classify. Will return a result object
See the wiki for more informations.
Versioning
We use SemVer for versioning. For the versions available, see the tags on this repository.
Authors
Also look at the list of contributors who participated in this project. If you don't code but you have great ideas, don't hesitate to write your idea in the issue part. If your idea is accepted, I will add you to this list 😊.
License
This project is licensed under the MIT License - see the <LICENSE> file for details
When you share your Gatsby blog to the world, you’ll want to make sure you give a good first impression. With react-helmet and meta tags, we can make sure your posts show up in Google like this:
And on Twitter like this:
What are meta tags?
Meta tags live in the header of every web page:
<html>
  <head>
    <title>Emma Goto</title>
  </head>
</html>
This is what Google, Twitter and other sites use when they are rendering previews of your website.
It’s important to get your meta tags right, because you want users to click the link! They’re more likely to do this if what they see intrigues them, whether it’s from a fancy image or a relevant description.
Install the React Helmet plugin for Gatsby
To get started with meta tags on your Gatsby blog, you’ll need to install gatsby-plugin-react-helmet.
If you created your blog using a template like gatsby-starter-blog, you’ll probably already have this plugin installed.
If not, you can install it:
yarn add gatsby-plugin-react-helmet // or npm install gatsby-plugin-react-helmet
And then make sure to add it to your list of plugins:
// gatsby-config.js
const config = {
  plugins: [
    // ... all your other plugins
    'gatsby-plugin-react-helmet',
  ]
}
Create a component using React Helmet
After installing the plugin, you can create your React Helmet component:
// src/components/seo/index.js
import React from 'react';
import Helmet from 'react-helmet';

const SEO = () => (
  <Helmet
    htmlAttributes={{
      lang: 'en',
    }}
  />
);

export default SEO;
Make sure to render this component on every page of your blog!
Pass in props and use the useStaticQuery hook
Before we get started with the meta tags, you’ll also want to make sure that you pass in any relevant data as props, like post titles and slugs:
const SEO = ({ description, title, slug }) => {
You can also make use of the `useStaticQuery` hook to grab your site's metadata:
// src/components/seo/index.js
import { useStaticQuery, graphql } from 'gatsby';

const SEO = ({ description, title, slug }) => {
  const { site } = useStaticQuery(
    graphql`
      query {
        site {
          siteMetadata {
            title
            description
            author
            siteUrl
          }
        }
      }
    `,
  );
This will grab any site metadata that has been stored in your config file:
// gatsby-config.js
const config = {
  siteMetadata: {
    title: `Emma Goto`,
    description: `Front-end development and side projects.`,
    author: `Emma Goto`,
    siteUrl: ``,
  },
  // ...
}
Adding your page’s title
Now we can get started with the most important piece of information - your page’s title. This is what shows up as the title of your post on Google, as well as what you see as the title of the page in your browser.
<Helmet
  title={title}
  titleTemplate={`%s · ${site.siteMetadata.title}`}
  defaultTitle={site.siteMetadata.title}
/>
There are three separate title props we can pass in here. The logic is as follows:
- If the `title` value exists, it will be used in combination with the `titleTemplate` value
- Otherwise, it will fall back to showing the `defaultTitle` value
Using my blog as an example, if I'm on a blog post's page I pass in its `title` as a prop. My title looks like this:

Name of the blog post · Emma Goto

If I'm on the home page, the `title` value will be undefined, and instead the `defaultTitle` is used:

Emma Goto
Adding your page’s description
After your title, the second-most important thing would be your description. This is what can show up underneath the title in a Google search result.
Similar to the title, I either have a description specific to my post (passed in as a prop), or else I show my default description:
<Helmet
  //...
  meta={[
    {
      name: 'description',
      content: description || site.siteMetadata.description,
    },
  ]}
/>
Getting a post’s description
If you want specific descriptions for your posts, you can manually write them and store it on your post’s front matter.
If you have a huge backlog of posts without custom descriptions, or you don't want to write them yourself, each post's first 140 characters are stored in an `excerpt` value:
query($slug: String!) {
  markdownRemark(frontmatter: { slug: { eq: $slug } }) {
    excerpt
    frontmatter {
      slug
      title
    }
  }
}
Adding Open Graph meta tags
To add social media-specific meta tags, we can use Open Graph meta tags. These meta tags were originally created and used by Facebook, but are now also used by other social media sites like Twitter.
{
  property: `og:title`,
  content: title || site.siteMetadata.title,
},
{
  property: 'og:description',
  content: description || site.siteMetadata.description,
},
{
  property: `og:type`,
  content: `website`,
},
If you don’t use these, social media sites may fall back to your default title and description values. But I would include them just to be on the safe side.
You’ll notice that we are using
propertyinstead of
namefor the meta tag name here. This is something you’ll need to do specifically when using Open Graph meta tags.
Adding Twitter-specific meta tags
By default, Twitter will make use of the Open Graph meta tags. But if you wanted to have specific meta tags only for Twitter, Twitter also provides their own meta tags which will override the Open Graph ones:
{
  name: 'twitter:title',
  content: title || site.siteMetadata.title,
},
Should I add the twitter:creator and twitter:site meta tags?
You may have come across `twitter:site` and `twitter:creator`:
{
  name: `twitter:creator`,
  content: '@emma_goto',
},
{
  name: `twitter:site`,
  content: '@emma_goto',
},
In the past, Twitter link previews would contain your Twitter handle, but as far as I can tell, these values are no longer used.
The meta tags are still mentioned in their documentation though, so if you still want to include them it doesn’t hurt to do so.
Adding images to your link previews
To add an image when you share your blog’s link on Twitter, you’ll need Open Graph’s image meta tag:
{
  property: 'og:image',
  content: 'image-url-here',
},
Twitter can render your link preview image in two ways. Either with a 240x240 square image:
Or a larger 800x418 image, like you saw at the top of this post.
When choosing an image, you’ll also have to let Twitter know which size you are using. For the large image you’ll need to include this:
{
  name: 'twitter:card',
  content: 'summary_large_image',
},
And for the small, square image, you’ll need to add this:
{
  name: 'twitter:card',
  content: 'summary',
},
Pro-tip: Cover images on DEV
If you are cross-posting your Gatsby posts to DEV, you’ll be able to provide a 1000x420 cover image to be used on DEV.
This same image will be used if your DEV post is shared on Twitter - and since Twitter images have a width of 800px the edges of your DEV cover image will be cut off. You’ll want to make sure that your DEV cover images have a sufficient amount of padding on either side, so that nothing important is cut off.
For reference, this is the cover image that I use on DEV:
Adding your favicon
To get an icon to show up next to your website's name, you’ll need to include a favicon value:
import favicon from '../../images/favicon.png';

<Helmet
  link={[
    {
      rel: 'shortcut icon',
      type: 'image/png',
      href: `${favicon}`,
    },
  ]}
/>
My favicon is a 192x192 PNG image, which seems to do the trick.
How do I get dates to show up on Google search results?
You may have noticed that when you search on Google, some posts will show a published date. This isn’t something you can explicitly control or set a meta tag for. As long as you clearly render a date on your posts, Google should be able to pick it up, and will decide whether it's worth showing or not.
Tools to test your link previews
If you want to test how your site would look if it was shared on social media, Twitter and Facebook both provide their own preview tools to do so:
Conclusion
This post should give you everything you need to know to add meta tags to your Gatsby blog. To see the full code for my site’s SEO component, you can head over to Github.
If you’re adding any logic to your SEO component (like different sets of meta tags on different pages) I would also recommend adding some unit tests! You can check out the unit tests for my SEO component for some inspiration.
Thanks for reading!
Discussion (3)
Awesome post 🎉
Would love to see a part 2 for canonical metadata!
Thanks Tyler! I do have this plugin in my `gatsby-config` file:
To make sure that when I re-post my posts to DEV, my website still counts as the original page. To be honest that's all I know about canonical metadata - is there anything else you think is worth knowing?
I'm fairly new to figuring out canonical urls myself - this comment thread is most of what I know about the topic 😅 :
dev.to/terabytetiger/comment/1473g
I come from a background in static languages. Can someone explain (ideally through example) the real world advantages of using **kwargs over named arguments?
To me it only seems to make the function call more ambiguous. Thanks.
You may want to accept nearly-arbitrary named arguments for a series of reasons — and that's what the `**kw` form lets you do.

The most common reason is to pass the arguments right on to some other function you're wrapping (decorators are one case of this, but FAR from the only one!) — in this case, `**kw` loosens the coupling between wrapper and wrappee, as the wrapper doesn't have to know or care about all of the wrappee's arguments. Here's another, completely different reason:
d = dict(a=1, b=2, c=3, d=4)
if all the names had to be known in advance, then obviously this approach just couldn’t exist, right? And btw, when applicable, I much prefer this way of making a dict whose keys are literal strings to:
d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
simply because the latter is quite punctuation-heavy and hence less readable.
When none of the excellent reasons for accepting `**kwargs` applies, then don't accept it: it's as simple as that. IOW, if there's no good reason to allow the caller to pass extra named args with arbitrary names, don't allow that to happen — just avoid putting a `**kw` form at the end of the function's signature in the `def` statement.
As for using `**kw` in a call, that lets you put together the exact set of named arguments that you must pass, each with corresponding values, in a dict, independently of a single call point, then use that dict at the single calling point. Compare:
if x: kw['x'] = x
if y: kw['y'] = y
f(**kw)
to:
if x:
    if y:
        f(x=x, y=y)
    else:
        f(x=x)
else:
    if y:
        f(y=y)
    else:
        f()
Even with just two possibilities (and of the very simplest kind!), the lack of `**kw` is already making the second option absolutely untenable and intolerable — just imagine how it plays out when there are half a dozen possibilities, possibly in slightly richer interaction… without `**kw`, life would be absolute hell under such circumstances!
Another reason you might want to use `**kwargs` (and `*args`) is if you're extending an existing method in a subclass. You want to pass all the existing arguments onto the superclass's method, but want to ensure that your class keeps working even if the signature changes in a future version:
class MySubclass(Superclass):
    def __init__(self, *args, **kwargs):
        self.myvalue = kwargs.pop('myvalue', None)
        super(MySubclass, self).__init__(*args, **kwargs)
Real-world examples:
Decorators – they’re usually generic, so you can’t specify the arguments upfront:
def decorator(old):
    def new(*args, **kwargs):
        # ...
        return old(*args, **kwargs)
    return new
Places where you want to do magic with an unknown number of keyword arguments. Django’s ORM does that, e.g.:
Model.objects.filter(foo__lt = 4, bar__iexact="bar")
There are two common cases:
First: You are wrapping another function which takes a number of keyword argument, but you are just going to pass them along:
def my_wrapper(a, b, **kwargs):
    do_something_first(a, b)
    the_real_function(**kwargs)
Second: You are willing to accept any keyword argument, for example, to set attributes on an object:
class OpenEndedObject:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

foo = OpenEndedObject(a=1, foo='bar')
assert foo.a == 1
assert foo.foo == 'bar'
`**kwargs` are good if you don't know in advance the names of the parameters. For example, the `dict` constructor uses them to initialize the keys of the new dictionary.
dict(**kwargs) -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2)
In [3]: dict(one=1, two=2) Out[3]: {'one': 1, 'two': 2}
Here’s an example, I used in CGI Python. I created a class that took
**kwargs to the
__init__ function. That allowed me to emulate the DOM on the server-side with classes:
document = Document() document.add_stylesheet('style.css') document.append(Div(H1('Imagist\'s Page Title'), id = 'header')) document.append(Div(id='body'))
The only problem is that you can't do the following, because `class` is a Python keyword.
Div(class="foo")
The solution is to access the underlying dictionary.
Div(**{'class':'foo'})
I’m not saying that this is a “correct” usage of the feature. What I’m saying is that there are all kinds of unforseen ways in which features like this can be used.
And here’s another typical example:
MESSAGE = "Lo and behold! A message {message!r} came from {object_} with data {data!r}."

def proclaim(object_, message, data):
    print(MESSAGE.format(**locals()))
One example is implementing python-argument-binders, used like this:
>>> from functools import partial
>>> def f(a, b):
...     return a+b
>>> p = partial(f, 1, 2)
>>> p()
3
>>> p2 = partial(f, 1)
>>> p2(7)
8
This is from the functools.partial python docs: partial is ‘relatively equivalent’ to this impl:
def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        newkeywords = keywords.copy()
        newkeywords.update(fkeywords)
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc
A line symbol type, for rendering LineString and MultiLineString geometries. More...
#include <qgssymbol.h>
A line symbol type, for rendering LineString and MultiLineString geometries.
Definition at line 1192 of file qgssymbol.h.
Constructor for QgsLineSymbol, with the specified list of initial symbol layers.
Ownership of the layers are transferred to the symbol.
Definition at line 2010 of file qgssymbol.cpp.
Returns a deep copy of this symbol.
Ownership is transferred to the caller.
Definition at line 2252 of file qgssymbol.cpp.
Create a line symbol with one symbol layer: SimpleLine with specified properties.
This is a convenience method for easier creation of line symbols.
Definition at line 1524 of file qgssymbol.cpp.
Returns data defined width for whole symbol (including all symbol layers).
Definition at line 2128 of file qgssymbol.cpp.
Set data defined width for whole symbol (including all symbol layers).
Definition at line 2092 of file qgssymbol.cpp.
Sets the width for the whole line symbol.
Individual symbol layer sizes will be scaled to maintain their current relative size to the whole symbol size.
Definition at line 2017 of file qgssymbol.cpp.
Sets the width units for the whole symbol (including all symbol layers).
Definition at line 2044 of file qgssymbol.cpp.
Returns the estimated width for the whole symbol, which is the maximum width of all marker symbol layers in the symbol.
Definition at line 2057 of file qgssymbol.cpp.
Returns the symbol width, in painter units.
This is the maximum width of all marker symbol layers in the symbol.
This method returns an accurate width by calculating the actual rendered width of each symbol layer using the provided render context.
Definition at line 2077 of file qgssymbol.cpp.
DynamoDB doesn’t provide inbuilt capabilities to auto scale throughput based on Dynamic Load. It provides API to scale up or down throughput. But i am being charged hourly basis for provisioned read & write throughput.
What are the different ways to alter the throughput of dynamodb and achieve cost-saving benefits as well?
There are different tools using which you can achieve your use case. One such tool is Dynamic DynamoDB. Some key points that you should keep in mind when you are scaling DynamoDB are:
For more details, you can refer to this documentation by AWS.
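Whatever tool drives the scaling, the heart of such a controller is simple arithmetic over observed consumption metrics. A hedged sketch of the decision step (the function name, target utilization, and clamps are illustrative choices, not an AWS API):

```python
import math

def desired_capacity(consumed_rcu, target_utilization=0.7, min_cap=5, max_cap=1000):
    """Capacity to provision so observed consumption sits near the target
    utilization, clamped to a floor and ceiling to bound cost and churn."""
    wanted = math.ceil(consumed_rcu / target_utilization)
    return max(min_cap, min(max_cap, wanted))

print(desired_capacity(90))  # 129  (provision headroom above consumption)
print(desired_capacity(1))   # 5    (never scale below the floor)
```

A real controller would feed this from CloudWatch consumed-capacity metrics and apply the result via the UpdateTable API, respecting the scale-down limits mentioned above.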
Qt Quick Controls 2 Button on an Android phone with TalkBack activated, to fulfill requirements of WCAG 2.0 / WCAG2ICT
Expectations
- A button should call "onClicked" only on touch when TalkBack is off.
- A button should call "Accessible.onPressAction" only if TalkBack is on
Observation
Clicking on the button calls both functions.
If the code is for example a toggle logic like "variable = !variable" this could not work.
Code
import QtQuick 2.15
import QtQuick.Controls 2.15
import QtQuick.Window 2.15

Window {
    visible: true

    Button {
        text: "Button"
        onClicked: () => {
            console.log("onClicked");
        }
        Accessible.name: text
    }
}
Attachments
Issue Links
- relates to
QTBUG-93278 Android A11Y TalkBack: MouseArea click AND pressAction are called
- Closed
Step 3. Testing Your First Python Application
Remember, in the first tutorial you’ve created your first Python application, and in the second tutorial you’ve debugged it. Now it’s time to do some testing.
Choosing the test runner
If you used nosetest, py.test, or Twisted Trial before, you have to choose unittest. To learn how it's done, see Choosing Your Testing Framework.
Creating test
A quick way to create tests is to have PyCharm stub them out from the class we’d like to test. To do this, we need to open
Car.py, then right-click the editor background, point to , and then choose (or just press Ctrl+Shift+T):
A pop-up appears that suggests to create a new test:
OK, let’s do it. We are going to test whether our car is able to accelerate and brake, so let's select those checkboxes:
A new Python test class is created:
If we run these tests (on the context menu), we can see that they fail by default:
Now we know that we can run tests, let’s start writing some actual test code.
Writing test
How to write unit tests is out of scope for this article. If you’re interested in learning about using the `unittest` framework, you can check out their docs.
For our example let’s use these tests:
import unittest
from Car import Car


class TestCar(unittest.TestCase):
    def setUp(self):
        self.car = Car()


class TestInit(TestCar):
    def test_initial_speed(self):
        self.assertEqual(self.car.speed, 0)

    def test_initial_odometer(self):
        self.assertEqual(self.car.odometer, 0)

    def test_initial_time(self):
        self.assertEqual(self.car.time, 0)


class TestAccelerate(TestCar):
    def test_accelerate_from_zero(self):
        self.car.accelerate()
        self.assertEqual(self.car.speed, 5)

    def test_multiple_accelerates(self):
        for _ in range(3):
            self.car.accelerate()
        self.assertEqual(self.car.speed, 15)


class TestBrake(TestCar):
    def test_brake_once(self):
        self.car.accelerate()
        self.car.brake()
        self.assertEqual(self.car.speed, 0)

    def test_multiple_brakes(self):
        for _ in range(5):
            self.car.accelerate()
        for _ in range(3):
            self.car.brake()
        self.assertEqual(self.car.speed, 10)

    def test_should_not_allow_negative_speed(self):
        self.car.brake()
        self.assertEqual(self.car.speed, 0)

    def test_multiple_brakes_at_zero(self):
        for _ in range(3):
            self.car.brake()
        self.assertEqual(self.car.speed, 0)
def brake(self):
    if self.speed < 5:
        self.speed = 0
    else:
        self.speed -= 5
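The tutorial never shows Car.py itself. For following along, here is a minimal sketch consistent with the tests above (the tutorial's actual Car class presumably also updates odometer and time; this version implements only what the tests exercise):

```python
# Car.py -- minimal sketch sufficient for the tutorial's tests.
class Car:
    def __init__(self):
        self.speed = 0
        self.odometer = 0
        self.time = 0

    def accelerate(self):
        self.speed += 5

    def brake(self):
        # never let the speed drop below zero
        if self.speed < 5:
            self.speed = 0
        else:
            self.speed -= 5

car = Car()
car.accelerate()
car.accelerate()
car.brake()
print(car.speed)  # 5
```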
To do that, do two things:
- First, click the button on the Run toolbar.
- Second, open the menu, select the Set Autotest Delay command, and then choose the delay value.
Then, every time you enter changes in your project files (as it was done earlier), after the specified amount of time, the tests will run without any intervention from you. For example:
Summary
So, this brief tutorial is over. Let's repeat what you've done with the help of PyCharm:
- Selected the test runner.
- Created and modified the test code.
- Ran the test.
- Debugged the test.
- Ran it automatically.
I’m on vacation right now, enjoying skiing by day at the lovely town and resort of Breckenridge. I did give myself something to do over the vacation, though. Ever since I was convinced of the flux architecture, I’ve been using my own simple flux library. It’s basically using the Facebook dispatcher, but my own store implementation that is tied into component views using the componentWillMount and componentWillUnmount methods of the React API. It works and it is simple.
I’m not a fan of building a new wheel just because one can, though. If I am to maintain my own library, then it must offer something that no other library can. In the case of my flux library, that isn’t the case. It’s at the bottom of a pack of flux implementations. In addition, every single list of things that React programmers can learn is to use Redux. Redux is the flux implementation at the top of the list if you look at popularity. So I set myself a task of learning React Redux over my vacation.
My requirements are very simple – I have a single API that receives authentication information from the server. I need to initiate the request to the server and then handle the response. I have two actions in my application right now – one to do the request and one to handle the response. My code is just 214 lines of code, but it has lots of side effects. Let’s see how it goes.
With apologies to Dan Abramov – I’m sure he (and several others) will cringe as I go through this…
Step 1: Action Creators
An action is an object with a type field and potentially some associated data. Something like this:
{ type: 'AUTH-ANONYMOUS' }
{ type: 'AUTH-AUTHENTICATED', providerInfo: response }
{ type: 'AUTH-ERROR', error: error.message }
If you follow most standard flux implementations, they will tell you to create a function that dispatches the action. Redux doesn’t do that. Instead you create an Action Creator that returns an action. It isn’t dispatched (yet). This philosophy changes when we discuss async functions. However, let’s create some action creators. I’ve created a directory called redux where I am going to store all the implementation details. In there, I have an action.js that will hold my action creators:
/**
 * Redux Action Creator for handling anonymous response
 * @returns {Object} Redux Action
 */
function receiveAnonymousAuth() {
    return { type: 'AUTH-ANONYMOUS' };
}

/**
 * Redux Action Creator for handling authenticated response
 * @param {Array} response the response from the server
 * @returns {Object} Redux Action
 */
function receiveAuthenticatedAuth(response) {
    return { type: 'AUTH-AUTHENTICATED', providerInfo: response[0] };
}

/**
 * Redux Action Creator for handling error conditions
 * @param {Error} error the error that happened
 * @returns {Object} Redux Action
 */
function receiveErrorCondition(error) {
    return { type: 'AUTH-ERROR', error: error };
}
These are internal actions – I am not expecting my application UI to initiate these actions. I do not export these functions because they are internal. We’ll get onto the linkage eventually. Right now there is one action creator for each of my potential results – authenticated, anonymous and error. I’ve merged all three cases in my existing code which isn’t the best, so take the opportunity to refactor the code into something more maintainable as well.
Redux suggests that you have a big list of constants for the type. However, they are just strings – in smaller applications (like mine), you can do away with the constants and just specify the strings.
Step 1A: Async Action Creators
I have another action creator – the one that initiates the request. That one was a problem. To be honest, I don’t think Redux actually deals with async well. It should “just work” and it doesn’t. Here is my function:
import fetch from 'isomorphic-fetch';

// baseUrl is required for the fetch actions
let baseUrl = '';
if (window.GRUMPYWIZARDS && window.GRUMPYWIZARDS.base)
    baseUrl = window.GRUMPYWIZARDS.base.replace(/\/$/, '');

/**
 * Redux Action Creator for requesting authentication information
 * @returns {Function}
 */
// ...
There is a little code at the top to figure out where the API actually is. The majority of the code is a direct copy from my original store implementation. Instead of doing the store update right there, I dispatch another action (created by one of the action creators I wrote earlier) to handle the actual action.
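For readers new to this pattern, here is a minimal, self-contained sketch of such an async action creator (redux-thunk style). The fetch implementation is injected and stubbed so the sketch runs without a network; the function names and the endpoint path are assumptions for illustration, not the post's actual code:

```javascript
// Action creator with the same shape as the post's receiveAuthenticatedAuth.
function receiveAuthenticatedAuth(response) {
  return { type: 'AUTH-AUTHENTICATED', providerInfo: response[0] };
}

// Async action creator: instead of an action object it returns a function of
// `dispatch` (the redux-thunk convention). `fetchImpl` is injected so the
// sketch is testable anywhere.
function requestAuthInfo(fetchImpl) {
  return (dispatch) =>
    fetchImpl('/.auth/me').then((response) =>
      dispatch(receiveAuthenticatedAuth(response))
    );
}

// Exercise it with a stubbed fetch and a recording dispatch:
const fakeFetch = () => Promise.resolve([{ provider_name: 'aad' }]);
const dispatched = [];
const done = requestAuthInfo(fakeFetch)((action) => dispatched.push(action));
done.then(() => console.log(dispatched[0].type)); // AUTH-AUTHENTICATED
```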
Step 2: Reducers
The next element in the Redux implementation is a reducer. A reducer takes a state and an action and turns it into the new state. This is a relatively simple concept. The “reducer” terminology is from the Array.reduce() functionality within JavaScript, which is designed to provide an accumulator functionality – you start with an initial value and mutate it based on each value within the array. In the case of Redux, you start with the current state of the store and you mutate it based on the action. Except that you don’t mutate the state – you return a new copy of the state. So, I guess the analogy breaks down there.
One of the golden rules of Redux is this: Reducers must be free of side effects.
In other words, if you call the reducer with the same state and the same action, it will return the same state each time. You can’t put “call this other API” or “look in a database” or that sort of thing. Right now, I’ve got one reducer (called reducers.js) that has a case statement in it to handle each action.
/* eslint-disable no-case-declarations */
const initialState = {
    phase: 'pending',
    user: null,
    error: null
};

export default function authReducer(state, action) {
    if (typeof state === 'undefined') {
        state = initialState;
    }

    switch (action.type) {
    case 'AUTH-ANONYMOUS':
        return Object.assign({}, state, {
            phase: 'anonymous',
            user: null,
            error: null
        });
    case 'AUTH-AUTHENTICATED':
        let claims = action.providerInfo.reduce((target, claim) => {
            target[claim.typ] = claim.val;
            if (claim.typ.indexOf('') !== -1)
                target[claim.typ.slice(claim.typ.lastIndexOf('/') + 1)] = claim.val;
            return target;
        }, {});
        let user = {
            accessToken: action.providerInfo.access_token,
            claims: claims,
            firstname: claims.firstname || '',
            id: action.providerInfo.user_id,
            provider: action.providerInfo.provider_name,
            providerToken: action.providerInfo.authentication_token,
            surname: claims.surname || ''
        };
        return Object.assign({}, state, {
            phase: 'authenticated',
            user: user,
            error: null
        });
    case 'AUTH-ERROR':
        return Object.assign({}, state, {
            phase: 'error',
            user: null,
            error: action.error.message
        });
    default:
        return state;
    }
}
UPDATE Dan Abramov contacted me and suggested that putting the initialState with the reducer was “the right pattern”. We don’t want to be propagating anti-patterns. As a result, I have moved the initialState to the reducer after the initial publication of this blog post. You can also use an ES6 default argument to set the initialState.
Why is this requirement for no side effects so important? You do like to test your code, right? This requirement enables the testability of the code, and that is very important.
There are definitely other ways to organize your code once your store (and state requirements) grow, and you can read about them over on the Redux web site. However, this is good enough for now. We’ve got an action creator, a set of actions that describe the manipulations to application state we want to handle, including one action that won’t actually work (we’ll get onto that later, but take a wild guess as to which one!) and a reducer that will return the new state when it is fed an action.
Step 3: Creating the Store
The next question, of course, is how do we tie all this together? Well, that’s the job of the store. Note that this is a singular entity. In my implementation of flux, I would suggest that you have different stores to handle different types of data. If you are doing a blog, you might have a store for authentication, a store for blogs, a store for comments, and so on. In Redux, there is only one store.
Here is how I implemented the store given the reducers and actions I’ve already created:
import { createStore } from 'redux'; import reducer from './reducers'; let store = createStore(reducer); export default store;
The store is created with createStore(). The first argument is the reducer (or set of reducers if your application is more complex).
UPDATE: createStore() takes a second optional argument – the initialState. However, Dan Abramov contacted me and suggested that this was an anti-pattern in client applications. I’ve since moved the initialState to the reducer (above).
You can now use this store to subscribe to store changes and to dispatch actions. Dispatching actions that are not asynchronous is easy:
// store.dispatch(actionCreator(args)); // For example: store.dispatch(receiveAuthenticatedAuth(response));
You’ve actually already seen this in the actions.js file I showed earlier. However, that async method is going to take something. Here is how I started with it:
import { createStore } from 'redux'; import { requestAuthInfo } from './actions'; import reducer from './reducers'; const initialState = { phase: 'pending', user: null, error: null }; let store = createStore(reducer, initialState); // Dispatch the initial action let requestAction = requestAuthInfo(); requestAction(store.dispatch); export default store;
It doesn’t actually dispatch an action. I’ve got a problem with that, but I’ll come back to that later. On to our components:
Step 4: Update the Component Views
I’ve got one component view – the Chrome.jsx file. There are also a couple of calls you need to know – the main one being that you can subscribe to changes and then unsubscribe later on. Let’s take a look at my Chrome.jsx – at least the important parts. First off, the constructor:
constructor(props) { super(props); logger.entry('$constructor', props); this.state = { phase: 'pending', user: null, error: null, leftMenu: { isOpen: false } }; logger.debug('state = ', this.state); logger.exit('$constructor'); }
Note that the state includes all my store variables. This actually is fairly important at this point. It will become less so later on. Now, onto the component state lifecyle functions:
/** * React API: Called when the component is mounting itself in the DOM * * @returns {void} * @overrides React.Component#componentWillMount */ componentWillMount() { logger.entry('componentWillMount'); this.unsubscribe = store.subscribe(() => { return this.updateState(); }); logger.exit('componentWillMount'); } /** * React API: Called when the component is removed from the DOM * * @returns {void} * @overrides React.Component#componentWillUnmount */ componentWillUnmount() { logger.entry('componentWillUnmount'); this.unsubscribe(); logger.exit('componentWillUnmount'); } /** * Update the internal state of the component-view from the flux store */ updateState() { logger.entry('updateState'); this.setState(store.getState()); logger.debug('New State = ', this.state); logger.exit('updateState'); }
This is pretty much standard stuff for flux. You register your interest in the componentWillMount() method and then deregister in the componentWillUnmount() method. In the updateState() method, I merge the stores state with the components state which will then re-render the DOM.
That’s pretty much all there is to redux. Instead of a complicated function in a store, the store is completely separated and reducers take over the task of actually doing the state transitions. Otherwise, this is as vanilla Redux as you can get. Sure, it’s a little more modular, and smaller (141 lines of code instead of 214 for my store implementation), but…
The only real advantage is that I’m not writing the store myself.
However, there is much more to Redux than what we’ve done thus far. In the next post, I’m going to take a look at two of those things. Firstly, I am going to simplify my component view to get rid of a lot of the boiler-plate code for handling state updates. Secondly, I’m going to look at the role of middleware to handle async functions. Until then, check out the code in my GitHub Repository. | https://shellmonger.com/2016/02/16/an-introduction-to-react-redux-part-1/ | CC-MAIN-2017-51 | refinedweb | 1,921 | 58.08 |
Summary
In this article, the first of three parts, I compare the traditional approach to client-server interaction, using protocols and documents, with Jini's approach of using objects and interfaces. This first part looks at how objects and documents differ when servers interact with client programs that have no client-side user.
These days the dominant way that servers interact with people across the network is by sending HTML documents to Web browsers. Recently, XML has generated a lot of excitement among developers as an alternative document format that offers many advantages over HTML. Like HTML, XML enables servers to interact with people across the network via their Web browsers. But unlike HTML, XML also enables servers to easily interact with client software that has no user present.
In the Jini universe, in contrast to the document approach of both HTML and XML, servers interact with client programs by sending objects across the network. Like XML, Jini enables servers to interact with client programs regardless of whether a user is present at the client.
In this three-part series, I will compare and contrast two fundamental ways that servers can interact with clients: using documents and using objects. In this article, the first of three parts, I'll look primarily at how objects and documents compare when servers interact with client programs that have no user present.
Creating a Java news page
I recently wrote a Python script to generate a Java news page for my Website, Artima.com. I planned to get the news items from Moreover.com, which offered a free news feed devoted to Java. As a Webmaster, I had several options, all of which involved servers sending documents to clients.
Perhaps my most straightforward option was to insert a large, hairy chunk of JavaScript code, kindly provided by Moreover.com, into my page. Whenever a user visited my Java news page, the embedded JavaScript would land in his or her browser, contact Moreover.com, grab the most recent Java news data, and construct the news page on the fly. I discarded this option partly because I have found JavaScript to be unreliable (as a result, my site contains no JavaScript), but primarily because I didn't want the user to have to wait for the JavaScript to make a socket connection to Moreover.com in order to grab the data. One of my main goals for Artima.com is to have pages that load quickly, and every socket connection takes time.
Another option was to use a script that ran on the server. In that approach, the URL of my news page would actually refer to a script. When a user hit the URL, the Web server would run the script. The script would contact Moreover.com and obtain the news information in the same way the JavaScript would. Again, I discarded this option because I didn't want the client to have to wait for that socket connection to Moreover.com.
Ultimately, I decided to write a script that contacted Moreover.com, grabbed the most recent Java news data, generated my Java news page, and saved the page in a file. I planned to set up a cron job that automatically ran the script every hour, so that the file would be refreshed regularly. In this approach, the user wouldn't have to wait for a socket connection, because it would be made behind the scenes once every hour. Given that Moreover.com seemed to be updating the contents of its Java news feed at most once or twice a day, I decided that an hourly poll would yield a sufficiently fresh page for my Website.
Deciding upon a data format
Moreover.com offers its news feeds in several data formats, each available at a different URL. Thus, I next had to decide which data format my script should use for processing.
One data format that I did not choose, but which I'd like to mention here, is HTML. Among other data formats, Moreover.com offers an HTML Webpage full of the latest Java news. The trouble with this approach, of course, is that HTML pages are intended to be consumed by people, not programs. Although the information my Python script needs is contained in an HTML page, the page's markup tags make it difficult for programs like my script to acquire the information. Rather, HTML markup tends to focus on enabling a Web browser to render the information buried in a screen's markup, so that a human user can gaze upon the screen and pull the information into his or her brain.
In HTML, information intermingles freely with directions on presenting that information. For example, here's a snippet of HTML code from the HTML news page at Moreover.com:
<TR BGCOLOR="#ffffff"><TD><FONT FACE="Arial, Helvetica, sans-serif">
<A HREF=<FONT SIZE="-1" COLOR="#333333"
><B>Java, XML to survive Sun/Microsoft war...</B></FONT></A><BR>
<A HREF= TARGET=_blank>
<FONT SIZE="-2" COLOR="#ff6600">vnunet.com</FONT></A>
<FONT SIZE="-2" COLOR="#ff6600"> Wed Apr 12 09:34:25 GMT-0700 (Pacific Daylight Time) 2000</FONT>
</TD></TR><TR BGCOLOR="#ffffff"><TD BGCOLOR="#ffffff" HEIGHT="5"></TD></TR>
Aside from the trouble of parsing out the information from all this HTML markup, a far more insidious problem exists with the parsed-HTML approach. Given that HTML pages are intended to be rendered by browsers and read by people, Webmasters have no qualms about changing their pages in ways that browsers and people can deal with, but programs cannot. So even if I decided to parse the information out of the HTML, chances are good that eventually Moreover.com's Webmaster would make a change to its Webpages' structure that would break my script.
Looking at XML
The document-style format that looked most promising to me was Moreover.com's XML feed. XML was designed to enable just the kind of software parsing I wanted to do in my Python script. In an XML document, in contrast to one in HTML, information and presentation are cleanly separated. The information contained in the document is marked up in tags that, rather than describe how the information should be presented, hints at the semantic meaning of the information. For example, here's a snippet of XML code from the XML feed at Moreover.com:
<article id="_6547546">
<url></url>
<headline_text>Java, XML to survive Sun/Microsoft war</headline_text>
<source>vnunet.com</source>
<media_type>text</media_type>
<cluster>Java news</cluster>
<tagline> </tagline>
<document_url></document_url>
<harvest_time>Apr 12 2000 4:34PM</harvest_time>
<access_registration> </access_registration>
<access_status> </access_status>
</article>
Directions on how to present the information contained in the XML document's semantic tags can be defined separately, using a style markup language such as CSS or XSL. In the Moreover.com case, the XML document is intended to be consumed only by programs, not by people, so no style markup is provided. Nevertheless, the primary reason my Python script could parse the XML feed more easily than the HTML feed is that XML is designed to avoid HTML's intermingling of information and presentation.
Settling on tab-separated values
I liked the XML approach, but unfortunately I was unable to figure out quickly enough how to work with XML in Python. All I wanted to do was pass a chunk of XML to some library routine, get back a nice data structure corresponding to the XML document, and use it to effortlessly write out the news page. I was (and still am) on the Python learning curve, and as I was rooting around in the Python documentation looking for my desired library routine, I noticed that Moreover.com also offered a tab-separated value (TSV) feed. At that point I paused and said to myself, "Self, if you just use this TSV feed, then you can get this job done right now." For reasons of speed, therefore, I abandoned my search for the elusive XML-to-data-structure Python library routine and completed my script using the TSV feed.
Here's one line from the TSV feed at Moreover.com. (The single line is split into three lines with
\\ and tabs are replaced with
\t here, but not in the actual feed.)\t\\
Java, XML to survive Sun/Microsoft war\tvnunet.com\ttext\t\\
Java news\t \t\tApr 12 2000 4:34PM\t \t
XML, data models, and DTDs
The structure and tag names in Moreover.com's XML feed form a "data model" of a news feed. Moreover.com thought about what it meant to be a news feed. It identified and gave a name to each piece of information, gave each item the name "article," and decided that its XML document would be an ordered list of articles. (The TSV version also represents a minimalist expression of the same conceptual data model.)
XML lets you express your data model in a Data Type Definition (DTD). In fact, Moreover.com provides the DTD for its XML news-feed documents. The DTD looks like this:
<!ELEMENT moreovernews (article*)>
<!ELEMENT article (url,headline_text,source,media_type,cluster,tagline,document_url,harvest_time,
access_registration,access_status)>
<!ATTLIST article id ID #IMPLIED>
<!ELEMENT url (#PCDATA)>
<!ELEMENT headline_text (#PCDATA)>
<!ELEMENT source (#PCDATA)>
<!ELEMENT media_type (#PCDATA)>
<!ELEMENT cluster (#PCDATA)>
<!ELEMENT tagline (#PCDATA)>
<!ELEMENT document_url (#PCDATA)>
<!ELEMENT harvest_time (#PCDATA)>
<!ELEMENT access_registration (#PCDATA)>
<!ELEMENT access_status (#PCDATA)>
I won't go into the details of the DTD syntax, but basically, Moreover.com's DTD says that each of its news-feed documents (named "moreovernews") are composed of a set of zero or more "articles." Each article is composed of several pieces of information, including a "url," a "headline_text," and so on. In short, an XML DTD is a written definition of the abstract data model to which an XML document adheres.
Data models and network protocols
Lurking behind all the communication approaches between Moreover.com and Artima.com is an important assumption: that the client will fetch the document via the HTTP's GET command. In fact, perhaps a better way to look at Moreover.com's document formats is as a part of several high-level protocols that define the interaction between Moreover.com's clients (such as Artima.com) and its server. The combination of a news category URL, the low-level HTTP GET protocol, and Moreover.com's XML DTD, for example, combine to form a high-level network protocol, which can be summarized as follows:
A fetch protocol
An alternative protocol
The Python script currently executing at Artima.com plays the client role in a protocol that corresponds closely to the Fetch protocol. The difference is that my Python script fetches a TSV, not an XML, document. The TSV format does not come with an official DTD, but conceptually its structure corresponds to the same data model described by the XML DTD.
Now although my Java news page seems to be working fine, the truth is, I'd prefer that Moreover.com notify my Website whenever it changed the contents of its Java news feed. That way I would need to rewrite my Java news page only when its contents actually change. Since I would be notified of changes rather than polling hourly, my news page would be updated more promptly whenever new news appeared.
If Moreover.com is ever to offer such a notification-based approach, it will have to define a protocol that implements one. Given that the server will be "pushing" a notification down to the client, rather than relying on the client to "pull" the latest news from the server, the client will probably have to have some kind of server running. For Moreover.com to know where those client-side servers are, and what categories of news each client-side server wants, a protocol that lets clients subscribe to the notification service will be necessary (in addition to the notification protocol itself). Here are outlines of a subscription protocol and a notification protocol, in which I call the client-side server a "listening" server:
A subscription protocol
A notification protocol
This is a quick first sketch of news-feed subscription and notification protocols. In an actual protocol design project, the details of the DTDs would need to be specified. In addition, many other issues, including what should happen if a listening server disappears from the network without canceling its subscription, should also be considered.
Java news-object style
So far, I've shown that the traditional way of defining client-server interaction across a network is to define protocols, and that when documents are sent across the network, the structure of those documents is really part of a protocol. I demonstrated several protocols that a client at Artima.com could conceivably use to interact with a server at Moreover.com to create an automatically refreshed page of Java news.
Now I'd like you to consider a different approach to the news-feed business. What if, instead of working exclusively with documents and protocols, Moreover.com had also offered an option that raised the level of discourse to objects and interfaces? As a thought experiment, imagine that Moreover.com could send a Jini service across the Internet to Artima.com, and that it also offered a Jini version of its news feed. What might the interface of the Jini service look like? In the next few sections, I'll show some classes and interfaces that form a news-feed API.
A NewsFeed interface
Since some clients may prefer to poll and others may prefer to be notified, perhaps a news-feed API should provide an object whose interface lets clients do both. This functionality is represented in the following interface:
package com.artima.news;
import java.rmi.RemoteException;
import java.rmi.MarshalledObject;
import java.rmi.Remote;
import net.jini.core.event.RemoteEventListener;
import net.jini.core.event.EventRegistration;
import java.io.Serializable;
/**
* Interface implemented by Jini news-feed service object. This interface allows
* clients to register (via the addNewsListener() method) to receive
* NewsFeedEvents, which are propagated whenever the news-feed contents
* change. Alternatively, or in addition, clients can poll the news-feed service at
* any time via the getNews() method.
*/
public interface NewsFeed extends Serializable, Remote {
/**
* Registers a remote event listener as interested in receiving
* NewsFeedEvents for the passed news category. To stop receiving events,
* clients can simply cancel lease returned as part of the EventRegistration.
*/
EventRegistration addNewsListener(RemoteEventListener rel,
int newsCategory, MarshalledObject handback) throws RemoteException;
/**
* Returns an array of news items, ordered from most recent (at array index 0) to
* the oldest (at index array length -1), for the passed news category.
*/
NewsItem[] getNews(int newsCategory) throws RemoteException;
}
The
addNewsListener() method of the
NewsFeed interface lets a client register interest in a breaking news event via the Jini distributed event model. The
getNews() method lets a client poll the news feed for the latest news.
Convenient constants in the NewsCategories interface
When you register a listener via
addNewsListener(), or request the current news via
getNews(), you must provide an
int value that indicates the category of news feed you want. (My Website, for example, is interested exclusively in the Java news category.) For convenience, you could collect the
int values for the categories in an interface that is also included in the news-feed API:
package com.artima.news;
/**
* A collection of constants that represent categories of
* news feeds.
*/
public interface NewsCategories {
int JAVA_NEWS = 99772;
// Other news categories would receive logical numbers
// in here as well....
}
A NewsFeedEvent class
A listener registered with the news service via
addNewsListener() will be notified of changes via the following distributed event:
package com.artima.news;
import net.jini.core.event.RemoteEvent;
import java.rmi.MarshalledObject;
/**
* Remote event that represents a change in a news feed.
* Events that would cause this event object to be propagated
* include:
* <UL>
* <LI>One or more news items have been added to a feed.
* <LI>One or more news items have been deleted from a feed.
* <LI>One or more news items currently part of a feed have been changed.
* </UL>
*/
public class NewsFeedEvent extends RemoteEvent {
private NewsItem[] news;
/**
* Constructs a NewsFeedEvent object.
*/
public NewsFeedEvent(Object source, long eventID, long seqNum,
MarshalledObject handback, NewsItem[] news) {
super(source, eventID, seqNum, handback);
this.news = news;
}
/**
* Returns an array of news items, ordered from most recent (at array index 0) to
* the oldest (at index array length -1).
*/
public NewsItem[] getNews() {
return news;
}
}
A NewsItem class
Whether a client polls a news feed via the
getNews() method or receives a
NewsFeedEvent, the client extracts the actual list of news items from an array of
NewsItem objects. Here's the
NewsItem class:
package com.artima.news;
import java.io.Serializable;
import java.util.Date;
/**
* NewsItem encapsulates one item of news.
*/
public class NewsItem implements Serializable {
private long articleID;
private String url;
private String headlineText;
private String source;
private String mediaType;
private String cluster;
private String tagline;
private String documentURL;
private Date harvestTime;
private String accessRegistration;
private String accessStatus;
/**
* Constructs a NewsItem object.
*/
public NewsItem(long articleID, String url, String headlineText,
String source, String mediaType, String cluster, String tagline,
String documentURL, Date harvestTime, String accessRegistration,
String accessStatus) {
this.articleID = articleID;
this.url = url;
this.headlineText = headlineText;
this.source = source;
this.mediaType = mediaType;
this.cluster = cluster;
this.tagline = tagline;
this.documentURL = documentURL;
this.harvestTime = harvestTime;
this.accessRegistration = accessRegistration;
this.accessStatus = accessStatus;
}
/**
* Returns the article ID of the news item.
*/
public long getArticleID() {
return articleID;
}
/**
* Returns the URL of the news item.
*/
public String getURL() {
return url;
}
/**
* Returns the headline text of the news item for
* the current locale.
*/
public String getHeadlineText() {
return headlineText;
}
/**
* Returns the source of the news item for
* the current locale.
*/
public String getSource() {
return source;
}
/**
* Returns the media type of the news item.
*/
public String getMediaType() {
return mediaType;
}
/**
* Returns the cluster of the news item for the
* current locale.
*/
public String getCluster() {
return cluster;
}
/**
* Returns the tag line of the news item for
* the current locale.
*/
public String getTagline() {
return tagline;
}
/**
* Returns the document URL of the news item.
*/
public String getDocumentURL() {
return documentURL;
}
/**
* Returns the harvest time of the news item.
*/
public Date getHarvestTime() {
return harvestTime;
}
/**
* Returns the access registration of the news item.
*/
public String getAccessRegistration() {
return accessRegistration;
}
/**
* Returns the access status of the news item.
*/
public String getAccessStatus() {
return accessStatus;
}
}
A Jini news-feed service
Those classes and interfaces are a rough sketch of what a news-feed API might look like. If this were an actual API design project, many more design iterations would be in order, in conjunction with one or more peer design reviews.
Some questions I might ask at a design review of this API are:
BreakingNewsevents be offered in two separate interfaces that are both extended by
JavaNews?
NewsItemreturn strings for a specific locale?
NewsItemsarray, should a class be invented to hold this array and an
intnews category?
What's the difference really?
In the subsequent two articles in this series, I'll delve into the advantages and disadvantages of objects versus documents. At this point, however, I'd like to try and identify the crux of the difference between the two approaches.
In short, a document is a bundle of information; an object is a bundle of services. Each instance method in an object's public interface offers a service to the outside world. By invoking a method on an object, you are asking the object to do something for you -- to provide a service for you. In Jini, the entire object is called a service, because that's what it represents to the client. Each object offers a bundle of methods that individually provide low-level services and that in combination provide a high-level service. For example, in the news-feed API shown earlier in this article, the low-level services
addNewsListener() and
getNews() combine to form a higher-level news-feed service, offered by any object that implements the
NewsFeed interface.
You can ask an object to perform a service for you by invoking one of its methods. The object will either perform the requested service or throw an exception back at you indicating why it couldn't perform the service. By contrast, you can do things with or to a document, but you can't ask it to do something for you. Well, I suppose you could ask, but the document would just lie there and your coworkers would wonder why you were talking to it.
Deconstructing objects
An object can perform services for clients because objects embody behavior. An object usually has state, defined by the values of its instance variables, and behavior, defined by the code of its instance methods. An object's state is data, like the data contained in a document. But in general, an object uses its state to decide how to behave when its methods are invoked. The key difference between a network-mobile object and a network-mobile document, therefore, is that when you send a document across a network, you're sending data, but when you send an object across the network, you're sending data plus code -- a combination that yields behavior.
To send a Java object across the network, you can simply serialize the object to get a stream of bytes that encode the object's state. You can then send its state across the network by sending those bytes. To send the code, you can send the class files that define the object's class, perhaps embedded in one or more jar files.
But wait a minute, isn't a class file just ones and zeros that adhere to a particular data format? Isn't a class file itself just data? In truth, when you send an object across the network, you're sending state (which is data) and code (which is also data). Thus, an object is made up of data that adheres to certain formats, just as any document is made up of data that adheres to certain formats. An object is a kind of document. So where does the crux of the difference between objects and documents really lie?
A generic model of computation
I believe the answer to the previous question can help illuminate what Java technology is really all about. XML lets you model concepts and express those models in DTDs. You could consider the news-feed DTD given earlier in this article as representing a model of the concept called a "news feed."
You can find far more complicated, already existing XML DTDs for many other conceptual models, such as chemistry, mathematics, and music. What James Gosling did -- in my mind it's the primary innovation of Java technology in its original form -- was create a conceptual model of computation itself.
Of course, conceptual models of computation can come in many forms. You could call many different kinds of data "code." Is it not JavaScript code that sits in a Webpage? Could you not consider HTML itself as code that is understood and executed by a Web browser? If so, then why is Java different or special?
I believe Java is important for two reasons: First, Java is very object oriented. In Java, the object is the unit in which behavior is sent across a network. Programmers that use Java to send behavior across a network, therefore, enjoy the benefits of object-oriented programming. Second, Java's abstract model of computation is as generic as it can be in the context of untrusted code. HTML and JavaScript code, to a great extent, assume that they will be executed in the context of a Webpage. Java, by contrast, assumes only that generic computation will occur, directed by code that is potentially untrusted.
To understand any document sent by a server, a client has to have code written by a programmer who understood (had prior knowledge of) the data model used by that document and the model's semantics. Likewise, to use a network-mobile Java object, which travels across the network as serialized state and class files, a client needs code that was written by programmers who understood Java's object-oriented model of computation. The code needed by the client is called the Java virtual machine (JVM).
To take advantage of Java's conceptual model of computation, therefore, you must have a JVM. In fact, the JVM specification is Java's abstract model of computation. My sense is that the primary purpose of the JVM is to serve as a landing pad for network-mobile objects. It lets you fire tiny bullets of behavior across the network and have them understood and used by the recipient. As Bill Joy said at the first Jini Community Summit, "We built the JVM to let objects move around."
Conclusion
In this article I've tried to accomplish two things. First, I pointed out that network-mobile objects, such as Jini's service object, offer an alternative way to deliver services across a network -- a way distinctly different from the traditional documents and protocols approach. Second, I showed what the crux of the difference between the two approaches actually is: the abstract model of computation embodied in the Java virtual machine. In the subsequent two articles in this series, the first of which will appear in July, I'll discuss the advantages and disadvantages of objects versus documents for delivering services across the network to programs and people. in JavaWorld, a division of Web Publishing, Inc., June 2000. | http://www.artima.com/jini/jiniology/objdoc1P.html | CC-MAIN-2016-44 | refinedweb | 4,242 | 55.34 |
WAP
kXML
kXML is a lean Common XML API with namespace and WAP support that is intended to fit into the Java KVM for limited devices like the Palm Pilot.
104 weekly downloads
Apache Mobile Filter
The fastest and easiest way to detect mobile devices.
63 weekly downloads
51Degrees.mobi-PHP
Mobile Device Detection for PHP - 4 Step Setup - 3 Minutes
49 weekly downloads
Linux driver for CPiA webcams
We provide Linux drivers for webcams based on the popular Vision VLSI CPiA chipset, including the Creative WebCam II. Both parport and USB are supported.
48 weekly downloads
PHP Weather
PHP Weather makes it easy to show the current weather on your webpage. All you need is a local airport that makes some special weather reports called METARs. The reports are updated once or twice an hour.
26 weekly downloads
51Degrees.mobi-Java
Mobile Device Detection for Java - 4 Step Setup - 3 minutes10 weekly downloads
ICT Printing Press
ICT Printing Press is product for offset Printing Press. It simply takes few related parameters of customer's order as input and present total estimation of project, most economical job size, machine, paper stock cutting plan for sheet or reel.12 weekly downloads
Wap Search engine
WAP-based search engine written in Perl. With this script you can add a search engine to your wap site10
jMmsLib
Java library for encoding/decoding MMS messages. Also provides a simlpe client for sending MMS through a WAP gateway.7 weekly downloads
python-mms
Python Multimedia Messaging Service (MMS) library The python-mms library provides an "mms" module for Python, consisting of classes that facilitate the creation, manipulation and encoding/decoding of MMS messages used in mobile phones.6.6 weekly downloads
struts-wml
WML taglib for WAP enabled devices. Based on struts-html 1.1b2 and 1.1b3.6 weekly downloads
51Degrees.mobi-C
Mobile Device Detection for C and C++7 weekly downloads
Native webmail for DBmail
DBmail-webmail is a PHP webmail application that uses PHP's mysql functions for access to the DBmail database, no IMAP or POP is used.5 weekly downloads
Tera-WURFL Enhanced PHP WURFL Library
Tera-WURFL can identify the capabilities of mobile devices using PHP, a MySQL database backend and the standardized Wireless Universal Resource File (WURFL).5 weekly downloads
romanHunter
ROuter MAN HUNTER detects wireless attackers and captures their MAC4
Wapmess
WAP-mess - ICQ (Instant Messenger) wap gateway that enables icq on ANY wap-phone.2 weekly downloads
Alembik
Media Transcoding Server Alembik is a Java (J2EE) application providing transcoding services for variety of clients. It is fully compliant with OMA's Standard Transcoder Interface specification and is distributed under the LGPL open source license.3 weekly downloads | http://sourceforge.net/directory/internet/wap/os:mac/ | CC-MAIN-2014-35 | refinedweb | 456 | 54.42 |
Authors: David Maddison, jboner
This tutorial does not explain AOP, so if you're new to the idea of AOP then please check out JavaWorld's series of articles to get you started.
What this tutorial will do is walk you through a simple example of how you can write, define and weave an aspect into your application.
Download the latest release and unzip it into the relevant location. This tutorial is based on the 2.0 version of AspectWerkz but works equally with 1.0 final.
The latest distribution can be found here.
After installation you need to set the ASPECTWERKZ_HOME environment variable to point to the installation directory. This is because quite a few of the scripts use it to find the required libraries. How this variable is set depends on your OS. Since I'm using Linux I've amended my .bashrc file; Windows users could do this through the control panel.
Now we've installed aspectwerkz, we need a test application into which t= o weave our aspects. As is the tradition, I'm going to use the standard Hel= loWorld application.=20
package testAOP;

public class HelloWorld {
    public static void main(String args[]) {
        HelloWorld world = new HelloWorld();
        world.greet();
    }

    public void greet() {
        System.out.println("Hello World!");
    }
}
This is simply a standard Java application, and can be compiled with javac.
Next we need to develop the aspect which will contain the code to be wea= ved into our HelloWorld class. In this example I'm going to output a statem= ent before and after the greet method is called.=20
package testAOP;

import org.codehaus.aspectwerkz.joinpoint.JoinPoint;

public class MyAspect {
    public void beforeGreeting(JoinPoint joinPoint) {
        System.out.println("before greeting...");
    }

    public void afterGreeting(JoinPoint joinPoint) {
        System.out.println("after greeting...");
    }
}
Notice the signature of the aspect methods. They need to take this JoinPoint argument, otherwise the AspectWerkz weaver won't be able to identify the method when the aspect is weaved in (and can leave you scratching your head as to why the weaving isn't working!).
(Note: for 2.0, specific optimizations can be applied by using the StaticJoinPoint interface or no interface at all. Please refer to the AspectWerkz 2.0 documentation.)
To compile this aspect class you'll need to include the AspectWerkz jar in the classpath, i.e.
javac -d target -classpath $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar MyAspect.java
For AspectWerkz 1.0 final:
javac -d target -classpath $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar MyAspect.java
At this point we have the test application and the actual aspect code, but we still need to tell AspectWerkz where to insert the aspect methods (the advice) into the application.
Specifying pointcuts and advice can be done using either (or a mixture) of the following methods.
The XML definition file is just that, an XML file which specifies the pointcuts and advice using XML syntax. Here's one that will weave our MyAspect class into our HelloWorld program (aop.xml):
<aspectwerkz>
    <system id="AspectWerkzExample">
        <package name="testAOP">
            <aspect class="MyAspect">
                <pointcut name="greetMethod" expression="execution(* testAOP.HelloWorld.greet(..))"/>
                <advice name="beforeGreeting" type="before" bind-to="greetMethod"/>
                <advice name="afterGreeting" type="after" bind-to="greetMethod"/>
            </aspect>
        </package>
    </system>
</aspectwerkz>
Most of this should be pretty straightforward, the main part being the aspect tag. Whilst I'm not going to explain every bit of this definition file (I'll leave that up to the official documentation), I will explain a few important points.
When specifying the pointcut, the name can be any label you like; it's only used to bind the advice. The expression should be any valid expression according to the join point selection pattern language, however you MUST make sure that the full package+class name is included in the pattern. If this isn't done, or if the pattern is slightly wrong, AspectWerkz won't be able to correctly identify the greet method.
In the advice tag, the name attribute should be the name of the method in the aspect class (specified in the aspect tag) which you wish to insert at the specific joinpoint. Type is set to before, after, or around, depending on where exactly you wish to insert the method in relation to the joinpoint. bind-to specifies the name of the pointcut to which this advice will be bound.
This example identifies the HelloWorld.greet() method and assigns it the pointcut label greetMethod. It then inserts the MyAspect.beforeGreeting method just before greet is called, and MyAspect.afterGreeting just after the greet method returns.
Annotations provide a way to add metadata to the actual aspect class, rather than specifying it in a separate definition file. Aspect annotations are defined using JavaDoc-style comments, a complete list of which is available here. Using annotations, our aspect class would look as follows:
package testAOP;

import org.codehaus.aspectwerkz.joinpoint.JoinPoint;

public class MyAspectWithAnnotations {
    /**
     * @Before execution(* testAOP.HelloWorld.greet(..))
     */
    public void beforeGreeting(JoinPoint joinPoint) {
        System.out.println("before greeting...");
    }

    /**
     * @After execution(* testAOP.HelloWorld.greet(..))
     */
    public void afterGreeting(JoinPoint joinPoint) {
        System.out.println("after greeting...");
    }
}
After adding annotations you need to run a special AspectWerkz tool. This is done after compiling your aspect class files (i.e. after running javac). The AnnotationC compiler can be invoked as follows, passing in the source directory (.) and the class directory (target):
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar org.codehaus.aspectwerkz.annotation.AnnotationC . target
For AspectWerkz 1.0 final:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar org.codehaus.aspectwerkz.annotation.AnnotationC . target
More information on the AnnotationC compiler can be found here.
Although using annotations means you don't have to write all aspect details in XML, you do still have to create a tiny XML 'stub' which tells the AspectWerkz runtime system which Java classes it should load and treat as aspects. An example of this is shown below:
<aspectwerkz>
    <system id="AspectWerkzExample">
        <aspect class="testAOP.MyAspectWithAnnotations"/>
    </system>
</aspectwerkz>
There are basically two ways to actually weave the code together: one, called online weaving, performs the weaving as the classes are loaded into the JVM; the other, offline weaving, is done before the code is actually run.
When using online weaving you need to decide which JVM you're going to use. This is because the hook which allows AspectWerkz to weave the classes together on the fly is different in Sun HotSpot (where JDI/HotSwap is used) as opposed to BEA JRockit (where a PreProcessor is used). The default is set up to use Sun JDK 1.4.2; however, if you want to use JRockit, simply edit the bin/aspectwerkz file.
Using JRockit is the preferred choice since it will not only perform much better (no need to run in debug mode, which using HotSwap, e.g. Sun and IBM, requires) and be more stable, but will also work on JDK 1.3, 1.4 and 1.5.
Performing the weaving is then just a matter of using the aspectwerkz command line tool to run java with the relevant classes, pointing it to the definition file (even if using annotations you still need the 'stub' definition file), i.e.
$ASPECTWERKZ_HOME/bin/aspectwerkz -Daspectwerkz.definition.file=aop.xml -cp target testAOP.HelloWorld
This produces the expected output:
before greeting...
Hello World!
after greeting...
With offline weaving, the test application's classes are modified on disk with the aspect calls. That is to say, offline weaving amends your actual class definition (as opposed to online weaving, which doesn't modify any classes). To perform offline weaving, you use the aspectwerkz command line tool with the -offline option, as follows:
$ASPECTWERKZ_HOME/bin/aspectwerkz -offline aop.xml -cp target target
The last option on the command (target) tells AspectWerkz where your class files are. It is very important that you type it correctly, or else nothing will get woven into your target classes and you will wonder why nothing is happening.
Running the aspect is then just a matter of invoking your main class, although you still need some of the AspectWerkz jars on your classpath, and you still need to provide an XML definition file:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-2.0.RC1.jar:target -Daspectwerkz.definition.file=aop.xml testAOP.HelloWorld
For AspectWerkz 1.0 final:
java -cp $ASPECTWERKZ_HOME/lib/aspectwerkz-1.0.jar:target -Daspectwerkz.definition.file=aop.xml testAOP.HelloWorld
Note: Windows users need to replace the ":" path separator with a ";".
This produces the expected output:
before greeting...
Hello World!
after greeting...
Now we have learned how to:
Want more?
Then read the next tutorial, Hijacking Hello World, or the online documentation.
Want to use AOP in your application server?
Then start by reading this dev2dev article on how to enable AOP in WebLogic Server (the concepts are generic and work for any application server).
This tutorial is based on a tutorial written by David Maddison (with modifications and enhancements by jboner).
Device and Network Interfaces
zfs - ZFS file system
#include <sys/libzfs.h>
ZFS is the default root file system in the Oracle Solaris release. ZFS is a disk based file system with the following features:
Uses a pooled storage model where whole disks can be added to the pool so that all file systems use storage space from the pool.
A ZFS file system is not tied to a specific disk slice or volume, so previous tasks, such as repartitioning a disk or unmounting a file system to add disk space, are unnecessary.
ZFS administration is simple and easy with two basic commands: zpool(1M) to manage storage pools and zfs(1M) to manage file systems. No need exists to learn complex volume management interfaces.
All file system operations are copy-on-write transactions so the on-disk state is always valid. Every block is checksummed to prevent silent data corruption. In a replicated RAID-Z or mirrored configuration, ZFS detects corrupted data and uses another copy to repair it.
A disk scrubbing feature reads all data to detect latent errors while the errors are still correctable. A scrub traverses the entire storage pool to read every data block, validates the data against its 256-bit checksum, and repairs the data, if necessary.
ZFS is a 128-bit file system, which means support for 64-bit file offsets, unlimited links, directory entries, and so on.
ZFS provides snapshots, a read-only point-in-time copy of a file system and cloning, which provides a writable copy of a snapshot.
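Concretely, snapshots and clones map onto two zfs subcommands; the sketch below assumes the tank/fs1 file system created later in this page, and the snapshot and clone names are illustrative:

```shell
# take a read-only, point-in-time snapshot of the file system
zfs snapshot tank/fs1@monday

# create a writable clone based on that snapshot
zfs clone tank/fs1@monday tank/fs1-work

# snapshots can later be listed (and rolled back with 'zfs rollback')
zfs list -t snapshot
```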
A ZFS storage pool and ZFS file system are created in two steps:
# zpool create tank mirror c1t0d0 c1t1d0 # zfs create tank/fs1
A ZFS file system is mounted automatically when created and when the system is rebooted by an SMF service. No need exists to edit the /etc/vfstab file manually. If you need to mount a ZFS file system manually, use syntax similar to the following:
# zfs mount tank/fs1
For more information about managing ZFS file systems, see the Oracle Solaris Administration: ZFS File Systems.
See attributes(5) for a description of the following attributes:
du(1), df(1M), zpool(1M), zfs(1M), attributes(5)
Oracle Solaris Administration: ZFS File Systems
ZFS does not have an fsck-like repair feature because the data is always consistent on disk. ZFS provides a pool scrubbing operation that can find and repair bad data. In addition, because hardware can fail, ZFS pool recovery features are also available.
Use the zpool list and zfs list commands to identify ZFS space consumption. A limitation of using the du(1) command to determine ZFS file system sizes is that it also reports ZFS metadata space consumption. The df(1M) command does not account for space that is consumed by ZFS snapshots, clones, or quotas.
A ZFS storage pool that is not used for booting should be created by using whole disks. When a ZFS storage pool is created by using whole disks, an EFI label is applied to the pool's disks. Due to a long-standing boot limitation, a ZFS root pool must be created with disks that contain a valid SMI (VTOC) label and a disk slice, usually slice 0.
Diving into OpenStack Network Architecture - Part 3 - Routing
By Ronen Kofman on Jun 18, 2014
In the previous posts we have seen the basic components of OpenStack networking and then described three simple use cases that explain how network connectivity is achieved. In this short post we will continue to explore networking setup through looking at a more sophisticated (but still pretty basic) use case of routing between two isolated networks. Routing uses the same basic components to achieve inter subnet connectivity and uses another namespace to create an isolated container to allow forwarding from one subnet to another.
Just to remind what we said in the first post, this is just an example using out of the box OVS plugin. This is only one of the options to use networking in OpenStack and there are many plugins that use different means.
Use case #4: Routing traffic between two isolated networks
In a real world deployment we would like to create different networks for different purposes. We would also like to be able to connect those networks as needed. Since those two networks have different IP ranges we need a router to connect them. To explore this setup we will first create an additional network called net2 we will use 20.20.20.0/24 as its subnet. After creating the network we will launch an instance of Oracle Linux and connect it to net2. This is how this looks in the network topology tab from the OpenStack GUI:
If we further explore what happened we can see that another namespace has appeared on the network node, this namespace will be serving the newly created network. Now we have two namespaces, one for each network:
# ip netns list
qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c
To associate the network with the ID we can use net-list or simply look into the UI network information:
# nova net-list
+--------------------------------------+-------+------+
| ID | Label | CIDR |
+--------------------------------------+-------+------+
| 5f833617-6179-4797-b7c0-7d420d84040c | net1 | None |
| 63b7fcf2-e921-4011-8da9-5fc2444b42dd | net2 | None |
+--------------------------------------+-------+------+
Our newly created network, net2 has its own namespace separate from net1. When we look into the namespace we see that it has two interfaces, a local and an interface with an IP which will also serve DHCP requests:
# ip netns exec qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
19: tap16630347-45: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:bd:94:42 brd ff:ff:ff:ff:ff:ff
inet 20.20.20.3/24 brd 20.20.20.255 scope global tap16630347-45
inet6 fe80::f816:3eff:febd:9442/64 scope link
valid_lft forever preferred_lft forever
Those two networks, net1 and net2, are not connected at this time; to connect them we need to add a router and connect both networks to the router. OpenStack Neutron provides users with the capability to create a router to connect two or more networks. This router will simply be an additional namespace.
Creating a router with Neutron can be done from the GUI or from command line:
# neutron router-create my-router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | fce64ebe-47f0-4846-b3af-9cf764f1ff11 |
| name | my-router |
| status | ACTIVE |
| tenant_id | 9796e5145ee546508939cd49ad59d51f |
+-----------------------+--------------------------------------+
We now connect the router to the two networks:
Checking which subnets are available:
# neutron subnet-list
+--------------------------------------+------+---------------+------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+---------------+------------------------------------------------+
| 2d7a0a58-0674-439a-ad23-d6471aaae9bc | | 10.10.10.0/24 | {"start": "10.10.10.2", "end": "10.10.10.254"} |
| 4a176b4e-a9b2-4bd8-a2e3-2dbe1aeaf890 | | 20.20.20.0/24 | {"start": "20.20.20.2", "end": "20.20.20.254"} |
+--------------------------------------+------+---------------+------------------------------------------------+
Adding the 10.10.10.0/24 subnet to the router:
# neutron router-interface-add fce64ebe-47f0-4846-b3af-9cf764f1ff11 subnet=2d7a0a58-0674-439a-ad23-d6471aaae9bc
Added interface 0b7b0b40-f952-41dd-ad74-2c15a063243a to router fce64ebe-47f0-4846-b3af-9cf764f1ff11.
Adding the 20.20.20.0/24 subnet to the router:
# neutron router-interface-add fce64ebe-47f0-4846-b3af-9cf764f1ff11 subnet=4a176b4e-a9b2-4bd8-a2e3-2dbe1aeaf890
Added interface dc290da0-0aa4-4d96-9085-1f894cf5b160 to router fce64ebe-47f0-4846-b3af-9cf764f1ff11.
At this stage we can look at the network topology view and see that the two networks are connected to the router:
We can also see that the interfaces connected to the router are the interfaces we have defined as gateways for the subnets.
We can also see that another namespace was created for the router:
# ip netns list
qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11
qdhcp-63b7fcf2-e921-4011-8da9-5fc2444b42dd
qdhcp-5f833617-6179-4797-b7c0-7d420d84040c
When looking into the namespace we see the following:
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11
20: qr-0b7b0b40-f9: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:82:47:a6 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 brd 10.10.10.255 scope global qr-0b7b0b40-f9
inet6 fe80::f816:3eff:fe82:47a6/64 scope link
valid_lft forever preferred_lft forever
21: qr-dc290da0-0a: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:c7:7c:9c brd ff:ff:ff:ff:ff:ff
inet 20.20.20.1/24 brd 20.20.20.255 scope global qr-dc290da0-0a
inet6 fe80::f816:3eff:fec7:7c9c/64 scope link
valid_lft forever preferred_lft forever
We see the two interfaces, "qr-dc290da0-0a" and "qr-0b7b0b40-f9". Those interfaces are using the IP addresses which were defined as gateways when we created the networks and subnets. Those interfaces are connected to OVS:
# ovs-vsctl show
8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1
Bridge "br-eth2"
Port "br-eth2"
Interface "br-eth2"
type: internal
Port "eth2"
Interface "eth2"
Port "phy-br-eth2"
Interface "phy-br-eth2"
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-int
Port "int-br-eth2"
Interface "int-br-eth2"
Port "qr-dc290da0-0a"
tag: 2
Interface "qr-dc290da0-0a"
type: internal
Port "tap26c9b807-7c"
tag: 1
Interface "tap26c9b807-7c"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap16630347-45"
tag: 2
Interface "tap16630347-45"
type: internal
Port "qr-0b7b0b40-f9"
tag: 1
Interface "qr-0b7b0b40-f9"
type: internal
ovs_version: "1.11.0"
As we see, those interfaces are connected to "br-int" and tagged with the VLAN corresponding to their respective networks. At this point we should be able to successfully ping the router namespace using the gateway address (20.20.20.1 in this case).
We can also see that the VM with IP 20.20.20.2 can ping the VM with IP 10.10.10.2. This is how the routing actually gets done:
The two subnets are connected to the router namespace, each through an interface inside the namespace. Inside the namespace Neutron enabled forwarding by setting the net.ipv4.ip_forward parameter to 1; we can see that here:
# ip netns exec qrouter-fce64ebe-47f0-4846-b3af-9cf764f1ff11 sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
We can see that this net.ipv4.ip_forward is specific to the namespace and is not impacted by changing this parameter outside the namespace.
Summary
When a router is created, Neutron creates a namespace called qrouter-<router id>. The subnets are connected to the router through interfaces on the OVS br-int bridge. The interfaces are tagged with the correct VLAN so they can connect to their respective networks. In the example above the interface qr-0b7b0b40-f9 is assigned IP 10.10.10.1 and is tagged with VLAN 1, which allows it to be connected to "net1". The routing action itself is enabled by the net.ipv4.ip_forward parameter set to 1 inside the namespace.
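The same mechanism can be reproduced outside OpenStack with plain iproute2 commands. The sketch below (interface names and addresses are illustrative, not taken from the Neutron setup above, and root privileges are required) builds a namespace that forwards between two veth pairs, mirroring what Neutron does for the qrouter namespace:

```shell
# create a "router" namespace (illustrative name; run as root)
ip netns add demo-router

# one veth pair per subnet; one end goes inside the namespace
ip link add qr-net1 type veth peer name br-net1
ip link add qr-net2 type veth peer name br-net2
ip link set qr-net1 netns demo-router
ip link set qr-net2 netns demo-router

# assign the gateway addresses inside the namespace, like Neutron does
ip netns exec demo-router ip addr add 10.10.10.1/24 dev qr-net1
ip netns exec demo-router ip addr add 20.20.20.1/24 dev qr-net2
ip netns exec demo-router ip link set qr-net1 up
ip netns exec demo-router ip link set qr-net2 up

# enable forwarding only inside the namespace
ip netns exec demo-router sysctl -w net.ipv4.ip_forward=1
```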
This post shows how a router is created using just a network namespace. In the next post we will see how floating IPs work using iptables. This becomes a bit more sophisticated but still uses the same basic components.
@RonenKofman
Getting Started with HTTP
HTTP (vapor/http) is a non-blocking, event-driven HTTP library built on SwiftNIO. It makes working with SwiftNIO's HTTP handlers easy and offers higher-level functionality like media types, client upgrading, streaming bodies, and more. Creating an HTTP echo server takes just a few lines of code.
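As a sketch of what such an echo server can look like with this package (the responder type name is our own, and the exact API should be checked against the HTTP module's docs for the version you are using):

```swift
import NIO
import HTTP

// a responder that echoes the request body back to the client
struct EchoResponder: HTTPServerResponder {
    func respond(to request: HTTPRequest, on worker: Worker) -> Future<HTTPResponse> {
        // wrap the request's body in a response and return it immediately
        let response = HTTPResponse(body: request.body)
        return worker.eventLoop.newSucceededFuture(result: response)
    }
}

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer { try? group.syncShutdownGracefully() }

// bind the server, then block until it closes
let server = try HTTPServer.start(
    hostname: "localhost",
    port: 8123,
    responder: EchoResponder(),
    on: group
).wait()
try server.onClose.wait()
```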
Tip
If you use Vapor, most of HTTP's APIs will be wrapped by more convenient methods. Usually the only HTTP type you will interact with is the http property of Request or Response.
Vapor
This package is included with Vapor and exported by default. You will have access to all HTTP APIs when you import Vapor.
import Vapor
Standalone
The HTTP package is lightweight, pure Swift, and only depends on SwiftNIO. This means it can be used as an HTTP framework in any Swift project—even one not using Vapor.
To include it in your package, add the following to your Package.swift file.
// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "Project",
    dependencies: [
        ...
        .package(url: "", from: "3.0.0"),
    ],
    targets: [
        .target(name: "Project", dependencies: ["HTTP", ... ])
    ]
)
Use import HTTP to access the APIs.
The rest of this guide will give you an overview of what is available in the HTTP package. As always, feel free to visit the API docs for more in-depth information.
Getting started with continuous integration in React Native - Part 3: Custom CI setup with Bitrise
Knowledge of React and React Native is required. Your machine should be set up for React Native development.
This is the third and final part of the series on getting started with continuous integration in React Native. In this part, we’re going to use Bitrise for a more customizable CI setup. Specifically, you’re going to learn the following:
- How to set up a React Native project in Bitrise.
- How to run Jest and Detox tests.
- How to configure the build workflow.
Prerequisites
To follow this tutorial, you need to have basic knowledge of React and React Native. The project that we will be working on uses Redux, Redux Saga, and Detox, so experience in using those will be helpful as well.
These are the package versions that we will be using:
- Node 8.3.0
- Yarn 1.7.0
- React Native 0.50
- Detox 8.1.6
- Mocha 4.0.1
For other dependencies, check out the
package.json file of the project.
Reading the first and second part of this series is optional if you already have previous knowledge of how continuous integration is done in React Native.
If you want to have a brief overview of the app that we’re working on, be sure to check out part one of this series.
Initial project setup
To make sure the new project is as clean as possible, we will initialize a new React Native project and push it to a repo separate from the one we used in part two. Go ahead and create a new repo named ReactNativeCI_Bitrise on GitHub.
Next, clone the project repo (the GitHub repo for this series, not the one you just created) and switch to the
part2 branch:
git clone
cd ReactNativeCI
git checkout part2
We’re switching to the
part2 branch so we can get the final output from the second part of this series.
Next, initialize a new React Native project which uses the same version as the project repo. We’re naming it ReactNativeCI instead of ReactNativeCI_Bitrise so we won’t have any naming issues. You can also rename your GitHub repo to ReactNativeCI if you don’t have any further use for the source code we used on part two of this series:
react-native init ReactNativeCI --version react-native@0.50
cd ReactNativeCI
After that, copy the
src folder,
App.js, and
package.json file from the repo you cloned earlier to the project you just created.
Update the
package.json file so it looks like this. Note that this removes all the App Center packages from part two:
{ "name": "ReactNativeCI", "version": "0.0.1", "private": true, "scripts": { "start": "node node_modules/react-native/local-cli/cli.js start", "test": "jest" }, "dependencies": { "react": "16.0.0", "react-native": "0.50", "react-native-vector-icons": "^5.0.0", "react-redux": "^5.0.7", "redux": "^4.0.0", "prop-types": "^15.6.2" }, "devDependencies": { "babel-jest": "23.4.2", "babel-preset-react-native": "4.0.0", "jest": "23.5.0", "mocha": "4.0.1", "react-test-renderer": "16.0.0" }, "jest": { "preset": "react-native" } }
Next, install all the packages, link the native modules, and run the app:
yarn install
react-native link
react-native run-android
react-native run-ios
Only proceed to the next section once you've managed to run the app locally, because if it doesn't work locally then it's not going to work on the CI server either.
Once you got the app running, commit your changes and push it to your repo:
git add .
git commit -m "initialize project"
git remote add origin git@github.com:YOUR_GITHUB_USERNAME/YOUR_REPOS_NAME.git
git push origin master
Adding an app to Bitrise
Create a Bitrise account if you haven’t done so already. Once you’re logged in, you’ll be greeted with the following screen:
Click on the Add first app button to add your app. First, select your GitHub account and the ReactNativeCI repository you forked earlier.
After that, you have to specify the repository access. This is the method used by Bitrise to get access to the repo you forked earlier. Since you’ve already connected your GitHub account to Bitrise, Bitrise is able to add the SSH key used for accessing your repo to your GitHub account. So click on the No, auto-add SSH key button. You will then see it added on your GitHub’s account security page.
Next, it will ask you to enter the name of the branch. Put master in the text field.
At this point, Bitrise will start validating the repository. This is where Bitrise determines what kind of project this is so that it can recommend a specific configuration that you can select. It might be a good idea to grab a drink while it’s validating as it will take a minute or two:
If it’s taking too long, you can click on the Expand Logs link to see what Bitrise is doing behind the scenes.
Once it’s done validating the repository, it should have pre-selected the
android and
gradlew path. It will then let you select a few more settings. Make sure you end up with the following once you’re done selecting the config:
From the above configuration, you can see that Bitrise has configurations for both Android and iOS. Note that this doesn’t mean that we will only have to maintain a single Bitrise app instance.
Just like in App Center, we’ll still be creating two app instances, one for each platform. This is to separate the code integration (and eventually the release and deployment) of changes made to the app.
Due to how young React Native is as a platform, there will be times when unexpected bugs occur only on Android or only on iOS. This delays how quickly new features get tested, integrated, and delivered to users. This separation makes it easy to release on only Android or only iOS, not necessarily both.
The final step is to register a webhook. This allows Bitrise to automatically build the project every time a change is made to the branch you selected earlier. Again, you will see this webhook is registered in your GitHub account’s security page.
Once that’s done, Bitrise will build the app for the first time. We don’t really want to build the app yet because it will fail, so click on the Builds tab and abort the current build. We’ll proceed to manually initiating a build once we’re sure that it will succeed.
Note that when you sign up for a Bitrise account, you’re automatically signed up to the Developer plan. This gives you an unlimited number of builds per month, and each build can take up to 45 minutes. So don’t worry about meeting the maximum builds per month until you come out of their 14-day trial.
Creating the other app instance
Before creating the other app instance for the other platform, first, rename the one you just created to ReactNativeCI-Android. You can do that by going to the Settings tab and updating the Title field. We need to do this so we won’t get confused because Bitrise uses the name of the GitHub repo by default.
Once that’s done, go through the same steps that you just followed to create a new app. Don’t forget to rename the new instance to ReactNativeCI-iOS.
Making changes to the project
Just like in part two, we’ll be making a few changes in this part as well. This time, we will add the functionality for saving the favorited Pokemon to local storage. This way, they will still be marked as a favorite even after the user restarts the app.
The Git workflow we’ll be using is still the same as the one we used on part two. I explained the workflow in part one, so if you haven’t read that, you can do so by going to the CI workflow in React Native section in part one of this series.
Start by creating a
develop branch and creating a new branch off of that:
git checkout -b develop git checkout -b local-storage
We will be using a couple of new dependencies. One for handling local storage, and another for handling asynchronous operations while working with Redux:
yarn add react-native-simple-store redux-saga
Next, update the
src/action/types.js file to include the new action types for handling asynchronous activity:
export const FAVORITED_CARD = "favorited_card"; // add these export const LOCAL_DATA_REQUEST = "local_data_request"; // when fetching the data from local storage export const LOCAL_DATA_SUCCESS = "local_data_success"; // when the data is received export const LOCAL_DATA_FAILURE = "local_data_failure"; // when there's an error receiving the data
Next, add the code that will dispatch the actions throughout the lifecycle of the local storage data request:
// create new file: src/sagas/index.js import { takeLatest, call, put } from "redux-saga/effects"; import store from "react-native-simple-store"; // library for working with local storage // action types import { LOCAL_DATA_REQUEST, LOCAL_DATA_SUCCESS, LOCAL_DATA_FAILURE } from "../actions/types"; // watch for actions dispatched to the store export function* watcherSaga() { yield takeLatest(LOCAL_DATA_REQUEST, workerSaga); } // function for getting the data from local storage function getLocalData() { return store.get("app_state"); // fetch the data from local storage that is stored in the "app_state" key } function* workerSaga() { try { const response = yield call(getLocalData); // trigger the fetching of data from local storage const cards = response.cards; yield put({ type: LOCAL_DATA_SUCCESS, cards }); // dispatch the success action (data has been fetched) } catch (error) { yield put({ type: LOCAL_DATA_FAILURE, error }); // dispatch the fail action (data was not fetched) } }
In the reducer file, make sure that all of the new action types are handled accordingly:
// src/reducers/CardReducer.js import { FAVORITED_CARD, // add these: LOCAL_DATA_REQUEST, LOCAL_DATA_SUCCESS, LOCAL_DATA_FAILURE } from "../actions/types"; import store from "react-native-simple-store"; // add this switch (action.type) { case FAVORITED_CARD: let cards = state.cards.map(item => { return item.id == action.payload ? { ...item, is_favorite: !item.is_favorite } : item; }); // update the local storage with the copy of the new data store.update("app_state", { cards }); return { ...state, cards }; // add these: case LOCAL_DATA_REQUEST: // triggered when requesting data from local storage return { ...state, fetching: true }; case LOCAL_DATA_SUCCESS: // triggered when data is successfully returned from local storage return { ...state, fetching: false, cards: action.cards }; // only triggered the first time the app is opened because there's no data in the local storage yet case LOCAL_DATA_FAILURE: store.update("app_state", INITIAL_STATE); // initialize the local storage return { ...state, fetching: false, cards: INITIAL_STATE.cards // return the initial state instead }; default: return state; }
Next, we need to hook up the watcher saga in the Provider component. This way, it will get triggered when the
LOCAL_DATA_REQUEST action is dispatched:
// src/components/Provider.js import { createStore, applyMiddleware } from "redux"; import createSagaMiddleware from "redux-saga"; const sagaMiddleware = createSagaMiddleware(); import { watcherSaga } from "../sagas"; const store = createStore(reducers, applyMiddleware(sagaMiddleware)); sagaMiddleware.run(watcherSaga);
Lastly, update the
CardList component to make use of the new
fetching state, as well as trigger the action for fetching the data from local storage:
// src/components/CardList.js import { View, FlatList, ActivityIndicator } from "react-native"; import { FAVORITED_CARD, LOCAL_DATA_REQUEST } from "../actions/types"; class CardList extends Component { componentDidMount() { this.props.requestLocalData(); } render() { const { fetching, cards } = this.props; // add activity indicator (show while fetching data from local storage) return ( <View style={styles.container}> <ActivityIndicator size="large" color="#333" animating={fetching} /> <FlatList contentContainerStyle={styles.flatlist} data={cards} renderItem={this.renderCard} numColumns={2} keyExtractor={(item, index) => item.id.toString()} /> </View> ); } } const mapStateToProps = ({ cards, fetching }) => { return { ...cards, ...fetching }; }; const mapDispatchToProps = dispatch => { return { // dispatch action instead of returning the object containing the action data favoritedCard: id => { dispatch({ type: FAVORITED_CARD, payload: id }); }, // add function for dispatching action for initiating local storage data request requestLocalData: () => { dispatch({ type: LOCAL_DATA_REQUEST }); } }; }; export default connect( mapStateToProps, mapDispatchToProps )(CardList);
Once that’s done, update the snapshot (this was added in the starter app so don’t worry about adding it) and commit the changes:
yarn test -u git add . git commit -m "add local storage functionality"
At this point, do some manual testing by marking a few Pokemon as a favorite then relaunch the app. If the ones you selected is still selected when the app is relaunched, it means that the new feature is working.
Once you’ve confirmed that the new feature is working, switch back to the
develop branch and merge the new feature:
git checkout develop git merge local-storage git branch -d local-storage
We’re not going to push the changes yet because we still have to add some end-to-end testing code with Detox.
Adding Detox tests
In this section, we’ll be setting up end-to-end testing for the app using Detox.
Setting up Detox
Start by following the Install Dependencies section on Detox’s Getting Started documentation.
Next, create a new branch off of the
develop branch:
git checkout develop git checkout -b add-detox-test
Setting up Detox on Android
If you’re working on an Android app, you need to upgrade to Gradle 3 first because that’s what Detox is using. You can check the following files as your guide for upgrading to Gradle 3. Each line that has to do with the Gradle 3 upgrade is started with a “Gradle3” comment. You can find the commit here, and these are the files to update:
android/build.gradle
android/gradle/wrapper/gradle-wrapper.properties
If you’re following this tutorial wanting to apply it on your own projects, and you are using packages which uses a lower version of Gradle, you can actually fork the GitHub repo of those packages and update them to use Gradle 3.
Once you’re done updating the files, execute
react-native run-android on your terminal to check if everything is still running correctly. Don’t forget to launch a Genymotion emulator or Android emulator instance before doing so.
Once you’ve verified that the app is still running correctly, you can start installing Detox and Mocha:
yarn add detox@8.1.6 mocha@4.0.1 --dev
Next, you need to link Detox to your Android project. For that, you need to update the following files. All changes that have to do with linking Detox to the project starts with the “Detox” comment. You can find the commit here, and these are the files to update:
android/settings.gradle
android/build.gradle
android/app/build.gradle
android/app/src/androidTest/java/com/reactnativeci/DetoxTest.java- create this.
Setting up Detox on iOS
For iOS, you don’t really need to do any additional configuration. Just make sure that you have the latest version of Xcode installed (or at least one of the more recent ones). This way, you can avoid having to deal with issues that only occurs when running older versions of Xcode.
Adding the tests
Update your
package.json file to include the
detox config. This allows you to specify which specific emulator or simulator to be used by Detox when running the tests as well as the command to execute for building the app on both platforms:
"detox": { "configurations": { "ios.sim.debug": { "binaryPath": "ios/build/Build/Products/Debug-iphonesimulator/reactnativeci.app", "build": "xcodebuild -project ios/reactnativeci.xcodeproj -scheme reactnativeci -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build", "type": "ios.simulator", "name": "iPhone 5s" }, "android.emu.debug": { "binaryPath": "./android/app/build/outputs/apk/debug/app-debug.apk", "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..", "type": "android.attached", "name": "192.168.57.101:5555" } }, "test-runner": "mocha", "specs": "e2e", "runner-config": "e2e/mocha.opts" }
The only things you need to change in the configuration above is the
type and
name under the
ios.sim.debug and
android.emu.debug.
If you’re using Genymotion like I am, you can keep the
android.emu.debug config in there. Just be sure to replace
192.168.57.101:5555 with the actual IP address that’s listed when you execute
adb devices while the Genymotion emulator is open.
If you’re using an Android emulator installed via Android Studio, go to the folder where Android SDK is installed. Once inside, go to the
sdk/tools/bin directory and execute
./avdmanager list avd. This will list all of the available Android emulators. Simply copy the displayed name and use it as the value for the
name under
android.emu.debug:
If you’re using the iOS simulator, execute
xcrun simctl list to list all of the installed iOS simulators on your machine. The value on the left side (for example: iPhone 5s) is the one you put as the value for the
name:
Next, initialize the test code:
detox init -r mocha
This will create an
e2e folder in your project’s root directory. This folder contains the config and test files for running the tests.
Next, remove the contents of your
e2e/firstTest.spec.js file and add the following. This will test if all the functionality of the app is working:
describe("App is functional", () => { beforeEach(async () => { await device.reloadReactNative(); // reload the app before running each of the tests }); it("should show loader", async () => { await expect(element(by.id("loader"))).toExist(); // we're using toExist() instead of isVisible() because the ActivityIndicator component becomes invisible when a testID prop is passed in }); it("should load cards", async () => { // assumes that if one card exists, then all the other cards also exists await expect(element(by.id("card-Blaziken"))).toExist(); }); it("card changes state when it is clicked", async () => { await element(by.id("card-Entei")).tap(); // not favorited by default await expect(element(by.id("card-Entei-heart"))).toExist(); // should be marked as favorite await element(by.id("card-Entei")).tap(); // clicking for a second time un-favorites it await expect(element(by.id("card-Entei-heart-o"))).toExist(); // should not be marked as favorite }); it("card state is kept in local storage", async () => { await element(by.id("card-Entei")).tap(); // not favorited by default await device.reloadReactNative(); // has the same effect of re-launching the app await expect(element(by.id("card-Entei-heart"))).toExist(); // should still be favorited after app is reloaded }); });
Since we don’t want Jest to be matching our newly created Detox tests, limit it to only look for tests inside the
__tests__ directory:
// package.json "jest": { // current config here... "testMatch": ["<rootDir>/__tests__/*"] },
Once that’s done, we need to hook up the
testID to each of the components that the tests above are targeting. First, add it to the
ActivityIndicator:
// src/components/CardList.js class CardList extends Component { ... render() { const { fetching, cards } = this.props; return ( <View style={styles.container}> <ActivityIndicator size="large" color="#333" animating={fetching} ... </View> ); } }
For the
Card component, we’re using the
testID supplied in the
Icon component to check whether the card is favorited or not. We’re simply appending the name of the Pokemon (
text) and the
icon used to determine this:
// src/components/Card.js const Card = ({ image, text, is_favorite, action }) => { const icon = is_favorite ? "heart" : "heart-o"; return ( <TouchableOpacity onPress={action} testID={"card-" + text}> <View style={styles.card}> ... <Icon name={icon} size={30} color={"#333"} testID={"card-" + text + "-" + icon} /> </View> </TouchableOpacity> ); }
Don’t forget to update the Jest snapshot as well:
yarn test -u
Commit the changes once you’re done:
git add . git commit -m "add detox tests"
Run the tests locally
The final step before we get to play around with Bitrise is to run the tests. First, run the Jest snapshot test. This should succeed since we’re always updating the snapshots with
yarn test -u whenever we make changes to the components:
yarn test
As for Detox, start by running whichever platform you’re testing on:
react-native run-android react-native run-ios
Next, run the tests. Confirm that the metro builder is running (
react-native start) and be sure to pass the
--reuse flag so that it will reuse the already installed app:
detox test -c ios.sim.debug --reuse detox test -c android.emu.debug --reuse
Note that you can also try building the app with Detox and then test it directly:
detox build -c ios.sim.debug detox build -c android.emu.debug detox test -c ios.sim.debug detox test -c android.emu.debug
The above method works for iOS, but I never got it to work on Genymotion. So it’s better to opt for the
--reuse option.
Once you’ve confirmed that all the tests pass and merge your changes to the
develop branch:
git checkout develop git merge add-detox-test git branch -d add-detox-test
Configure the build workflow
Now we’re ready to configure Bitrise to build the project and run the same tests that we’ve set up for the app.
Configure the build workflow for iOS
First, go to your app dashboard and select ReactNativeCI-iOS then go to the Settings tab. From there, update the Default branch to
develop and save the changes.
Next, go to the Workflows tab and select Stack. Select Xcode 9.4.x… as the default stack. This should automatically select this stack as the value for Workflow Specific Tasks as well. But if not, be sure to pick the same stack and save the changes:
The Stack is the type of machine where each of your workflows will be executed. In this case, we’re selecting Xcode 9.4 because it’s the latest stable version that’s currently available for iOS development. More importantly, it’s the same version of Xcode that I have on my local machine.
To ensure that your builds will be as smooth flowing as possible, always select a similar stack to your local machine. If that’s not possible, then select the one that’s only a version lower or higher than what you have.
Next, go back to the Workflows tab so we can configure each individual step for building the app. Delete everything else except for these steps and save the changes:
- Activate SSH key (RSA private key)
- Git Clone Repository
- Run npm command - rename this to “Install Packages”
After the Git Clone Repository step, create a new one called “Install detox dependencies”.
A modal window will pop-up asking you to select the step you want to add. Make sure that the ALL tab is selected, search for “script”, and click on the one which says “Script”:
As you can see, Bitrise has a bunch of pre-written steps. All you have to do is look for them and add it to your own workflow. But for things that don’t have a pre-written script, there are also steps that allow you to add them. One of those is the Script step which allows you to supply your own script.
Add the following script under the Script content field and save the changes:
#!/usr/bin/env bash # fail if any commands fails set -e # debug log set -x echo "Installing Detox dependencies..." npm install -g detox-cli brew tap wix/brew brew install applesimutils --HEAD
From the script above, you can see that these are the same commands you can find on Detox’s Getting Started guide to install Detox, so be sure to update these with the ones you find on that page in case it changes in the future.
If you scroll down a little bit, you will see the configuration for this script. Most of the time, you don’t really need to make any change to this one because Bitrise’s default config is already okay:
From the config above, the Working directory is
$BITRISE_SOURCE_DIR. By default, this points out to the root directory of your React Native project.
If you see something that starts with the dollar sign, it means that it’s an environment variable. In Bitrise, these can be set under the Env Vars tab. If you examine the values closely, you’ll see that it’s the same ones from when you have created this new app instance. This is where you can change them in case you messed up the selection earlier. If you notice any hard-coded values that you’re repeating over and over in each of your build steps, this is a good place to put them:
Note that you can’t find
$BITRISE_SOURCE_DIR anywhere in the Env Vars tab. This is because it’s one that’s set by Bitrise by default so it always points out to the same thing.
Right after the Install packages step, add a new script step called “Jest Snapshot test”. Put the following and save it:
#!/usr/bin/env bash # fail if any commands fails set -e # debug log set -x # write your script here echo "Running snapshot tests..." yarn test
After the Jest Snapshot test step, add a new script step called “Build iOS app with Detox”:
#!/usr/bin/env bash set -e set -x echo "Building iOS app..." detox build -c ios.sim.debug
Lastly, add the script for running the end-to-end tests with Detox. Call the script “Test iOS app with Detox”:
#!/usr/bin/env bash set -e set -x echo "Testing iOS app..." detox test -c ios.sim.debug
Once that’s added, your workflow should now look something like this:
- Activate SSH key (RSA private key)
- Git Clone Repository
- Install Detox dependencies
- Install packages
- Jest Snapshot test
- Build iOS app with Detox
- Test iOS app with Detox
It’s a good practice to make each individual step only do one thing even though you can bring all the commands into a single script. Aside from keeping things lightweight and allowing you to easily debug your scripts, this also allows you to easily rearrange your steps (via drag and drop) and delete the ones you don’t need.
Configure build workflow for Android
If you’ve skipped to this section because you only want to build for Android, you should scan through the section above on configuring the build workflow for iOS because this section assumes you already know to configure the build workflow on Bitrise.
If you haven’t done so already, go to the settings tab of the ReactNativeCI-Android app and set its default branch to
develop.
Next, click on the Workflow tab and click on the Stack tab. This time, select Android & Docker, on Ubuntu 16.04 - LTS Stack as the default stack. This should give you the best environment for building an Android app with React Native. Don’t forget to save the changes once you’re done.
To make the configuration of the build workflow faster, instead of using the workflow editor, we’ll be using the
bitrise.yml file to configure the build. Copy the contents of the file from the GitHub repo then copy it to the editor in the bitrise.yml tab. Save the changes once you’re done:
Once the changes are saved, you can switch back to the Workflows tab to see the visual representation of the build workflow:
When you’re using the workflow editor, Bitrise actually updates the
bitrise.yml to match what you have on your workflow. This makes it really easy for developers to transfer a workflow that they have on an older app over to a newer app.
If you scroll all the way down on your workflow steps, you can see that we’re not running any end-to-end testing with Detox. This is because I couldn’t get the Detox tests to run on Android. The build is working, but running the app isn’t. Booting up an Android emulator takes a really long time so it defeats the purpose of building the app on a CI server because the build takes a long time to complete
Run the build on Bitrise
Now that you’ve fully configured your build workflow, you can now push all your changes to the repo. This will trigger a build on both the Android and iOS version of the app:
git push origin --all
Note that you can actually have different workflows for different build processes. In this tutorial, we’ve only configured the “primary” workflow which is the default build process that what we want to do everytime some changes is pushed into the repo. But you can also have a “deploy” workflow or a “testing” workflow, and the steps for that can be different from the one you have in your primary workflow.
Once the build is done, here’s what it will look like for the Android app:
And here’s what it will look like for iOS:
Run the build with Bitrise CLI
Another good thing about Bitrise is that you can run your builds using the Bitrise CLI. This is Bitrise’s open-source task runner for running your builds locally. You can follow the instructions on that page to setup Bitrise CLI.
Once you’ve setup Bitrise CLI, you can simply download your project’s
bitrise.yml file and copy it over to your React Native project’s root directory.
To run the build, use the
bitrise run command and append the name of the workflow you want to run:
bitrise run primary
If you find that the Bitrise CLI doesn’t meet your requirements, or you get errors that you don’t get while running the build on Bitrise, you can also make use the Bitrise Docker image. This allows you to run your builds locally using the same environment as the one used by Bitrise’s virtual machines.
Conclusion
That’s it! In this tutorial, you learned how to use Bitrise for a solid mobile continuous integration setup. Specifically, you learned how to set up a custom build workflow that runs Jest snapshot tests, Detox end-to-end test, and then build the app.
That also wraps up this series so I hope you’ve gained the necessary skills in setting up continuous integration for your React Native app.
You can find the code used in this series on its GitHub repo. The
master branch contains the final output for this entire series.
September 25, 2018
by Wern Ancheta | https://pusher.com/tutorials/continuous-integration-react-native-part-3/ | CC-MAIN-2022-21 | refinedweb | 5,040 | 63.39 |
leppie
Without giving a Computer Science 101 lecture, I'll provide you with 2 pictures that says a thousand words. In short, DFA stands for Deterministic Finite Automation (or Automata). It's cousin is NFA, non-deterministic FA. An example of a NFA is looping through a big list looking for a word. The word might be found quickley, or it might only find it in the last entry. This is said to be non-deterministic. So we can then say for a DFA we will know exactly how long a search for a word (as per the NFA example) will take.
The "raw" data (in no particular order) ready for NFA processing (you know the lazy way).
TABLE A
leppie can swim
leppie can talk
leppie looks good
leppie looks left
leppie goes home
john looks fine
john can walk
john wants nothing
leppie wants food
leppie can code
leppie can code C
leppie can code C#
leppie can code C# well
leppie can code C++ poorly
leppie lives in stellenbosch
leppie lives with a flatmate
leppie drinks plenty coke
leppie drinks coffee
leppie eats many hotdogs
leppie eats food
leppie knows alot
leppie knows alot about code
leppie knows a little about girls
john can swim
john can talk too
TABLE B
stop
stopper
stood
step
standard
stank
stance
authors
automatic
back
back's
backed
background
backing
backing's
backs
backwards
bad
badly
balance
balance's
ball
ball's
chain
chain's
chair
chair's
chairman
chairman's
chance
chance's
chances
change
changed
changes
changing
channel
channel's
channels
chaos
chaos's
chapter
chapter's
char
charge
charged
charges
fall
fallen
falling
falls
false
familiar
family
family's
famous
fan
fan's
fancy
far
farm
DFA representation of Table A. A circle with an outer circle denotes an endstate.
DFA representation of Table B. A circle with an outer circle denotes an endstate.
You might ask what that System.Object is doing there, well nothing. UPDATE: I have now removed the root object from the generated graph. As you can see, it is not an endstate, so it will not affect anything. But it does serve a purpose. It acts as the very first node that is created regardless of the key type. I can remove it, but then I would have to check for null everytime and that would be too costly. So dont worry about it!
public class AtomicState
{
protected AtomicState();
protected AtomicState(object key);
protected bool Accepts(Array stack);
public Array AcceptStates(Array stack);
protected bool Add(Array stack);
public AtomicState GetAtomicStateAt(Array stack);
protected Array Match(Array stack);
protected bool Remove(Array stack);
internal protected virtual void RenderStateNodeAttributes attr);
internal protected virtual void RenderTransistion(EdgeAttributes attr);
public override string ToString();
public int AcceptStateCount { get; }
public int TotalStateCount { get; }
protected object key;
}
This base implementation for the DFA state machine. I have provided several implementations for typesafe Int32, String, Char, Boolean, Float, Object and Type. Feel free to add your own and see comments I have meda in the code. Arrays of Arrays have terrible casting issues. I have also included a Combination class that I did many moons back, and never had a place to put. Combinations are very good to build DFA's. Also included is a utiliy class to generate all these pretty pictures (yes, this is aimed directly at you, Marc Clifton ;P) you can see here (you will need GraphViz for this). UPDATE: The GraphViz utilities has been greatly improved, and from above you can see you can now specify rendering options on a State level. The implementation is similar to ASP.NET's custom control Rendering. In other words, if you need to change something, just override RenderXXX. See TypeState's implementation for a good example.
The class consists mostly of non-human comprehensable code (almost every function is either running in a while(true) loop or recusively). This is most mostly a generic port of a C project that was done for Computer Science. From the 2 count functions's code, you will see the "multi dimensional linked list" is rather easily "walkable" with recursion (I had some stack issues, but havent been able to replicate them).
So you may ask yourself why or how do I use this? The way the machine has been setup, is to take a "path" at a time, so adding a sentence or a word, get automatically laid out in the machine. No other intervention is required. One of the most difficult things to do is changing a NFA to a DFA, this is all done for you. Then all you need to do is query it. Some functions like Match() can take longer with "wildcards" (set these up as null, see the implementations).
You might ask me now why I dont just use an ArrayList or a HashTable? Simple. An ArrayList is good for batches of data, but poor at searching. A HashTable is excellent for lookups, but fails to be useful for pattern matching or "path" finding. In fact you should be using either of the afore mentioned in most cases. Examples of what this is useful for:
As with any DFA performance is key. This implementation is comparable to a Hashtable. In fact, if boxing was removed, lookups would be even nearer (2x instead of 3x in my experiments). Here is a simple console output from a 500 000 wordlist loaded in a char[]:
Extracting all entriesTime: 5752.794559msCharState Testing 10000 random lookups from 502069 entriesTime: 133.561947msAvg time: 0.013356msHashTable Testing 10000 random lookups from 502069 entriesTime: 37.712056msAvg time: 0.003771ms
As one can see, lookups are 3 times slower than a Hashtable, but remember that a HashTable is incredibly fast, but does not have the same characteristics. Note the slowness of extracting all the entries. Not bad if you think that the DFA is converted to raw data again.
Thanks to the person who wrote the HP timer class I used for the timing (sorry, cant remember your name now).
Say for example you wanted a quick inheritance graph of an assembly. You would simple do this:
TypeState typeroot = new TypeState();
Type[] types = Assembly.GetAssembly(typeof(AtomicState)).GetTypes();
foreach (Type type in types)
{
ArrayList arr = new ArrayList();
Type etype = type;
do
{
arr.Insert(0, etype);
etype = etype.BaseType;
}
while (etype != null);
typeroot.Add( (Type[]) arr.ToArray(typeof(Type)));
}
TextWriter writer = File.OpenText("file.dot");
GraphViz.Generate(typeroot, writer);
Now we have a string we can pass to Dot (the GraphViz executable) and we end up with:
Pretty isn't it? But hey, that isnt even the purpose! Luckily GraphViz fits like a glove and its making many more things possible. If any one wants to send some network trace logs, I'll put up some renderings. UPDATE: Here they are! I have cropped the graph somewhat.
A question might arise why things like "leppie can eat" and "john can eat" does flow back into each other. This is infact possible, but a problem arises when you add "leppie can eat meat". Does this mean john can eat meat too? It depends how accurate you want to be. In my case, I have opted for accuracy. Adding this facility would only increase load time and to make it 100% accurate will require an NFA pass or a major change to the algorhythm. Suggestions are welcome.
I hope you enjoy using this, and feel free to comment as usual. The test project, basically contains just some tests I have run, and will probably cause an exception on your system. Look at it however to see how its meant to be used.
Now, so all of you will come back and back again, I'm listing some plans I have for the future (viable suggestions will be added too):>);
Console.WriteLine("Enter the assembly name:"); <br />
string assemblyName = Console.ReadLine(); <br />
DoCustomTypeDemo(assemblyName);
static void DoCustomTypeDemo(string assemblyName)<br />
{<br />
<br />
assemblyName.Trim();<br />
<br />
if (assemblyName != "")<br />
{<br />
Assembly customAssembly = null;<br />
<br />
if (!System.IO.File.Exists(assemblyName))<br />
{<br />
Console.WriteLine("\n {0} : was not found. Please check again. \n", assemblyName); <br />
return;<br />
}<br />
customAssembly = Assembly.LoadFrom(assemblyName);<br />
<br />
TypeState root = new TypeState();<br />
Type[] types = customAssembly.GetExportedTypes();<br />
<br />
foreach (Type type in types)<br />
{<br />
ArrayList arr = new ArrayList();<br />
Type etype = type;<br />
do <br />
{<br />
arr.Insert(0, etype);<br />
etype = etype.BaseType;<br />
}<br />
while (etype != null);<br />
<br />
root.Add( (Type[]) arr.ToArray(typeof(Type)));<br />
}<br />
<br />
string filename = "customtypes";<br />
TextWriter writer = File.CreateText(filename + ".dot");<br />
GraphViz.Generate(root, writer);<br />
writer.Close();<br />
RunDot(filename, "jpg");<br />
}<br />
else<br />
{<br />
Console.WriteLine("\n No assembly name was entered. \n"); <br />
}<br />
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/4121/Generic-DFA-State-Machine-for-NET?fid=15466&df=90&mpp=25&prof=True&sort=Position&view=Normal&spc=Relaxed&select=686569&fr=26 | CC-MAIN-2019-30 | refinedweb | 1,489 | 66.03 |
Hello,
I am only starting to design a C program that opens a file:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp;
    fp = fopen("file.txt", "r");
    if (fp == NULL)
    {
        perror("Can't open the file");
        return EXIT_FAILURE;
    }
    printf("%d\n", fp);
    system("pause");
    return EXIT_SUCCESS;
}
The file is a text file that says "the quick brown fox jumps over the lazy dog", but when I ran the program, it showed some random number rather than what I expected. How do you run the program to open the file within the terminal?
My goal is to replace a character, and the option must be supplied via command line arguments.
For example, the original text file shows
"the quick brown fox jumps over the lazy dog"
and if the program runs, using the following command line:
"assignment1 abc DEF input"
to replace from a to D, b > E, c > F
then the file should have the following contents:
"the quiFk Erown fox jumps over the lDzy dog"
It would be great if you could give me some example code for those functions.
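No answer follows in this excerpt, so here is a sketch of the requested replacement logic. It's written in Python rather than the asker's C, purely to show the character mapping compactly; the command-line shape and the abc to DEF example come from the question above. (Incidentally, the "random number" from the first program is just the FILE pointer value printed with %d.)

```python
import sys

def replace_chars(text, src, dst):
    # Map each character in src to the character at the same position in dst.
    return text.translate(str.maketrans(src, dst))

def main(argv):
    # usage: assignment1 <from-chars> <to-chars> <file>
    src, dst, filename = argv[1], argv[2], argv[3]
    with open(filename) as f:
        sys.stdout.write(replace_chars(f.read(), src, dst))

if __name__ == "__main__" and len(sys.argv) == 4:
    main(sys.argv)
```

Running it as `assignment1 abc DEF input` would turn "the quick brown fox jumps over the lazy dog" into "the quiFk Erown fox jumps over the lDzy dog", matching the expected output above.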
/*
* kernel/workqueue_internal.h
*
* Workqueue internal header file. Only to be included by workqueue and
* core kernel subsystems.
*/
#ifndef _KERNEL_WORKQUEUE_INTERNAL_H
#define _KERNEL_WORKQUEUE_INTERNAL_H
#include <linux/workqueue.h>
#include <linux/kthread.h>
struct worker_pool;
/*
* The poor guys doing the actual heavy lifting. All on-duty workers are
* either serving the manager role, on idle list or on busy hash. For
* details on the locking annotation (L, I, X...), refer to workqueue.c.
*
* Only to be used in workqueue and async.
*/
struct worker {
/* on idle list while idle, on busy hash table while busy */
union {
struct list_head entry; /* L: while idle */
struct hlist_node hentry; /* L: while busy */
};
struct work_struct *current_work; /* L: work being processed */
work_func_t current_func; /* L: current_work's fn */
struct pool_workqueue *current_pwq; /* L: current_work's pwq */
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
struct worker_pool *pool; /* I: the associated pool */
/* L: for rescuers */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
int id; /* I: worker id */
/* for rebinding worker to CPU */
struct work_struct rebind_work; /* L: for busy worker */
/* used only by rescuers to point to the target workqueue */
struct workqueue_struct *rescue_wq; /* I: the workqueue to rescue */
};
/**
* current_wq_worker - return struct worker if %current is a workqueue worker
*/
static inline struct worker *current_wq_worker(void)
{
if (current->flags & PF_WQ_WORKER)
return kthread_data(current);
return NULL;
}
/*
* Scheduler hooks for concurrency managed workqueue. Only to be used from
* sched.c and workqueue.c.
*/
void wq_worker_waking_up(struct task_struct *task, unsigned int cpu);
struct task_struct *wq_worker_sleeping(struct task_struct *task,
unsigned int cpu);
#endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
fault too, since I listen to completely different music styles depending on mood, so what happens is that the recommendation engine does not differentiate between moods and I end up having to do some manual work to reach the desired state of mind at the given time.
Yet, moods set aside (a product feature I'd highly recommend the Spotify team to consider/test), I always wondered how Spotify managed to figure out these titles even when there are no ratings in its system except the "save as favorite" button, which sends a classification signal rather than a magnitude one… Until I recently realized that they use a combination of different architectures for their recommendation engine:
1. Memory-based collaborative filtering recommender: focuses on the relationship between users and the items in question (ideal when the data contains ratings for the various items offered). Matrix factorization is a powerful mathematical tool here to discover the latent interactions between users and items. Let's say, for example, persons A and B listen to song X, and person B often listens to song Y; then A is very likely to like song Y as well.
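That A/B/X/Y intuition can be sketched in a few lines of numpy. The listen counts below are made up for illustration; real systems work on huge sparse matrices, but the transfer of taste through a shared item is the same idea:

```python
import numpy as np

# Toy listen-count matrix: rows are users A and B, columns are songs X and Y.
ratings = np.array([
    [5.0, 0.0],   # A plays X a lot, has never played Y
    [4.0, 5.0],   # B plays X and, often, Y
])

def cosine(u, v):
    # Cosine similarity between two preference vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine(ratings[0], ratings[1])      # A and B agree on X
predicted_Y_for_A = sim * ratings[1, 1]   # so B's taste for Y transfers to A
```

Because A and B overlap strongly on X, the similarity is high, and Y gets a high predicted score for A.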
2. Content-based recommender: focuses on features of the items themselves. So instead of analyzing the users'/customers' interactions with the items, the analysis is mostly made at the level of the items, examining and measuring the similarity of the items' characteristics. To stay in the music context, let's say you listen very often to songs X and Y, and both happen to be from an Italian musician who uses distinct piano tunes, and both pertain to a music genre and era specified in the song tag.
This recommender method will use different machine learning techniques (e.g. Natural Language Processing, Audio Modeling etc.) to determine a song Z that has similar properties.
I wanted to get my hands dirty with the first type of Recommender as it seemed simpler than the second. Most importantly, I wanted to understand and provide a simple intuition for the math behind the algorithm, and perhaps also to lay a foundation for how recommendation systems work in practice before moving to more complicated models.
The 10k Books Dataset
In this tutorial I picked the Goodbooks-10k dataset I found on Kaggle to get started. I had always feared being disappointed by a book after finishing a fascinating one, so I thought this would solve a personal struggle, and could be in general just a fun thing to run through friends who ask me for advice on what to read next.
The zip file contains multiple datasets (book_tags, books, ratings, tags). We’ll use only the books and ratings datasets which contain columns that are relevant to our analysis.
First things first, let’s import the necessary libraries.
import pandas as pd
import numpy as np
import sklearn
from sklearn.decomposition import TruncatedSVD
import warnings
Let’s upload the datasets. The “books” dataset contains 23 columns. We’ll slice the data and drop variables to only keep columns of interest. We’ll keep the ratings dataset as is.
Next, we’ll merge both datasets on “book_id”. Book_id is more reliable than original_title since some titles can have some variations in their formatting. Before we advance to creating a matrix, we’ll drop duplicates in pairwise combinations of user_id and book_id, as well as user_id and original_title.
books = pd.read_csv('books.csv', sep=',')
books = books.iloc[:, :16]
books = books.drop(columns=['title', 'best_book_id', 'work_id', 'books_count', 'isbn', 'isbn13', 'original_publication_year','language_code','work_ratings_count','work_text_reviews_count'])
books.head(5)
ratings = pd.read_csv('ratings.csv', sep=',')
ratings.head(5)
df = pd.merge(ratings, books, on="book_id")
df.head(5)
df1= df.drop_duplicates(['user_id','original_title'])
df1= df.drop_duplicates(['user_id','book_id'])
df1.head(10) #went down from 79701 to 79531
df1.shape #(79531, 8)
The Matrix Factorization Method & SVD — Intuition
We’re now about to create a matrix model using the Matrix Factorization method with the Single Value Decomposition model (SVD). You can find a number of excellent technical resources online to describe the models in deeper ways, but I’m going to break it down to you in simple terms here.
What we’ll do next is call the pivot function to create a pivot table where users take on different rows, books different columns, and respective ratings values within that table with a shape (m*n).
######################################
####MATRIX FACTORIZATION
######################################
books_matrix = df1.pivot_table(index = 'user_id', columns = 'original_title', values = 'rating').fillna(0)
books_matrix.shape #(28554, 794)
books_matrix.head()
If you look at the graphical representation below, you’ll have an idea what happened behind the scenes. Firstly, we created an A matrix with shape (m*d)=(books*user_id) and B matrix with shape (d*n)=(ratings*user_id).
The result is a product (matrix factorization) between the two matrices that mathematically computes values as follows:
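In plain terms, each entry of the product is the dot product of a user row of A with an item column of B, i.e. (AB)ui = sum over d of A[u, d] * B[d, i]. A toy numpy sketch (the sizes and the generic user/item labels here are my own, chosen just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 4, 2, 3          # toy sizes: users, latent factors, items
A = rng.random((m, d))     # user-factor matrix
B = rng.random((d, n))     # factor-item matrix
R = A @ B                  # predicted rating matrix, shape (m, n)

# Entry (u, i) is the dot product of user u's factor row with item i's column.
u, i = 1, 2
```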
We’ll need to create a set of training data — What our training data consists of is essentially smaller matrices that are factors of the ratings we want to predict. For this we’ll set another matrix X which is the transposition of the resulting matrix created above (“books_matrix”), also called invertible matrix.
X = books_matrix.values.T
X.shape
#Fitting the Model
SVD = TruncatedSVD(n_components=12, random_state=0)
matrix = SVD.fit_transform(X)
matrix.shape #(812, 12)
You’ll notice here that our newly created matrix is extremely sparse. Depending on which random state you specified, the number of columns are in the 5 digits, which means a 5 digit dimensional space. That’s where the SVD method intervenes.
Just like we would use a PCA/Kernel PCA feature extraction method on other datasets, SVD is another method we apply to matrices in recommendation applications. SVD can boil our dimensions down to a smaller number that still describes the variance in the data. What happens here is that SVD will look for latent features and extract them from the data, going down from, say, 10,000 features to only 10, which saves us huge amounts of computational power and helps avoid overfitting the data. For this exercise, I set the number of components of our SVD to 12. What's left now in order to apply the model is fitting and transforming the training data X.
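If you want to see how much variance those components capture, TruncatedSVD exposes explained_variance_ratio_. A quick check on a stand-in matrix (random count data here, since the real X comes from the ratings above; the shape is arbitrary):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
# Stand-in for X: a sparse-ish (books x users) count matrix.
X_demo = rng.poisson(0.3, size=(200, 1000)).astype(float)

svd = TruncatedSVD(n_components=12, random_state=0)
reduced = svd.fit_transform(X_demo)        # (200, 12): items in latent space
captured = svd.explained_variance_ratio_.sum()
```

On real, structured ratings data the captured fraction would be much higher than on this random stand-in, which is part of why a handful of components suffices.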
Note: In general, after applying SVD, a common practice is to introduce a regularization term (the term on the right below) to avoid overfitting to the data:
The minimization term on the left is a minimization of errors on the ratings we have no information about (e.g. unknown current or future ratings). We can derive those values mathematically from an objective function like the one above, and in practice with methods such as Stochastic Gradient Descent (SGD).
For the purpose of simplicity in this tutorial, I have disregarded introducing both terms in practice, and just filled the missing information with a null value (.fillna(0)).
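For completeness, here is roughly what the disregarded SGD route could look like: a minimal sketch in which only the observed (non-zero) entries drive the updates and reg is the L2 penalty from the note above. This is my own toy implementation, not what the tutorial's TruncatedSVD does internally:

```python
import numpy as np

def sgd_factorize(R, k=2, steps=200, lr=0.01, reg=0.1, seed=0):
    """Factor R ~= P @ Q.T over the observed (non-zero) entries only,
    with an L2 penalty on the factors playing the regularization role."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.normal(scale=0.1, size=(m, k))   # user factors
    Q = rng.normal(scale=0.1, size=(n, k))   # item factors
    users, items = np.nonzero(R)             # observed ratings only
    for _ in range(steps):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]      # prediction error on this entry
            p_old = P[u].copy()              # use pre-update P[u] for Q's step
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_old - reg * Q[i])
    return P, Q
```

After enough passes, P @ Q.T approximates the observed ratings while the penalty keeps the factors small.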
Creating a Correlation Coefficient
Next we create a correlation coefficient function for all elements in the matrix with the numpy function np.corrcoef. We'll call this "corr".
Once we apply "corr" to a book we really like, the function will compute the correlation coefficients with the remaining books and will return all the books that we are most likely to like as well.
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)  # avoid RuntimeWarning, the base class for warnings about dubious runtime behavior
corr = np.corrcoef(matrix)
corr.shape
Correlation coefficients range from -1 to 1, with 0 meaning no linear correlation exists between the two items and 1 meaning perfect positive correlation. In our example, the closer we get to 1, the more likely the other books suggested have features that are highly correlated with the book you entered as input, and therefore the more likely you are to like those books as a result.
Checking the Results
Let’s now check the results. I’ll create a vector called “titles” and list the items. I’ll pick a book I like as an index. “Memoirs of a Geisha” is one of my favorite fiction books, so let’s go with this.
title = books_matrix.columns
title_list = list(title)
samia = title_list.index('Memoirs of a Geisha')
corr_samia = corr[samia]
list(title[(corr_samia >= 0.9)])
After I run the entire code, here is a list of books the algorithm suggested:
This looks good! I've already read a good number of the above and can testify that some of them were on my top list at some point (e.g. Persepolis, Into the Wild, The Great Gatsby, One Hundred Years of Solitude, The Heart of the Matter, etc.). I'm still curious, though, which latent features the algorithm used to pick "Think & Grow Rich" here as well, as I would have classified it in another category (nonfiction + other considerations), but then again this unveils a limitation of this algorithm, which might be related to the weights of the independent variables we fed it.
Evaluating Results
I evaluated this model just by having a look at the list of books the algorithm spit out, and since I had already read (and really liked) some of the titles recommended, and ran the model through other people for fun to see if they would mostly agree, I thought I'd call it a day. Of course, this is not the right way to do it if you have to provide sharp performance metrics.
For more accuracy, there are many ways to evaluate a recommender system, and the method will differ across types of recommenders (for example, content-based vs. collaborative filtering). One way to do so is to apply cross-validation: divide your users into k folds, loop over them taking (k-1) folds as the training set and testing on the remaining fold, then average the results across all folds. Perhaps this will be the topic of a later article:)
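As a sketch of that protocol in pure numpy: split the rows into k folds, learn an item-factor basis from the training rows via SVD, and score the held-out rows by reconstruction RMSE. The split-by-rows choice and the metric are my assumptions, one of several reasonable designs:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    # Shuffle row indices and split them into k roughly equal folds.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def cv_score(X, k=5, n_components=12):
    """Per fold: fit a rank-n_components basis on the training rows,
    then measure how well the held-out rows are reconstructed (RMSE)."""
    errs = []
    for fold in kfold_indices(X.shape[0], k):
        mask = np.ones(X.shape[0], dtype=bool)
        mask[fold] = False
        train, test = X[mask], X[fold]
        _, _, Vt = np.linalg.svd(train, full_matrices=False)
        V = Vt[:n_components].T          # top-n_components row-space basis
        recon = test @ V @ V.T           # project held-out rows, reconstruct
        errs.append(np.sqrt(np.mean((test - recon) ** 2)))
    return float(np.mean(errs))
```

A low score means the latent space learned from the training users generalizes to the held-out users.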
Disclaimer & Departing thoughts:)
Finally, this was my first hands-on data science/machine learning post, so I hope this was a useful tutorial with intuitive explanations of the mathematics behind the models.
Have fun building your own recommendation engine and don’t forget to subscribe to my channel or ask questions below for clarification! 🙂
I can understand having run mode for the the sling:OsgiConfig. But what is Factory Configuration? How to use it?
Some services, such as the Logging Service, are not singletons but create sub-services that can have different configurations. As with the Logging Service, you can define how you want the logger to react and log, and for that you configure the factory that creates the actual Logger services. This means that you can have multiple configurations that are identified by an id.
Thanks. Any detailed example? I can't find one.
Go to and click on the "plus" button of the "Apache Sling Logging Logger Configuration".
The magic required to implement a service factory on your own:
@Service
@Component(label = "foobar implementation", metatype = true, configurationFactory = true, policy = ConfigurationPolicy.REQUIRE)
public class foo implements bar {
...
}
See the maven-scr plugin documentation for some more details.
Jörg
Please see for example.
Yogesh
I want to create command line aliases in one of my python scripts. I've tried os.system(), subprocess.call() (with and without shell=True), and subprocess.Popen() but I had no luck with any of these methods. To give you an idea of what I want to do:
On the command line I can create this alias:
alias hello="echo 'hello world'"
I want to be able to run a python script that creates this alias for me instead. Any tips?
I'd also be interested in then being able to use this alias within the python script, like using subprocess.call(alias), but that is not as important to me as creating the alias is.
You can do this, but you have to be careful to get the alias wording correct. I'm assuming you're on a Unix-like system and are using ~/.bashrc, but similar code will be possible with other shells.
import os

alias = 'alias hello="echo hello world"\n'
homefolder = os.path.expanduser('~')
bashrc = os.path.abspath('%s/.bashrc' % homefolder)

with open(bashrc, 'r') as f:
    lines = f.readlines()

if alias not in lines:
    out = open(bashrc, 'a')
    out.write(alias)
    out.close()
if you then want the alias to be immediately available, you will likely have to
source ~/.bashrc afterwards, however. I don't know an easy way to do this from a python script, since it's a bash builtin and you can't modify the existing parent shell from a child script, but it will be available for all subsequent shells you open since they will source the bashrc.
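One way to at least demonstrate the alias from Python is to run it in a child bash. Two gotchas this sketch works around: non-interactive shells neither read ~/.bashrc nor expand aliases, so here the alias is defined inline (in real use you would source the bashrc instead) and expansion is switched on explicitly:

```python
import subprocess

# Non-interactive bash skips ~/.bashrc and alias expansion, so for a
# scripted demo we define the alias ourselves and enable expansion.
script = "\n".join([
    "shopt -s expand_aliases",
    'alias hello="echo hello world"',   # in real use: source ~/.bashrc here
    "hello",
])
result = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
print(result.stdout.strip())
```

The newlines matter: bash expands aliases when it reads a line, so the alias must be defined on an earlier line than the one that uses it.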
EDIT:
A slightly more elegant solution:
import os
import re

alias = 'alias hello="echo hello world"'
pattern = re.compile(alias)
homefolder = os.path.expanduser('~')
bashrc = os.path.abspath('%s/.bashrc' % homefolder)

def appendToBashrc():
    with open(bashrc, 'r') as f:
        lines = f.readlines()
    for line in lines:
        if pattern.match(line):
            return
    out = open(bashrc, 'a')
    out.write('\n%s' % alias)
    out.close()

if __name__ == "__main__":
    appendToBashrc()
tetrix in Scala: day 11
Yesterday we wrote a test harness to automate scripted games to tune various components of the heuristic function. The overall performance improved from 7 +/- 2 lines to 34 +/- 18, almost 5x improvement.
HAL moment
I decided to watch a game through on the swing UI, since I had mostly been script testing. From the beginning I could feel the improvement in game quality, as the agent kept the blocks low and kept deleting lines. After it went past 60 lines, it made a few mistakes and the blocks started to stack up to maybe the 10th row, but nothing unmanageable. Then, all of a sudden, the agent started dropping the pieces, one after another.
It was as if the agent suddenly gave up the game. Later during the day I realized that it had likely hit a timeout in one of the actors.
Instead of telling the agent to think at a regular interval, let's let it think as long as it wants to. To be fair to human response time, let's throttle an action to around 3 per second.
sealed trait GameMasterMessage
case object Start

class GameMasterActor(stateActor: ActorRef, agentActor: ActorRef) extends Actor {
  def receive = {
    case Start => loop
  }
  private[this] def loop {
    val minActionTime = 337
    var s = getState
    while (s.status != GameOver) {
      val t0 = System.currentTimeMillis
      agentActor ! BestMove(getState)
      val t1 = System.currentTimeMillis
      if (t1 - t0 < minActionTime) Thread.sleep(minActionTime - (t1 - t0))
      s = getState
    }
  }
  private[this] def getState: GameState = {
    val future = (stateActor ? GetState)(1 second).mapTo[GameState]
    Await.result(future, 1 second)
  }
}
In order to slow down the game a bit, let's substitute a Tick for a Drop:
class AgentActor(stageActor: ActorRef) extends Actor {
  private[this] val agent = new Agent
  def receive = {
    case BestMove(s: GameState) =>
      val message = agent.bestMove(s)
      if (message == Drop) stageActor ! Tick
      else stageActor ! message
  }
}
To prevent the agent from taking too long to think, let's cap it at 1000 ms.
val maxThinkTime = 1000
val t0 = System.currentTimeMillis
...
nodes foreach { node =>
  if (System.currentTimeMillis - t0 < maxThinkTime)
    actionSeqs(node.state) foreach { seq =>
      ...
    }
  else ()
}
man vs machine
Now that the agent is tuned, the next logical step is to play against the human. Let's set up two stage actors with identical initial state. One controlled by the player, and the other controlled by the agent.
private[this] val initialState = Stage.newState(Nil,
  (10, 23), Stage.randomStream(new util.Random))
private[this] val system = ActorSystem("TetrixSystem")
private[this] val stateActor1 = system.actorOf(Props(new StateActor(
  initialState)), name = "stateActor1")
private[this] val stageActor1 = system.actorOf(Props(new StageActor(
  stateActor1)), name = "stageActor1")
private[this] val stateActor2 = system.actorOf(Props(new StateActor(
  initialState)), name = "stateActor2")
private[this] val stageActor2 = system.actorOf(Props(new StageActor(
  stateActor2)), name = "stageActor2")
private[this] val agentActor = system.actorOf(Props(new AgentActor(
  stageActor2)), name = "agentActor")
private[this] val masterActor = system.actorOf(Props(new GameMasterActor(
  stateActor2, agentActor)), name = "masterActor")
private[this] val tickTimer1 = system.scheduler.schedule(
  0 millisecond, 701 millisecond, stageActor1, Tick)
private[this] val tickTimer2 = system.scheduler.schedule(
  0 millisecond, 701 millisecond, stageActor2, Tick)
masterActor ! Start
def left()  { stageActor1 ! MoveLeft }
def right() { stageActor1 ! MoveRight }
def up()    { stageActor1 ! RotateCW }
def down()  { stageActor1 ! Tick }
def space() { stageActor1 ! Drop }
Currently view returns only one view. We should modify this to return a pair.
def views: (GameView, GameView) =
  (Await.result((stateActor1 ? GetView).mapTo[GameView], timeout.duration),
   Await.result((stateActor2 ? GetView).mapTo[GameView], timeout.duration))
Next, the swing UI need to render both the views.
def onPaint(g: Graphics2D) {
  val (view1, view2) = ui.views
  val unit = blockSize + blockMargin
  val xOffset = mainPanelSize.width / ...
}
def drawStatus(g: Graphics2D, offset: (Int, Int), view: GameView) {
  val unit = blockSize + blockMargin
  g setColor bluishSilver
  view.status match {
    case GameOver => g drawString ("game over", offset._1, offset._2 + 8 * unit)
    case _ => // do nothing
  }
  g drawString ("lines: " + view.lineCount.toString, offset._1, offset._2 + 7 * unit)
}
Since drawBoard was refactored out, this was simple.
We can let GameMasterActor be the referee and determine the winner when the other side loses.
case object Victory extends GameStatus
...
class GameMasterActor(stateActor1: ActorRef, stateActor2: ActorRef,
    agentActor: ActorRef) extends Actor {
  ...
  private[this] def getStatesAndJudge: (GameState, GameState) = {
    var s1 = getState1
    var s2 = getState2
    if (s1.status == GameOver && s2.status != Victory) {
      stateActor2 ! SetState(s2.copy(status = Victory))
      s2 = getState2
    }
    if (s1.status != Victory && s2.status == GameOver) {
      stateActor1 ! SetState(s1.copy(status = Victory))
      s1 = getState1
    }
    (s1, s2)
  }
}
We need to display the status on the UI:
case Victory => g drawString ("you win!", offset._1, offset._2 + 8 * unit)
And this is how it looks:
attacks
Currently
-b try/day11
$ sbt "project swing" run
Hi,
Is it possible to get the gradient (i.e. the derivative with respect to each independent degree of freedom) of the energy score evaluated on a given protein pose? It seems like the MinMover subroutine moves the pose in a small step in the direction of this gradient, meaning the gradient is implicit in the Rosetta code. I'd like to access the value of the gradient itself at any fixed pose, for example in terms of dihedral or Cartesian coordinates. Technically one could numerically approximate the gradient by perturbing each degree of freedom (say, a dihedral angle) one at a time and seeing how the energy function changes, but this is a very inefficient solution given the high dimensionality of the situation.
Since the formula for energy is very complicated and probably a nightmare to differentiate by hand, I'd be quite content to have the energy function in a complete explicit form so that I could enter it into an automatic differentiation software package (of which there are many good ones these days). So far most of the references I've seen in the Rosetta literature give a high-level explanation of each term but not quite an explicit form (or the various constants and so on needed to get that explicit form).
Thanks!
Kyler
I'm not going to actually answer your question, but I'll explain a little of how the code works.
The score function is a container full of EnergyMethods. Each EnergyMethod will provide a score AND the derivatives. So, Rosetta already has analytic derivatives baked in for most scorefunction terms. (There are a few that either can't have derivatives, or don't bother to define them; those obviously perform poorly in minimization). You don't need any help outside of Rosetta to get the gradients.
As to how to access the derivatives in PyRosetta - that I don't know. I would crack open the minimizer in the C++ code to start figuring it out, which clearly isn't helpful to you. There are wiki pages on and but they don't really help at the code level.
This isn't something that has easy access in Rosetta. Part of the reason for that is that Rosetta's minimization machinery uses the formalism of Abe, Braun, Noguti and Go (1984 Computers & Chemistry 8(4) pp. 239-247) for the efficient calculations of derivatives with respect to torsional minimization. This optimization means that the native derivatives of EnergyMethod objects (F1/F2 vectors) are less than easily interpretable.
I'm guessing that to get some of the information you need, you're going to have to pull apart some of the minimization machinery.
The first thing you want to do is set up the minimization:
To this point this is basically a minimial alteration of how the standard minimizer sets things up. At this point we skip the minimization process, and just pull out the derivatives.
At this point, dE_dvars (which is actually just a utility::vector1< Real>) should contain the analytical derivatives for each degree of freedom. If you have more than one DOF enabled in the minimization, you should be able to call min_map.dof_nodes() to get a parallel list of DOF details. This object should have details about the DOF (including residue&atom number and the DOF_type [torsion/angle/length])
That's for the standard (internal coordinate) MinimizerMap. The CartesianMinimizerMap works slightly differently, where you can call min_map.get_atom(n) to get the core.id.AtomID (residue&atom number) for the atom which corresponds to the 3N-2, 3N-1 and 3Nth entry (x, y, and z, respecitively, I believe) in the dE_dvars Multivec.
Note that I haven't tested *any* of this in a PyRosetta context, and am only marginally sure of how this work in the C++ context. There's likely to be a large amount of fiddling to get things to work. I'd also highly recommend that you run a fair number of tests (e.g. by comparing simple systems with manually computed numerical derivatives) to make sure that the numbers you're getting out match what you think they should be and there isn't a bug in your implementation or in the descriptions I gave to you.
Thanks for the replies! @rmoretti, I wasn't able to get any of the code with Multivec to work. Do you know how I can import it or otherwise get it to run?
What sort of errors are you getting? If it's simply because the core.optimization.Multivec name isn't recognized, you can just use the PyRosetta equivalent of `utility::vector1< Real >`, which I think would be as simple as
Great, that works! I checked at least the numerical partial derivatives in each coordinate and they seem to be consistent with what dE_dvars outputs.
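That comparison is worth wrapping in a reusable helper. Here is a PyRosetta-free central-difference checker sketch (you would pass multifunc as f and a small wrapper around dfunc as grad_f; the quadratic at the end is just a self-test, not a Rosetta energy):

```python
import numpy as np

def check_gradient(f, grad_f, x, eps=1e-5, tol=1e-4):
    """Compare an analytic gradient against central finite differences at x.
    Returns (ok, worst_abs_difference)."""
    x = np.asarray(x, dtype=float)
    g = np.asarray(grad_f(x), dtype=float)
    num = np.empty_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # central difference: (f(x+e) - f(x-e)) / (2e)
        num[i] = (f(x + step) - f(x - step)) / (2 * eps)
    worst = float(np.max(np.abs(num - g)))
    return worst < tol, worst

# Self-test on a quadratic "energy" with a known gradient:
ok, err = check_gradient(lambda v: float(v @ v), lambda v: 2 * v,
                         np.array([0.3, -1.2, 2.0]))
```

Central differences converge faster than the one-sided scheme used in the script above, so the tolerance can be tighter.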
Actually, while Rosetta's derivatives seem to agree with the numerical derivatives for the full-atom score function, they don't seem to agree anymore when I use the centroid version. Any reason why that might be? Here's the pyrosetta code I'm trying.
from pyrosetta import *
init()
p = pose_from_pdb('1qlq.clean.pdb')
switch = SwitchResidueTypeSetMover('centroid')
switch.apply(p)
from pyrosetta.teaching import *
sf = get_cen_scorefxn()
start_score = sf(p)
movemap = MoveMap()
movemap.set_bb(True)
minmap = rosetta.core.optimization.MinimizerMap()
minmap.setup(p,movemap)
sf.setup_for_minimizing(p,minmap)
multifunc = rosetta.core.optimization.AtomTreeMultifunc(p,minmap,sf)
num_dofs = minmap.nangles()
x = Vector1([0.0]*num_dofs)
minmap.copy_dofs_from_pose(p,x)
E = multifunc(x)
dE_dx = Vector1([0.0]*num_dofs)
multifunc.dfunc(x,dE_dx)
print 'rosetta dE_dx: '
print dE_dx
eps = 0.001
numerical_derivs = []
for i in range(100):
    x_pert = Vector1(list(x))
    x_pert[1+i] += eps
    E_pert = multifunc(x_pert)
    numerical_derivs.append((E_pert-E)/eps)
numerical_derivs = Vector1(numerical_derivs)
print 'first 100 numerical derivs: '
print numerical_derivs
The standard centroid scorefunction (and the one you're getting with get_cen_scorefxn()) is set up for point evaluation, not minimization. Generally speaking, most of the time that you're using a centroid scorefunction you're doing a coarse Monte Carlo sampling and don't need to do derivatives, so many of the centroid scoreterms are set up as binned lookup evaluations rather than as continuous functions. As such, there are discontinuities and other issues which mean that the derivatives aren't sensible.
That said, there are "smooth" versions of centroid scoreterms which are set up for minimization. Basically, the smoothed versions take the binned statistical data the regular centroid scorefunctions are based on and do things like fit splines to it, such that you get reasonable derivative behavior.
If you're looking for derivatives in centroid mode, I'd recommend using something like the `score4_smooth` weights file (`create_score_function("score4_smooth")`). Or, if you're interested in getting something reasonably close to what you get with get_cen_scorefxn(), use the `cen_std_smooth` weights with the `score4L` patch. (`create_score_function("cen_std_smooth","score4L")`).
IRC log of tagmem on 2003-07-22
Timestamps are in UTC.
15:14:05 [RRSAgent]
RRSAgent has joined #tagmem
15:16:26 [DanC-AIM]
DanC-AIM has joined #tagmem
15:17:10 [DanC-AIM]
Hi from Rosie's.
15:17:27 [DanC-AIM]
Ping?
15:18:34 [TBray]
TBray has joined #tagmem
15:19:09 [IanYVR]
Hi Dan
15:20:44 [Norm]
Norm has joined #tagmem
15:22:19 [DanC-AIM]
Hi. I'm walking over.
15:22:20 [Roy]
Roy has joined #tagmem
15:22:38 [DanC-AIM]
Zakim, who's here?
15:22:38 [Zakim]
sorry, DanC-AIM, I don't know what conference this is
15:22:39 [Zakim]
On IRC I see Roy, Norm, TBray, DanC-AIM, RRSAgent, Zakim, IanYVR
15:23:33 [IanYVR]
Roll call: TBL, NW, TB, RF, DO, PC, SW, IJ
15:23:51 [IanYVR]
Agenda:
15:24:05 [IanYVR]
Section 4 of agenda
15:24:18 [Stuart]
Stuart has joined #tagmem
15:24:23 [IanYVR]
TBray: I think that for issues 7, 20, 24, 31, 37, there is enough info in arch doc.
15:25:57 [Norm]
Norm has changed the topic to:
15:26:07 [IanYVR]
[Process discussion around last call]
15:26:59 [IanYVR]
PC: Need to schedule last call with groups where there are dependencies
15:27:37 [DanC-AIM]
Hmm... How to ask IETF to review it?
15:29:04 [DaveO]
DaveO has joined #tagmem
15:29:07 [IanYVR]
TBL: If there's a group where you know there are issues, resolve those before last call.
15:29:36 [IanYVR]
TBL: Don't say "we think we're done" while people are still banging on the document on the list.
15:30:42 [IanYVR]
TBray: If we wait to go to last call before reaching consensus with web ont folks, that's going to take a long time.
15:31:03 [IanYVR]
TBL: We have an elephant in the room. People are telling us we're not using terms consistently.
15:31:28 [DanC-AIM]
same room today?
15:31:35 [IanYVR]
TBL: Namely, "resource" is being used in two different ways that makes the document unreadable (issue 14).
15:31:53 [DanC-AIM]
Should I ring the schemasoft door or the antarctica door?
15:32:07 [IanYVR]
You can get all the way to the door of the ping pong room
15:32:24 [IanYVR]
DO: I think you can elicit the problem without solving.
15:32:37 [DanC-AIM]
From the entrance on homer?
15:32:39 [IanYVR]
Yes
15:33:16 [IanYVR]
PC: Unlike other groups, we will have open issues when we go to last call.
15:33:46 [IanYVR]
[Chris joins]
15:33:52 [DaveO]
q+
15:34:08 [IanYVR]
[DanC joins]
15:34:47 [Stuart]
q?
15:35:46 [Stuart]
ack Dave0
15:36:03 [TBray]
q+
15:36:33 [IanYVR]
DO: I think we need to explain to people why arch doc is different from other specs.
15:36:48 [IanYVR]
CL: The model is that we have a stream of issue and we gradually refine the document over time.
15:37:13 [Stuart]
q?
15:37:21 [Stuart]
ack TBray
15:37:59 [IanYVR]
TBray: I think that we still can benefit the community by publishing something that's not complete.
15:38:08 [DavidOrch]
DavidOrch has joined #tagmem
15:38:14 [DavidOrch]
q?
15:38:29 [Stuart]
ack Dave0
15:39:08 [IanYVR]
DO: Need to explain why we chose to stop where we did for v1.
15:40:04 [IanYVR]
PC: Need to explain to people also that we expect to skip CR.
15:40:19 [DavidOrch]
And say that there will be a V2..
15:40:24 [DanC_jam]
DanC_jam has joined #tagmem
15:40:52 [IanYVR]
TBray: We could call this web arch level 0
15:42:03 [IanYVR]
PC: Question of whether to create a new mailing list just for last call comments.
15:42:21 [DavidOrch]
q-
15:42:39 [IanYVR]
q- DaveO
15:42:55 [IanYVR]
PC: I prefer a separate comments list.
15:43:04 [IanYVR]
SW: Me too, with discussion on www-tag
15:43:22 [IanYVR]
TBray: It's going to be hard to keep discussion from going on the last call list.
15:44:37 [IanYVR]
PC: Create a moderated list where every message must be approved.
15:45:50 [IanYVR]
DC: I agree that cross-posted threads are a pain.
15:46:58 [DanC_jam]
hmm... a moderated list isn't a bad idea... we'll pretty much have to do the equivalent of moderating it anyway
15:47:35 [Norm]
Norm has joined #tagmem
15:48:45 [IanYVR]
"www-tag-review"
15:48:53 [IanYVR]
s/www/public
15:49:03 [DanC_jam]
yeah... public-webarch-comments@w3.org
15:49:55 [IanYVR]
SW: Cost of going to last call on group: increased tracking of issues on doc, ongoing agenda item
15:50:22 [IanYVR]
TBL: It's my opinion that the document is imperfect and we're saying we want to take to last call anyway.
15:50:33 [IanYVR]
TBray: It's incomplete, but I think good enough to go to last call.
15:51:03 [IanYVR]
RF: Last call fine with me. I think it needs a lot of work, but I think it's useful for the process to put out a draft.
15:51:17 [TBray]
TBray: What we have is consistent and essentially correct, but not complete
15:51:22 [IanYVR]
TBL: That's not a last call draft, that's a draft.
15:51:30 [IanYVR]
[TBL comment to RF]
15:52:49 [DanC_jam]
(where are we in the agenda?)
15:53:44 [IanYVR]
Tues morning: Arch Doc
15:55:33 [TBray]
q+
15:55:52 [TBray]
q+ Paul
15:56:13 [IanYVR]
TBL: (Re httpRange-14) I feel that if we don't address this, this will be like the effect of the XML Namespaces Rec ambiguity.
15:56:42 [DaveO]
q+
15:56:44 [DanC_jam]
ack danc'
15:56:47 [DanC_jam]
ack danc
15:56:47 [Zakim]
DanC_jam, you wanted to suggest that we *do* take advantage of Candidate Recommendation
15:57:10 [IanYVR]
DC: I think CR would be a good idea. The doc's intended to have an effect; we can test whether it has the intended effect.
15:57:13 [Chris]
Chris has joined #tagmem
15:57:21 [TimBL-YVR]
TimBL-YVR has joined #tagmem
15:57:22 [IanYVR]
PC: What's the success criterion?
15:58:09 [IanYVR]
DC: Yes, I think we can find groups to use the document.
15:58:23 [IanYVR]
TBL: XML Schema had a similar issue.
15:58:30 [Chris]
Chris has joined #tagmem
15:59:05 [IanYVR]
PC: The infoset spec from XML Core is an example of a spec that you don't implement; you reference normatively. The Core WG left CR when referred to normatively from other specs.
15:59:19 [IanYVR]
PC: Perhaps that's the best we can hope for.
15:59:23 [IanYVR]
ack TBray
15:59:45 [IanYVR]
TBray: I think we should go to last call sooner rather than later. I think a lot of what we've written hasn't been written down in one place before.
16:00:14 [IanYVR]
TBray: I also perceive that the areas where we lack consensus all involve a layer that is "above" where we are now.
16:00:29 [IanYVR]
TBray: These are additional constraints imposed on what we've said already. And layer cleanly.
16:00:52 [IanYVR]
TBray: While I accept that the document does not reflect a reality shared by some, I'm convinced that there's nothing in there that hinders them in their goals.
16:00:56 [TimBL-YVR]
q+
16:01:24 [Norm]
Norm has joined #tagmem
16:01:55 [IanYVR]
RF: Pat Hayes' comments were that RFC2396 covers an information space that is larger than that covered by the Arch Doc.
16:02:27 [TBray]
q+ Paul
16:03:53 [IanYVR]
RF: Pat is not "actually confused"; it's that the Web Arch document doesn't cover the entire space of the Web. He was saying that if you restrict your discussion to information resources, then the document makes sense. It also makes sense if you limit the scope to the information system that is the classic Web. But it doesn't if you include the infosystems that are the sem web or web services.
16:04:14 [IanYVR]
TBL: I believe that the document should not go to last call without this issue resolved.
16:04:16 [IanYVR]
ack Paul
16:04:40 [TimBL-YVR]
X-Archived-At:
16:04:48 [TimBL-YVR]
Pat's message
16:05:07 [IanYVR]
PC, summarizing his interpretation: There's a thread on www-tag that TBL thinks needs to be resolved before we go to last call. Do you think that thread is separable from httpRange-14?
16:05:09 [IanYVR]
TBL: No.
16:05:42 [IanYVR]
TBL: I think the change to the document is fairly easy: introduce "information resource" where appropriate.
16:08:07 [IanYVR]
TBL: I don't know what the solution to the issue is. Perhaps we could resolve Pat's issue without mentioning HTTP. I don't know what the form the result would take.
16:08:11 [TBray]
q+
16:08:28 [IanYVR]
DO: I'm disappointed that this is coming up.
16:09:08 [IanYVR]
DO: We told AC we were going to last call, agreed this would not be on the agenda, and now there's a sort of veto.
16:09:16 [Stuart]
ack DO
16:09:21 [DaveO]
q-
16:09:22 [Stuart]
ack DaveO
16:09:28 [DanC_jam]
ack timbl
16:09:31 [TimBL-YVR]
TBL: However, the issue is SO close to httpRange-14 that we couldn't discuss one without being allowed to discuss the other.
16:09:32 [IanYVR]
TBL: We could apologize up front that we use the term in two different ways. I might be able to live with last call if there's a red flag at the front of the document.
16:09:37 [Stuart]
ack TBray
16:10:24 [IanYVR]
TBray: I think that Pat Hayes' comment is wrong. I can produce counter-examples. I think his assertion that we are using the term "resource" in two different ways is wrong. I think the document is consistent.
16:11:46 [IanYVR]
ack DanC
16:11:46 [Zakim]
DanC_jam, you wanted to say I disagree with PatH as well; the webarch doc is consistent; but I think it would be cost effective to try the 'information resource' edit; not that
16:11:50 [Zakim]
... costly and could satisfy a lot more readers
16:12:06 [IanYVR]
DC: I disagree with Pat as well. I still think it would be cost-effective to talk about both classes of resources. Lots of people have that angst and that's our audience.
16:12:27 [Chris]
is that just renaming one use, or both uses?
16:12:32 [IanYVR]
q?
16:13:01 [IanYVR]
[Break]
16:20:17 [Norm]
Norm has joined #tagmem
16:50:15 [IanYVR]
[Resume]
16:52:12 [IanYVR]
TBray: I've seen no evidence to convince me that if we proceed with this draft, we are cutting off options.
16:52:44 [IanYVR]
TBray: We don't say much about "what a resource is"; we impose no constraints.
16:52:47 [Norm]
Norm has joined #tagmem
16:52:58 [IanYVR]
TBray: People point out that in the real world there are constraints.
16:53:10 [IanYVR]
TBray: We don't say that and I think that we're right not to make that distinction.
16:53:26 [IanYVR]
TBray: There are a number of taxonomies we could choose for categorizing resources.
16:53:47 [IanYVR]
TBray: I can give you examples of things that are resources but you'd have to stretch to think that they're information resources.
16:54:11 [IanYVR]
TBray: There are other taxonomies that are at least as interesting: class of resources published by hostile govts. for example.
16:54:25 [DaveO]
DaveO has joined #tagmem
16:54:26 [IanYVR]
TBray: I agree that we need a better way to talk about taxonomies of URIs.
16:54:31 [DaveO]
q?
16:54:44 [IanYVR]
TBray: Our formalism is well defined: URI, resource, representation.
16:54:50 [DaveO]
q+
16:55:10 [IanYVR]
TBray: The Web today doesn't have a way to talk about whether something is an information resource, and the software all works fine.
16:55:36 [IanYVR]
TBray: I think that the document is well-enough along, passes the minimal progress necessary to declare victory.
16:55:52 [IanYVR]
TBray: We can't ignore the angst; we need to say something about it, but we don't need to make a big change.
16:56:03 [IanYVR]
ack DaveO
16:56:31 [IanYVR]
DO: I think most of the TAG feels we don't need to solve httpRange-14 before going to last call. Clearly TBL does.
16:57:13 [TBray]
nope
16:57:25 [TBray]
q?
16:57:43 [IanYVR]
[Process discussion]
17:00:29 [IanYVR]
DO: I have concerns about the process. What does my vote mean? TBL has the last word anyway.
17:00:35 [IanYVR]
DC: Yes, and you signed up for the group knowing that.
17:00:49 [IanYVR]
TBL: I am definitely uncomfortable when my technical role and my process role overlap.
17:01:04 [IanYVR]
DO: We are trying hard not to put TBL in that position.
17:01:08 [Stuart]
q+
17:01:13 [TBray]
q+ Stuart
17:01:29 [TBray]
q+ Stuart
17:01:33 [IanYVR]
TBL: I have avoided talking about an issue that I think is fundamental for the last year. I've not acted in my role as Director as I'd like the group to reach consensus.
17:01:42 [IanYVR]
TBL: I'm not sure that ignoring the issue is the solution.
17:01:44 [IanYVR]
ack Stuart
17:02:01 [IanYVR]
SW: I note that httpRange-14 is open (from Feb 2003) even if not on this agenda.
17:02:42 [DaveO]
I also said that we just may have to vote and then live with a Director prohibiting Last Call publication, if he chooses to exercise that authority.
17:03:01 [IanYVR]
q?
17:03:43 [IanYVR]
PC: Director doesn't gate advancement to Last Call.
17:06:08 [IanYVR]
Straw poll: Does TAG wish to advance arch doc to last call substantially as is (with some editorial changes).
17:08:15 [IanYVR]
DC: I'm not satisfied with how issue 20 is handled in 16 July draft
17:08:53 [IanYVR]
IJ: See "User agents should detect such inconsistencies but should not resolve them without involving the user (e.g., by securing permission or at least providing notification). User agents must not silently ignore authoritative server metadata."
17:09:02 [IanYVR]
DC: That's not enough about error-handling.
17:09:39 [IanYVR]
DC: Before reviewing the doc in substance, I'm not prepared to say "go forward"
17:10:51 [IanYVR]
Straw poll: 5 move forward, 1+.5+.5 against, 1 abstain
17:12:22 [Stuart]
q?
17:12:24 [DanC_jam]
DanC: I want the arch doc to say "silent recovery from errors considered harmful" and say that louder than "data format specs should say what to do in the presence of errors"
17:12:34 [Stuart]
q?
17:13:54 [IanYVR]
RF: It's hard to write down principles of things that have not been deployed.
17:14:30 [IanYVR]
Review of [16 July draft]
17:15:08 [IanYVR]
1. Introduction
17:15:38 [IanYVR]
DC: Lots of terms get bolded. Please bold "Web".
17:15:48 [IanYVR]
DC: What are we doing with Editor's notes?
17:15:57 [IanYVR]
"Editor's note: Todo: Introduce notions of client and server. Relation of client to agent and user agent. Relation of server to resource owner."
17:16:05 [IanYVR]
DC: I don't think that that's critical.
17:16:13 [IanYVR]
TBray: Seems appropriate for section 4 when it arrives.
17:16:15 [IanYVR]
DC: I like the intro.
17:16:17 [IanYVR]
TBray: Me too.
17:16:43 [IanYVR]
1.1. About this Document
17:17:23 [IanYVR]
RF: I don't understand why there's a 1.1 and 1.1.1
17:18:07 [IanYVR]
Idea: create 1.1.x on intended audience or drop the 1.1.1 subheading.
17:18:43 [Chris]
q+ to worry about "3. The representation consists those bits that would not change regardless of the transfer protocol used to exchange them."
17:18:46 [IanYVR]
IJ: I will try to integrate scenario more into section 2.
17:19:17 [IanYVR]
TBray: s/The TAG/This document is intended to inform...
17:19:32 [IanYVR]
(same for second "TAG" instance later in third para of 1.1.1)
17:19:38 [IanYVR]
DC: Paste some of that into status section.
17:19:49 [IanYVR]
2. Identification and Resources
17:19:51 [Chris]
q+ to note objection on record to "User agents must not silently ignore authoritative server metadata."
17:20:21 [IanYVR]
TBray: Need to reword Principle: "Use URIs: All important resources SHOULD be identified by a URI."
17:20:38 [IanYVR]
DC: The doc doesn't say "if it doesn't have a URI, that doesn't mean it's not a resource."
17:20:58 [Chris]
q+ to request clarification on "3.2 and semantics"
17:21:32 [IanYVR]
RF: I have a rewrite of paragraph "Although there's not precise..."
17:23:13 [IanYVR]
RF: Don't mix up "identity" and "identify". Something can have identity (or many identities). There are means of identification (N-to-1) to those things.
17:23:34 [IanYVR]
TBray: I would be comfortable saying "URIs should be assigned for all important resources."
17:23:53 [Chris]
q+ to agree with dan about "Specifications of data formats SHOULD be clear about behavior in the presence of errors. It is reasonable to specify that errors should be worked around, or should result in the termination of a transaction or session. It is not acceptable for the behavior in the face of errors to be left unspecified."
17:23:53 [IanYVR]
[Discussion of identify/denote/name]
17:24:13 [IanYVR]
TBL: Not all URIs are necessarily assigned.
17:24:16 [IanYVR]
(e.g., hashes)
17:24:23 [IanYVR]
TBL: Only in delegated spaces.
17:24:27 [IanYVR]
RF: That's still assignment.
17:24:45 [IanYVR]
TBray: Right, at the end of the day you end up with a string that has been assigned to some resource.
17:25:15 [IanYVR]
DC: I support TB's proposed wording change.
17:25:28 [IanYVR]
TBray: The reason I'm proposing this is to stay away from the word "identity".
17:25:36 [Norm]
Norm has joined #tagmem
17:26:04 [IanYVR]
DC: "Assign" is useful. The question is WHO should do something. I think therefore that "assign" is a step in the right direction.
17:26:36 [IanYVR]
DC: The idea is that if everybody shares, we all win.
17:26:42 [Stuart]
q?
17:27:16 [IanYVR]
TBL: It would help a lot if we say "Identifier" here is used in the sense of "naming".
17:27:32 [IanYVR]
TBL: The difference is that Tim Bray is named by "Tim Bray", though he can also be identified by his flashy shirt.
17:28:37 [IanYVR]
TBray: I would be comfortable saying "using id in the sense of name". I am more worried about "denote".
17:28:39 [IanYVR]
DC: I concur.
17:28:50 [IanYVR]
DC: I think "name" helps and "denote" doesn't (without lots of explanation).
17:29:02 [IanYVR]
DC: Actually, maybe "denote" would be incorrect.
17:29:37 [IanYVR]
[We read a paragraph that RF has written on this section]
17:30:57 [IanYVR]
PC: I am wondering whether we might say something nearer to the top about "stable" URIs.
17:32:26 [IanYVR]
PC: Feature of "stability" is also an aspect of importance.
17:32:34 [Chris]
it all hinges on an appropriate definition of 'consistency'
17:33:12 [IanYVR]
TBL: I'm not happy with RF's text.
17:33:36 [TimBL-YVR]
s/produce or consume/convey/
17:34:45 [IanYVR]
TBL: RF's text doesn't address sem web resources.
17:34:47 [TBray]
q+
17:35:11 [DanC_jam]
ack chris
17:35:11 [Zakim]
Chris, you wanted to worry about "3. The representation consists those bits that would not change regardless of the transfer protocol used to exchange them. and to note objection
17:35:14 [Zakim]
... on record to "User agents must not silently ignore authoritative server metadata. and to request clarification on "3.2 and semantics" and to agree with dan about
17:35:17 [Zakim]
... "Specifications of data formats SHOULD be clear about behavior in the presence of errors. It is reasonable to specify that errors should be worked around, or should result in
17:35:21 [Zakim]
... the termination of a transaction or session. It is not acceptable for the behavior in the face of errors to be left unspecified."
17:35:40 [IanYVR]
TBray: General remark on the document - People are going to take this document seriously. There will be lots of debates.
17:35:54 [IanYVR]
TBray: One of the ways we should be careful is to take out sentences that don't need to be here.
17:36:03 [Chris]
rrsagent, pointer?
17:36:03 [RRSAgent]
See
17:36:03 [TimBL-YVR]
q+
17:36:12 [IanYVR]
TBray: Every sentence that is not contentful should be removed.
17:36:47 [IanYVR]
TBL: I feel that the namespaces spec would have been improved on if some of those sentences had not been removed. I don't want us to follow that path.
17:37:10 [IanYVR]
q+ to talk about "on the Web" in RF's text.
17:37:32 [IanYVR]
RF: Delete "transclude all or part of it into another resource." You can do transclusion without a URI.
17:37:47 [IanYVR]
RF: Transclusion is not a rationale for many people.
17:37:51 [IanYVR]
DC: I disagree.
17:38:00 [IanYVR]
TBray: Purists will debate our use here.
17:38:13 [IanYVR]
RF: Say instead "include by reference."
17:38:15 [IanYVR]
DC: Yes.
17:38:28 [IanYVR]
RF: Delete last para of 2 (before 2.1)
17:39:58 [IanYVR]
TBL: The second paragraph of 2 is where you would put in the distinction about an information resource.
17:40:49 [IanYVR]
TBL: "A resource can be anything. Certain resources convey information when a resource has a link to another one."
17:40:53 [IanYVR]
q?
17:41:12 [IanYVR]
TBray: Would it meet TBL's needs to ack the class of information resources?
17:42:29 [IanYVR]
TBray: I suggest that we say that the universe of resources has a subset which we will call "information resources" that convey information. And stop there. We ack the distinction but don't put all HTTP URIs on one side of the border or the other.
17:42:36 [IanYVR]
CL: Add "electronic"?
17:42:44 [IanYVR]
TBL: No, could transfer via light, for example.
17:43:03 [Chris]
not about transfer, about category of information
17:43:07 [Chris]
but okay
17:43:33 [IanYVR]
IJ: About "on the Web"
17:43:47 [IanYVR]
TBray: I think "information resource" is isomorphic to DO's concept of "on the Web"
17:44:43 [IanYVR]
[Question of whether "on the Web" means "really does have a representation that is available"]
17:45:09 [IanYVR]
TBL: From the semantic Web point of view, things are on the semantic Web from the moment you use the URI.
17:45:23 [IanYVR]
TBL: But in the common parlance, I think "it really does work" for information objects.
17:45:34 [IanYVR]
CL: In the common parlance, electronic is also understood...
17:45:48 [Chris]
the common parlance thus applies exclusively to electronic information objects
17:46:49 [IanYVR]
DO: I want "information resource" connected to "on the Web" in the document.
17:46:56 [TimBL-YVR]
URIs identify resources. A resource can be anything. Certain resources convey information; these are termed information resources. Much of this document discusses information resources, often using the term resource.
17:47:33 [DanC_jam]
Bray: I like that paragraph
17:48:07 [DanC_jam]
DanC: it doesn't discuss "on the web". hmm...
17:48:10 [TimBL-YVR]
An information resource is on the Web when it can be accessed in practice.
17:48:13 [Chris]
last sentence, change to "Much of this document, while discussing resources in general, applies primarily or most obviously to information resources"
17:48:27 [TBray]
+1 to Chris
17:48:34 [Chris]
changes from an apology to an explanation
17:49:02 [IanYVR]
IJ: I can work with "on the Web" as a parenthetical remark tying into other terms, but I don't think it needs to identify a formal part of the architecture and I don't imagine it being used later in the document.
17:49:22 [TimBL-YVR]
q+
17:49:24 [IanYVR]
DO: I think the definition should stay away from actual availability.
17:49:38 [Chris]
ian, WSA does indeed want to use it as a defined term, I understand
17:49:51 [IanYVR]
DO: Fine by me to say that there's a general expectation that a representation will be available.
17:50:02 [DaveO]
Indeed, SOAP 1.2 does use the term "on the web".
17:51:28 [Roy]
I don't think that there is an information resource and non-information resource. I think that some resources are accessible on the Web information system and others are not (or are only indirectly accessible). That is because anything that has state is an information-bearing resource, but you may not have access to that information.
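An editorial aside: the taxonomy being debated above can be sketched as a minimal type hierarchy. This is purely illustrative; the class and attribute names are hypothetical and not taken from the Architecture document.

```python
# Illustrative sketch only: every information resource is a resource,
# but not every resource is an information resource.

class Resource:
    """Anything that can be identified by a URI."""
    def __init__(self, uri):
        self.uri = uri

class InformationResource(Resource):
    """A resource that conveys information via representations."""
    def __init__(self, uri, representation=None):
        super().__init__(uri)
        self.representation = representation

    def on_the_web(self):
        # Per TBL's working definition above: an information resource is
        # "on the Web" when a representation can be accessed in practice.
        return self.representation is not None

report = InformationResource("http://example.org/report", b"<html>...</html>")
moon = Resource("http://example.org/moon")

assert isinstance(report, Resource)            # information resources are resources
assert not isinstance(moon, InformationResource)
assert report.on_the_web()
```

Roy's objection below (that anything with state is information-bearing, just not always accessible) would collapse this two-class split into a single class with an accessibility property.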
17:52:27 [DanC_jam]
ack danc
17:52:27 [Zakim]
DanC_jam, you wanted to ask for the figure to go in section 1 or 2; it has the word "identifies"
17:52:28 [IanYVR]
q- IanYVR
17:52:54 [IanYVR]
DC: The word "identify" is used in the illustration I'd like to see in section 1 or 2. I'd like the label "identifies" in the figure.
17:53:04 [IanYVR]
RF: That is Pat Hayes' objection. You can argue it with him.
17:53:09 [IanYVR]
Review of diagram
17:53:22 [IanYVR]
from SW
17:54:16 [Chris]
17:54:30 [IanYVR]
PC: What does "is a" mean?
17:54:37 [IanYVR]
DC: I believe it's clear enough.
17:54:47 [IanYVR]
TB, RF: Diagram doesn't add much.
17:55:33 [Chris]
actually,
17:55:36 [Chris]
17:55:44 [IanYVR]
TBray: Change camel case to English
17:55:45 [TimBL-YVR]
17:55:45 [Chris]
no good reason that is not public
17:55:46 [IanYVR]
[Support for that]
17:57:46 [IanYVR]
IJ: I would like to simplify diagram by removing dotted arrows
17:58:35 [IanYVR]
DC: It's critical that there be three things in the diagram. A lot of people miss that point.
18:00:20 [IanYVR]
Action CL: Redraw diagram with (1) English words (2) no more isa arrows; just label objects
18:01:23 [Chris]
ok since we just redrew it on the whiteboard, I won't send the old one to the public list but instead, the simplified new one
18:03:00 [DanC_jam]
Ian: yes, I intend to make "on the web" a term
18:03:08 [IanYVR]
TBray: Please lose "exponentially" in para 2.
18:03:23 [IanYVR]
DC: The point is that it's non-linear.
18:03:30 [IanYVR]
TBL: use "dramatically"
18:03:57 [IanYVR]
DC: There's a lot of data to back up "exponential"
18:04:20 [IanYVR]
[No change]
18:04:26 [TimBL-YVR]
18:04:31 [IanYVR]
[i.e., leave "exponential"]
18:05:04 [IanYVR]
2.1. Comparing Identifiers
18:05:19 [IanYVR]
RF: I think this isn't true: "An important aspect of communication is to be able to establish when two parties are talking about the same thing."
18:05:43 [IanYVR]
RF: It's an important aspect of someone else observing that they are talking about the same thing.
18:06:09 [TBray]
18:06:37 [TBray]
2.1 Awkward start. Communication between parties is facilitated by the ability to establish when they are talking about the same thing. Then lose the second sentence.
18:07:16 [IanYVR]
TBL: Parties don't identify. They talk about or refer to. They don't identify.
18:07:43 [Chris]
+1 to timbl because context of use is important
18:08:38 [IanYVR]
Delete "In the context of the Web, this means when two parties identify the same resource."
18:08:49 [IanYVR]
TBL: Say "two parties are referring to the same resource" in following sentence.
18:08:58 [IanYVR]
DC: "Most straightforward' instead of "most common"
18:09:07 [IanYVR]
RF: Delete "Depending on the application, an agent may invest more processing effort to reduce the likelihood of a false negative (i.e., two URIs identify the same resource, but that was not detected)."
18:09:14 [IanYVR]
RF: It's covered better in the URI spec.
18:09:24 [IanYVR]
TBray: Anybody who cares about this really needs to read [URI]
18:09:28 [IanYVR]
DC: I'm happy to delete that sentence.
18:09:32 [DanC_jam]
pls change "The most common" to "The most straightforward"
18:10:28 [IanYVR]
TBL: I think that the last sentence is useful since it lets people know that there is a risk of false negatives.
18:10:48 [IanYVR]
TBray: Yes, it's worthwhile saying that false negs can occur; for details look at [URI]
18:12:30 [IanYVR]
CL: Some apps are more sensitive to false positives, some more sensitive to false negs; choose wisely.
18:14:29 [IanYVR]
Action IJ: Fidget with this text.
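An editorial aside on the comparison discussion above: a minimal sketch (not from the minutes) of how "most straightforward" string comparison produces false negatives, and how the kind of normalization the URI spec describes reduces them. The normalization here is deliberately partial (case-insensitive scheme and host, case-insensitive percent-encoding hex digits); see [URI] for the authoritative rules.

```python
import re

def simple_equal(a: str, b: str) -> bool:
    """The most straightforward comparison: same string of characters."""
    return a == b

def normalized(uri: str) -> str:
    """Partial, illustrative normalization; real rules are in the URI spec."""
    scheme, sep, rest = uri.partition(":")
    scheme = scheme.lower()                      # scheme compares case-insensitively
    if rest.startswith("//"):
        authority, slash, path = rest[2:].partition("/")
        rest = "//" + authority.lower() + slash + path   # so does the host
    # Percent-encoding hex digits compare case-insensitively: %7e == %7E
    rest = re.sub(r"%[0-9A-Fa-f]{2}", lambda m: m.group(0).upper(), rest)
    return scheme + sep + rest

# A false negative under simple comparison, avoided after normalization:
assert not simple_equal("HTTP://Example.org/%7euser", "http://example.org/%7Euser")
assert normalized("HTTP://Example.org/%7euser") == normalized("http://example.org/%7Euser")
```

As CL notes above, which comparison to invest in depends on whether the application is more sensitive to false positives or false negatives.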
18:14:49 [IanYVR]
Editor's note: Dan Connolly has suggested the term "coreference" instead of "equivalence" to communicate that two URIs are referring to the same resource.
18:14:59 [IanYVR]
DC: I can live without that change.
18:15:54 [TimBL-YVR]
"coreference" is in the same class as "denote" - we had decided not to use technical terms.
18:17:05 [IanYVR]
2.1. Comparing Identifiers
18:17:10 [IanYVR]
TBray: There are two side trips into URI opacity
18:17:36 [IanYVR]
TBray: I think that we need to discuss separately (1) comparing and (2) looking inside
18:17:59 [IanYVR]
NW: "In general, [do not] determine the meaning of a resource by inspection of a URI that identifies it."
18:18:13 [IanYVR]
NW: I'll provide words...
18:19:11 [IanYVR]
CL: "Although it's tempting to infer this by looking at the URI that it is about..."
18:19:33 [IanYVR]
TBL: "...not licensed by the specs..."
18:19:50 [TimBL-YVR]
<- updated
18:20:55 [IanYVR]
IJ: I'll try to create a section on opacity out of some text in 2.1
18:21:41 [IanYVR]
[Agenda comment: Substantial sentiment to continue walking through arch doc until we get done]
18:22:00 [DanC_jam]
[it seemed sufficient sentiment to consider it RESOLVED to me]
18:22:12 [IanYVR]
RF: Don't use the term "spelling" for URIs.
18:22:46 [IanYVR]
DC: The point is that the string has to have the same characters.
18:23:49 [IanYVR]
RF: Change good practice title to "Consistent spelling of URIs"
18:23:57 [IanYVR]
IJ: What about lexical?
18:24:10 [IanYVR]
DC: Same string of characters.
18:24:25 [IanYVR]
DC: The agent should use the same string of characters as originally provided.
18:24:37 [IanYVR]
2.2. URI Schemes
18:24:59 [DanC_jam]
ACTION IJ: re-word "spelling" box
18:25:18 [IanYVR]
TBray: Why did "scheme" get changed to "scheme name"?
18:25:41 [IanYVR]
RF: If you're talking about the string before the colon, its the scheme name.
18:26:04 [IanYVR]
TBray: The *scheme* corresponds to a specification.
18:26:11 [IanYVR]
TBL: "There are other *schemes*..."
18:26:33 [IanYVR]
Action IJ: Prune instances of "scheme name" except for string component before ":".
18:27:03 [IanYVR]
RF: I use "scheme component" instead of "scheme name" for slot before ":"
18:27:04 [DanC_jam]
perhaps change "Each URI begins with a URI scheme name" to "Each URI follows a URI scheme" or ... hmm...
18:27:08 [TimBL-YVR]
q+
18:27:32 [IanYVR]
CL: s/to classify/to refer to/
18:27:39 [TimBL-YVR]
<defn>URI Scheme</defn> to be higher up in the para.
18:28:16 [IanYVR]
RF: Scheme names are always used in lowercase: "http URI"
18:28:27 [IanYVR]
TBL: I disagree; we're talking about the protocol HTTP
18:30:33 [IanYVR]
Resolved: Change "HTTP URI" to "http URI"
18:31:01 [IanYVR]
SW: s/identify identify/identify
18:31:21 [IanYVR]
SW: The scheme definitions use the verb "designate".
18:31:43 [IanYVR]
SW: If we use a different term than the spec we are referring to, that's problematic.
18:31:53 [IanYVR]
DC: I think we have a good reason to use a different term.
18:33:21 [IanYVR]
Resolved: Add footnote that the other specs use the term "designate". We take "identify" and "designate" to mean the same thing.
18:33:47 [IanYVR]
[Lunch]
18:36:33 [DanC-AIM]
byebye
18:36:54 [Norm]
Norm has joined #tagmem
18:46:55 [Norm]
Norm has joined #tagmem
18:58:04 [Ralph]
Ralph has joined #tagmem
18:58:35 [Norm]
Norm has joined #tagmem
18:59:33 [Zakim]
rebooting for service pack installation
19:26:24 [ndw]
ndw has joined #tagmem
19:26:34 [DanCon]
DanCon has joined #tagmem
19:26:54 [skw]
skw has joined #tagmem
19:30:04 [Stuart]
Stuart has joined #tagmem
19:43:33 [Ian]
Ian has joined #tagmem
19:56:13 [ndw]
ndw has joined #tagmem
19:59:38 [Ian]
[Resume]
20:00:15 [Ian]
TBL: There was discussion at lunch about including more best practices.
20:01:31 [DanCon]
TBL: how about "don't use the same URI for something that's an information resource and something that's not"
20:01:37 [DanCon]
TBL: e.g. dublin core title
20:02:05 [DanCon]
(Roy also sent a problem report w.r.t. XML encryption algorithm identifiers, suggesting they should *not* contain #s. have you seen that, timbl?)
20:02:10 [Ian]
[Continuing on 2.2]
20:03:14 [Ian]
RF: "Several URI schemes incorporate identification mechanisms that pre-date the Web into this syntax:"
20:03:47 [Ian]
RF: The examples are URIs; the identification mechanism is not sufficiently targeted in that sentence to distinguish talking about the URI or the information system.
20:04:04 [Ian]
RF: I would make one big list instead of two lists.
20:04:44 [Ian]
RF: change to "incorporate information systems that predate the Web into the URI syntax..."
20:04:45 [Ian]
[Yes]
20:05:17 [Ian]
TB: "We note in passing..."
20:05:23 [Ian]
TB: Get rid of "Note in passing"
20:05:45 [Ian]
SW: IRIs are indeed proving expensive.
20:06:36 [Ian]
DC: I think the sentence is insufficiently motivated, but I can't think of anything better.
20:07:53 [Ian]
TB: I propose to delete "We note in passing that even more expensive than introducing a new URI scheme is introducing a new identification mechanism for the Web; this is considered prohibitively expensive."
20:08:47 [Ian]
[Discussion about whether IRIs are a new identification mechanism.]
20:09:38 [Ian]
TBL: s/We note in passing/Of course,/
20:09:55 [DaveO]
DaveO has joined #tagmem
20:10:06 [Ian]
TB: If we are going to make a manifesto, put it higher in the document.
20:10:45 [Ian]
Resolved: Delete "We note in passing that even more expensive than introducing a new URI scheme is introducing a new identification mechanism for the Web; this is considered prohibitively expensive." since network effect covered above.
20:11:11 [Ian]
IJ: I'll delete "When finding available based on Tim Bray's discussion of this topic, link from here."
20:12:10 [Ian]
RF: On "If the motivation behind registering ..."
20:12:52 [Ian]
RF: There hasn't been any demonstration that there's a higher cost to registering a URI scheme than to registering a content type.
20:14:16 [Ian]
RF: Registration process same for URI schemes and MIME types in IETF space.
20:15:20 [Ian]
CL: It's worth saying not to register new schemes that alias an existing scheme.
20:16:02 [Chris]
Chris has joined #tagmem
20:16:05 [Chris]
q+
20:16:13 [Ian]
IJ: I would move the first bullet to section on opacity.
20:16:19 [Chris]
q?
20:16:25 [Norm]
Norm has joined #tagmem
20:16:35 [Ian]
DC: There is a choice to be made about when to register a new mime type and when to register a URI scheme.
20:17:12 [Zakim]
Zakim has joined #tagmem
20:17:14 [Chris]
q+
20:18:06 [Ian]
DC: Proposed deleting from "If the motivation " through Editor's note.
20:19:02 [Chris]
ack chris
20:19:02 [Ian]
IJ: I intend to keep the first bullet but move it.
20:19:12 [Ian]
TBL: I'd like to keep the list and add:
20:19:29 [Ian]
1) Don't invent a new protocol when one exists that gets the job done. You'd have to replicate the caching structure and the social behavior.
20:20:35 [Ian]
2) Cost of reinventing something is that you often make the same mistakes.
20:21:00 [TBray]
TBray has joined #tagmem
20:21:06 [Ian]
RF: I agree with these points, but they belong in the section on protocols.
20:21:08 [Ian]
TBray: I agree.
20:21:26 [DanCon]
(I'm scanning the issues list... tim's comments about re-inventing HTTP are issue HTTPSubstrate-16; what's the status of that issue?)
20:21:51 [Ian]
TBL: Don't just remove text, leave a cross ref if you move it.
20:22:01 [DanCon]
(is issue 16 on our list of issues we intend to resolve for this last call?)
20:22:28 [Ian]
RF: Once you have a new protocol, you may want to say "you SHOULD have a new URI scheme for that protocol."
20:23:26 [Ian]
q+
20:24:03 [Ian]
q-
20:24:30 [Stuart]
q?
20:24:43 [Ian]
DC: There's a time and place for new uri schemes and new media types.
20:25:24 [DanCon]
itms was a time for a new media type, not for a new URI scheme. but I'm not sure how to generalize
20:25:43 [Ian]
TBL: Don't create a new URI scheme if the properties of the identifiers and their relation to resources are covered by an existing scheme.
20:27:14 [timbl]
timbl has joined #tagmem
20:27:23 [Ian]
DC: I can tell when it's done wrong, but not sure I can write down the right thing.
20:27:31 [Ian]
PC: Even writing down the wrong thing is helpful.
20:27:44 [Ian]
DC: That's IJ's finding (from TB's blog)
20:27:53 [timbl]
The properties of the space addressed (the set of things identifiable and their relationship with the identifiers) are essentially the same as any existing space, that space should be used.
20:28:11 [timbl]
s/^/If/
20:29:13 [Ian]
TB, DC: Delete first bullet "The more resource metadata is included in a URI, the more fragile the URI becomes (e.g., sensitive to changes in representation). Designers should choose URI schemes that allow them to keep their options open."
20:29:30 [Zakim]
Zakim has joined #tagmem
20:30:40 [Ian]
Resolved: Delete "Reasons for this include" through bulleted list.
20:30:50 [Ian]
2.3. URI Authority
20:30:53 [Ralph]
Ralph has left #tagmem
20:32:13 [Ian]
DC: I would have expected third paragraph in section 3.
20:32:36 [Roy]
Roy has joined #tagmem
20:32:46 [timbl]
q+
20:35:07 [timbl]
The owner of a URI defines what it identifies. The web protocols allow the owner to run and control a server which provides representation, and so when such a representation has been retrieved it is reasonable to take it as authoritative.
20:36:24 [Stuart]
q?
20:36:31 [Stuart]
ack timbl
20:36:36 [Ian]
TBL: There is a place here to say that, because the protocols allow the URI owner to control the server, since you have protocols, it's reasonable to hold the resource owner accountable for the representations.
20:36:59 [Ian]
TBray: I note move from "authority" to "responsibility"
20:37:21 [DanCon]
I could live without this section.
20:38:32 [Ian]
IJ: Point was to introduce authority in assignment of URIs. Later authority of producing representations.
20:39:54 [Ian]
Resolved: Delete 2.3, moving paragraphs 3 and 4 to section 3 of the document.
20:40:25 [Ian]
PC: Ensure that unused refs are deleted.
20:41:02 [Ian]
2.4. Fragment Identifiers
20:41:47 [Ian]
TBL: I think in the second paragraph that "reference to" and "with respect to" are insufficiently clear.
20:41:53 [Ian]
[We note that that text is from RFC2396bis]
20:42:08 [Ian]
TBray: I think that "with respect to that resource" is incorrect.
20:42:34 [Ian]
TBray: "Additional information that is interpreted in the context of that representation."
20:43:19 [Ian]
RF: It's with respect to the *Resource*, across all representations.
20:44:10 [TBray]
q+
20:44:15 [Ian]
Does "foo#author" mean that "author" has to mean that this is the author of the primary resource? One could read it that way.
20:44:36 [Ian]
DC: I agree that "named in" works better.
20:45:53 [Ian]
TBray: So we are asserting that the frag id is interpreted w.r.t. the resource.
20:46:06 [Ian]
DC: We are observing that, yes. There are bugs and weirdnesses out there, but they are wrong.
20:46:48 [Ian]
TBL: If you dereference a URI and get a representation back, and you know the media type, and you know the frag id semantics, then you know what is identified by the frag id.
20:47:12 [Ian]
TBL: That doesn't mean that the frag id doesn't have meaning if you don't dereference the URI.
20:48:05 [Ian]
RF: change "that is merely named with respect to the primary resource." to "named by the primary resource."
20:48:31 [Norm]
The fragment identifier component of a URI allows indirect
20:48:31 [Norm]
identification of a secondary resource, by reference to a primary
20:48:31 [Norm]
resource and additional identifying information that is named by
20:48:31 [Norm]
that resource. The identified secondary resource may be
20:48:31 [Norm]
some portion or subset of the primary resource, some view on
20:48:33 [Norm]
representations of the primary resource, or some other resource that
20:48:35 [Norm]
is merely named by the primary resource.
20:48:49 [Chris]
rf: delete next paragraph
20:49:08 [Chris]
'Although the generic URI syntax allows ...'
20:49:29 [Chris]
nw: see above, did we agree to this
20:49:36 [Chris]
tbl: no not really
20:49:44 [Norm]
The fragment identifier component of a URI allows indirect identification of a secondary resource by reference to a primary resource and additional identifying information that is named in that resource. The identified secondary resource may be some portion or subset of the primary resource, some view on representations of the primary resource, or some other resource that is merely named by the primary resource.
20:51:42 .
20:51:47 [timbl]
When an information resource has a URI and has a representation; and in the language of that representation a given string identifies a second resource, then the concatenation of the URI, a hash mark and the string form a URI for that second resource.
20:51:55 [DanCon]
hmm... I wonder if we came up with any good text when we worked in the wiki
20:52:03 [Chris]
it needs to tie it back to the resource fetched
20:52:39 [Chris]
lets avoid 'concatenation'
20:52:46 [Norm]
yes, please!
20:53:44 [DanCon]
why avoid 'concatenation'? that's what one does with #, no?
20:54:28 [Chris]
actually no, you split it off, stuff it in your back pocket, and then use it in isolation on what you got back
20:55:49 [DanCon]
hmm... ok, concat/split, same difference
20:56:06 [Chris]
no, pretty much opposites
20:56:51 [DanCon]
i.e. same situation, 2 different ways to describe it. if long=concat(short1, short2), then short1=split(short2 from long)
20:59:18 [Ian]
[TBL draws diagram on board showing splitting URI into frag id and URI-with-no-frag-id.]
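[Editor's sketch of the concat/split point discussed above, as runnable Node.js; the URI is invented and not from the minutes:]

```javascript
// Client side: split the fragment off before dereferencing, then
// interpret it against the representation that comes back.
const full = "http://example.org/doc#section2";
const u = new URL(full);
const frag = u.hash.slice(1); // "section2" (kept in your back pocket)
u.hash = "";                  // URI-with-no-frag-id, the part that goes on the wire

// The two directions are inverses of each other, which is DC's
// "same situation, 2 different ways to describe it" observation.
console.log(u.href);                        // "http://example.org/doc"
console.log(u.href + "#" + frag === full);  // true
```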
20:59:59 [Ian]
IJ: It occurs to me we ought to re-use the initial diagram several times, successively elaborating it. E.g., when we talk about what a representation is, show the "Representation" piece as including metadata and data.
21:00:32 [Ian]
URI-with-hash IDENTIFIES Resource2
21:00:39 [Ian]
URI-with-no-hash IDENTIFIES Resource1
21:05:09 [Ian]
NW: I want to confirm that "#foo" means the same thing in all representations and if it doesn't it's a bug.
21:05:14 [Ian]
DC: Yes, I agree.
21:07:51 [Ian]
TBL: Not exactly. It can be reasonable to give back two types of things depending on the format returned (e.g., bank statement or HTML document that's kind of equivalent).
21:08:00 [DanCon]
(this is an issue on our list too...)
21:08:09 [Ian]
TBray: But they are functionally equivalent w.r.t. the user.
21:08:56 [DaveO]
Dan, you mean frag-id issue #28?
21:08:58 [Ian]
NW: Is it architecturally unsound to serve a format with content negotiation that does not define frag id semantics (e.g., serve HTML and PDF).
21:09:11 [Ian]
?
21:09:30 [Ian]
TBL: Browsers should say "There's no frag id".
21:09:41 [Ian]
DC: This is a silent recovery from error today.
21:10:30 [Ian]
CL: Are we saying it's an error to serve foo#bar if one representation doesn't define frag id semantics?
21:10:44 [Ian]
TBray: Not a server problem, but an authoring problem.
21:11:11 [Chris]
conneg and fragments considered incompatible
21:11:16 [timbl]
When an information resource has a URI and has a representation; and in the language of that representation a given string identifies a second resource, then the concatenation of the URI, a hash mark and the string forms a URI for that second resource. The MIME type registration defines the syntax and semantics of such a string.
21:11:30 [Ian]
See 2.4.1 for this discussion...
21:12:28 [Ian]
TBray: TBL means that the format spec defines the semantics of what the frag id is used for.
21:12:42 [DaveO]
q+
21:12:56 [Ian]
DC: I think it's easier to talk about splitting a URI rather than concatenating two parts.
21:13:11 [TBray]
ack TBray
21:14:44 [Ian]
TBray: Superfluous to say "info resource" since that's the kind that has representations.
21:15:11 [Stuart]
ack Dave0
21:15:16 [Chris]
so (to clarify) items 7 .. 11 on the agenda are hereby dropped
21:15:45 [TBray]
When a resource has a representation...
21:15:57 [timbl]
When a resource has a URI and has a representation; and in the language of that representation (using a syntax and semantics defined by the MIME type specification) a given string identifies a second resource, then the concatenation of the URI, a hash mark and the string forms a URI for that second resource.
21:16:00 [DanCon]
SIGH.
21:16:03 [Ian]
DC: I thought we were going to use the term information resource that we introduced earlier.
21:17:37 [Ian]
RF: I think that the existing text in RFC2396 is superior to TBL's proposal.
21:17:53 [Ian]
DC: I agree that the second para is better.
21:18:03 [Ian]
(ie.. existing text in arch doc)
21:18:32 [Ian]
RF: I think it's important to be able to define a resource with a URI that includes a frag id without having to get back a representation.
21:18:37 [Chris]
q+ to talk about delegated authority and fragments
21:18:55 [DaveO]
q-
21:19:30 [Norm]
Norm has joined #tagmem
21:20:25 [Norm]
How is this:
21:20:35 .
21:20:44 [Norm]
The URI that identifies the secondary resource consists of the URI of the primary resource with the additional identifying information as a fragment identifier.
21:21:10 [Ian]
[Discussion of "selective with respect to that resource"]
21:22:39 [Stuart]
q+
21:22:56 [Ian]
RF: How about "that is defined by that resource" instead.
21:23:06 [Ian]
RF: The MIME type is not significant here.
21:23:42 [Ian]
DC: I think RF's current text is good, and we could also include TBL's paragraph
21:24:28 [Ian]
TBray: Can we lose the word "merely"?
21:25:44 [Ian]
DC: I am ok with Norm text, but on condition that it go into 2396bis.
21:26:04 [Stuart]
q-
21:26:05 [Ian]
TBray: I think NW's second proposal is better than TBL's:
21:26:18 [Chris]
q?
21:26:51 [DanCon]
ack chris
21:26:51 [Zakim]
Chris, you wanted to talk about delegated authority and fragments
21:27:00 [timbl]
q+ to bzzzzzzzzzzzzzzzt vague alarm
21:27:02 [Norm]
Norm has joined #tagmem
21:27:08 [Ian]
Ian has joined #tagmem
21:27:15 [Norm]
q?
21:28:31 [Ian]
CL: You don't get to fiddle around with URIs. You do, however, get to fiddle with the fragment.
21:28:32 [DanCon]
hmm... I hear a point that chris is making... but I'm not sure how to put it into an economical number of words
21:28:48 [DaveO]
Norm, I don't understand your earlier question about #foo meaning the same thing. If WSDL defines #foo to mean an abstract component *thing*, and SVG defines #foo to mean an xml element with name foo, then they don't have the same meaning.
21:29:09 [Ian]
I also think that xpointer lets you create anchors outside the original document.
21:29:43 [Ian]
So person A can create anchor in person B's representation
21:29:44 [DanCon]
so DaveO, don't make SVG and WSDL representations that use 'foo' available for the same resources
21:30:07 [Stuart]
q?
21:30:18 [DanCon]
ack timbl
21:30:18 [Zakim]
timbl, you wanted to bzzzzzzzzzzzzzzzt vague alarm
21:30:35 [Chris]
ack chris
21:30:44 [Ian]
TBL: I find NW's alternative is still vague.
21:30:55 [Ian]
TBL: If you include my paragraph after it, I will be happy.
21:31:12 [Roy]
This is what the URI spec *also* says:
21:31:21 [Chris]
dave - svg defines barename #foo to mean the xml thing because its a +xml media type
21:31:22 [Ian]
TBL: In particular, it's important to see how URIs are the same; and how to proceed with frag id.
21:31:25 [DanCon]
(stuart/ian, did we agree to include the figure from the whiteboard?)
21:31:38 [Ian]
Chris has action to do image revision.
21:31:44 [Ian]
I'd like CL to do a version of what's on board, too
21:31:46 [Chris]
dan - I believe we did and i will draw that one, too
21:31:57 [DanCon]
thx, chris
21:32:01 [Ian]
thx chris
21:32:12 [Ian]
Proposed: Include TBL para after existing para from 2396.
21:32:33 [Ian]
TBray: I prefer NW's text to that in 2396
21:32:39 [Ian]
TBray: I accept DC's caveat.
21:33:39 [Ian]
Resolved: Accept NW's second proposed text and TBL's text.
21:34:03 [Ian]
[Break]
21:34:52 [Norm]
Norm has joined #tagmem
21:38:45 [timbl]
22:00:42 [Ian]
</break>
22:00:49 [timbl]
Whereas human communication tolerates such ambiguity, machine processing does not.
22:02:01 [Ian]
[Discussion of whether Director should say ok to advance of a spec to PR (or PER) if mime type not registered.]
22:03:16 [Ian]
How to Register a Media Type with IANA (for the IETF tree)
22:03:22 [Ian]
22:03:25 [Ian]
Does this need updating?
22:05:01 [Ian]
----
22:05:32 [Chris]
minimally yes as it speaks of the ietf tree, should be standards tree
22:05:50 [Chris]
danc mentioned email, no ID required
22:06:17 [timbl]
22:08:31 [DanCon]
pointer to what we're looking at for the minutes? with $Date$?
22:09:23 [Ian]
Discussion of "Although the generic URI syntax allows any URI to end with a fragment identifier, some URI schemes do not specify the use of fragment identifiers. For instance, fragment identifier usage is not specified for MAILTO URIs."
22:09:34 [Norm]
Norm has joined #tagmem
22:09:39 [Ian]
RF: This is orthogonal to the URI scheme.
22:09:50 [Ian]
TBray: It's not the scheme, it's the data formats.
22:11:39 [Ian]
Resolved delete "Although the generic URI syntax allows any URI to end with a fragment identifier, some URI schemes do not specify the use of fragment identifiers. For instance, fragment identifier usage is not specified for MAILTO URIs.
22:11:40 [Ian]
"
22:12:59 [Ian]
TBray: Please see if you can either delete 2.4.1 heading or find a second heading for 2.4
22:13:01 [Chris]
>>>>>>>>>>
<<<<<<<<<<
22:13:39 [Ian]
NW: Change in 2.4.1 "Clients should not be expected to do something..." to "It is an error..."
22:15:36 [Ian]
TBray: "It is an error condition when you have a URI with a frag id and representations don't have consistent frag id semantics..."
22:15:52 [Ian]
RF: You need to be careful: The error is not creating the identifier.
22:16:12 [Ian]
RF: You may tolerate the error in some cases.
22:16:35 [Ian]
RF: Good practice note is wrong: "Authors SHOULD NOT use HTTP content negotiation for different media types that have incompatible fragment identifier semantics."
22:18:11 [Ian]
TBray: "In the case where you use coneg to serve multiple representations, and some of those representations have inconsistent frag id semantics, then you are creating an opportunity for this error to occur."
22:18:22 [DanCon]
yes, pls strike the good practice box and replace with words ala what Bray just said
22:19:14 [Chris]
or, clarify the good practice note ... but can live with tim brays text
22:19:18 [DanCon]
NW: yup
22:19:29 [Ian]
Proposed: Revise good practice note with spirit of what TB said.
22:19:54 [Ian]
TBL: I'm ok with TB's text.
22:20:04 [Ian]
Resolved: Revise good practice note with spirit of what TB said.
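[Editor's sketch making TB's scenario concrete; the URI and the semantics table below are invented for illustration, not taken from any spec:]

```javascript
// One URI, two negotiated representations. The fragment "sec2" must be
// interpreted by whichever media type comes back, so inconsistent
// frag-id semantics across representations create the error TB describes.
const uri = "http://example.org/report#sec2";
const frag = new URL(uri).hash.slice(1);

// Hypothetical per-media-type fragment semantics:
const fragSemantics = {
  "text/html": (f) => `element with id="${f}"`,
  "application/pdf": (f) => `named destination "${f}"`,
};

for (const type of ["text/html", "application/pdf"]) {
  console.log(type, "->", fragSemantics[type](frag));
}
```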
22:22:19 [DanCon]
misuse from 3.3: "The simplest way to achieve this is for the namespace name to be an HTTP URI which may be dereferenced to access this material."
22:22:59 [Ian]
[Discussion of "dereference" v. "retrieve"]
22:23:06 [Roy]
q+
22:23:09 [timbl]
q+ Roy
22:23:41 [DanCon]
I like 'access' and I can live with 'retrieve' and I'd like to avoid 'dereference' if we can.
22:24:20 [Ian]
DO: Please include examples in 2.5.
22:24:24 [Norm]
Norm has joined #tagmem
22:25:01 [Ian]
TBray: I like "access" as well.
22:25:42 [Chris]
dereference is used in 2.2. URI Schemes as well
22:25:53 [Chris]
Furthermore, the URI scheme specification specifies how an agent can dereference the URI.
22:25:59 [Chris]
+1 for access
22:26:01 [Ian]
TBray: I suggest deleting "Given a URI, a system may attempt to perform a variety of operations on the resource, as might be characterized by such words as "access", "update", "replace", or "find attributes". Available operations depend on the formats and protocols that make use of URIs. "
22:27:07 [timbl]
To dereference a URI is to access the resource which it identifies.
22:27:18 [Chris]
dereference is not retrieval
22:27:29 [timbl]
necessarily
22:29:09 [DanCon]
1. rename to accessing a resource
22:29:14 [Ian]
Proposed:
22:29:26 [DanCon]
2nded.
22:29:32 [TBray]
+1
22:29:36 [Chris]
+1
22:29:36 [Ian]
IJ: I don't agree.
22:30:36 [Ian]
IJ: +1
22:30:49 [Ian]
Resolved:
22:32:06 [Ian]
DC: In 2.8.4, I prefer "access' over "resolution"
22:33:20 [Ian]
TBL: I think we can delete "resolution" from the document.
22:33:31 [Ian]
TBL: Use "access" instead.
22:33:35 [Ian]
NW: Delete "finite" from 2.5
22:34:15 [Ian]
Resolved: Delete resolution from document (replace with access where necessary).
22:34:19 [DanCon]
if you're gonna strike finite, you might as well strike 'set'
22:35:06 [DanCon]
("Resolved" was a bit hasty there... stand by...)
22:35:13 [Ian]
"While accessing a resource..."
22:35:32 [Ian]
Resolved: Delete resolution from document (replace with access where necessary).
22:36:55 [Ian]
2.5.1. Retrieving a Representation
22:37:29 [Chris]
Some URI schemes (e.g., the URN scheme [RFC 2141]) do not define dereference mechanisms.
22:38:30 [Chris]
is it true (yes apparently) and does it contribute anything useful
22:39:34 [Chris]
okay, chris lets it slide
22:39:52 [Ian]
2.5.1. Retrieving a Representation
22:40:03 [Ian]
TBray: Potentially misleading - " The representations communicate the state of the resource."
22:40:23 [Ian]
TBray: Representation doesn't need to represent ENTIRE state of resource.
22:41:04 [Ian]
TBL: "Some or all of the state of the resource...."
22:41:05 [DanCon]
ed note: "As stated above" as a consequence of decisions we made recently.
22:41:20 [Ian]
Resolved: "communicate some or all of the state of the resource."
22:41:28 [Chris]
"is used within an a element " is vague
22:41:47 [Ian]
SW: Change "which representations are used" to "which content types".
22:41:50 [Ian]
DC, TB: No.
22:42:17 [Ian]
TBray: A server can throw your PUT on the floor.
22:42:22 [Chris]
suggest 'is the value of an href attribute in the xlink namespace on an a element
22:42:30 [Ian]
DO: This section is about retrieving a representation.
22:42:36 [Ian]
SW: Comment withdrawn.
22:42:41 [TBray]
note that the "As stated above" reference no longer works since we nuked that section
22:42:44 [Chris]
q+ to say just that
22:42:48 [Chris]
q?
22:42:57 [Ian]
RF: This good practice note is out of place: "Owners of important resources SHOULD make available representations that describe those resources."
22:43:20 [timbl]
Note now dead link on "authority responsible for a URI"
22:43:48 [timbl]
s/that describe those/of those/
22:43:50 [Ian]
Change to "Resource representations: Owners of important resources SHOULD make available representations of those resources."?
22:43:59 [Stuart]
q?
22:44:07 [Chris]
q+ to say that "the SVG specification suggests " is weak, too
22:44:11 [Ian]
RF: I think that moves away from original intent: I think it was that owners should provide metadata.
22:44:18 [timbl]
q+ s/that describe those/of those/
22:44:23 [Ian]
DC: No, it was about not filling the Web with 404s.
22:44:24 [timbl]
q+ to say s/that describe those/of those/
22:44:40 [Ian]
DC: Drill in this good practice Note by giving a 404 example.
22:44:46 [Ian]
DC: And show that that sucks.
22:44:48 [Chris]
ack chris
22:44:48 [Zakim]
Chris, you wanted to say just that and to say that "the SVG specification suggests " is weak, too
22:44:52 [DaveO]
q+
22:45:02 [Stuart]
ack Roy
22:45:11 [DaveO]
q-
22:45:27 [DaveO]
q+ to mention representations retrieved by other methods than GET
22:45:37 [DanCon]
logger, pointer?
22:45:53 [DanCon]
RRSAgent, pointer?
22:45:53 [RRSAgent]
See
22:46:07 [Norm]
Norm has joined #tagmem
22:46:25 [DanCon]
22:42:22 [Chris]
22:46:25 [DanCon]
suggest 'is the value of an href attribute in the xlink namespace on an a element
22:46:54 [Ian]
IJ: Nowhere does the SVG spec say "GET".
22:46:58 [Chris]
22:47:12 [Chris]
SVG provides an 'a' element, analogous to HTML's 'a' element, to indicate links (also known as hyperlinks or Web links). SVG uses XLink ([XLink]) for all link definitions.
22:48:50 [timbl]
q+
22:50:26 [Chris]
its the xlink href in the context of the a element and the other attributes on the a element that imply
22:51:06 [Norm]
Norm has joined #tagmem
22:51:46 [timbl]
ack tim
22:51:46 [Zakim]
timbl, you wanted to say s/that describe those/of those/ and to
22:52:36 [Chris]
xlink:show = 'new | replace'
22:52:36 [Chris]
Indicates whether, upon activation of the link, traversing to the ending resource should load it in a new window, frame, pane, or other relevant presentation context or load it in the same window, frame, pane, or other relevant presentation context in which the starting resource was loaded.
22:52:43 [DaveO]
ack daveo
22:52:43 [Zakim]
DaveO, you wanted to mention representations retrieved by other methods than GET
22:53:17 [Chris]
ACTION Chris tighten this language for SVG 1.2
22:53:27 [Ian]
DO: What do we say about POST - result of POST operation is a representation (or some data).
22:53:36 [Ian]
TBL: That's not a representation of any resource.
22:53:47 [Ian]
DO: Yes, it is, I can give it a content location.
22:54:17 [Ian]
TBray: Question, e.g., of, after an update, getting a mere 200 or getting updated text (i.e., representation).
22:54:21 [Chris]
"By activating these links (by clicking with the mouse, through keyboard input, and voice commands), users may visit these resources." is vague and wooly
22:54:27 [Ian]
[Discussion of HTML forms]
22:54:57 [Ian]
DC: I disagree that what is POSTed is a representation of the resource.
22:54:59 [Chris]
prefer sections 1 and 2 bring in the other context (element name, attributes)
22:55:08 [Ian]
(yes)
22:55:21 [Chris]
i.e. it is not just the occurrence of a bare URI on some random element that makes it be a hyperlink
22:55:41 [Ian]
[Agreement on "form data"]
22:55:51 [Norm]
Norm has joined #tagmem
22:55:53 [Norm]
q?
22:55:53 [Ian]
DO: I send form data to the server. Is what I get back a representation?
22:56:00 [Chris]
ACTION Ian, Chris discuss and propose improved wording
22:56:21 [Ian]
DC: No, it's not a representation.
22:56:35 [Ian]
RF: It is a representation.
22:56:45 [Ian]
RF: It's a representation of the response that you get back.
22:56:55 [Ian]
DC: It's not a representation of any thing the common specs give a name to.
22:57:21 [TBray]
q+
22:58:20 [TBray]
q-
22:58:21 [DaveO]
I want to make sure that we say that POST results are NOT retrieval operations.
22:58:29 [Chris]
q+ to say I believe we already agreed to this - that access is not the same as retrieval
22:58:38 [Chris]
post result is not a retrieval action
22:58:43 [Ian]
RF: An HTTP POST is not a retrieval action.
22:59:12 [Stuart]
ack Chris
22:59:12 [Zakim]
Chris, you wanted to say I believe we already agreed to this - that access is not the same as retrieval
22:59:28 [Ian]
q+ to ask what changes are being suggested in 2.5.1
22:59:46 [timbl]
An HTTP POST is not a retrieval action. Any resulting response is NOT a representation of the URI posted to.
23:00:58 [Stuart]
if the result includes a Location header, is the result a representation of the resource referenced by the location header?
23:01:01 [DanCon]
yes, let's use POST as an example to distinguish access from retrieval
23:03:12 [Ian]
Action IJ: Include POST (and other methods) as examples of deref methods at beginning of 2.5
23:03:35 [DaveO]
stuart, I don't think a POST that "happens" to contain a Location is a "retrieval". It's a deref, that could be followed by a retrieval on the Location URI.
23:03:37 [Chris]
in other words, make a positive statement that non-retrieval access is both possible and good if appropriate
23:03:48 [Chris]
some non-HTTP examples would be good, too
23:03:49 [Ian]
Delete editor's note in 2.5.1 since "on the web" handled earlier per today's discussion.
23:03:53 [Ian]
2.5.2. Safe Interaction
23:04:07 [Ian]
[Minor editorial only]
23:04:11 [Ian]
2.6. URI Persistence
23:05:07 [Ian]
Resolved: Delete "draft" before "TAG findings" globally.
23:05:59 [Ian]
DC: "Similarly, one should not use the same URI to refer to a person and to that person's mailbox."
23:06:16 [Ian]
DC: If you ask Mark Baker are you your mailbox, he'd say yes.
23:07:09 [Ian]
[Text provided by TBL]
23:07:47 [Ian]
TBL: s/URI persistence also/It is an error for a URI to identify two different things.
23:07:48 [Ian]
DC: No.
23:08:15 [Ian]
TBray: What about retitling section "Maximizing the usefulness of URIs"
23:08:18 [Chris]
2.6. URI Persistence > new title
23:08:30 [Chris]
subsections: persistence, ambiguity, reliability
23:08:37 [Ian]
"URI Persistence and Ambiguity"
23:09:02 [Ian]
RF: Are all uses of URIs for the sake of identification?
23:09:03 [Ian]
TBL: Yes.
23:09:15 [Ian]
RF: Identification of what?
23:09:34 [Ian]
RF: What about using a URI in a sentence as an indirect identifier: "I wonder whose home page is
"
23:09:52 [Ian]
TBL: The URI refers to the home page...
23:11:11 [Ian]
TBL: I have a problem with sentence starting "For instance...."
23:11:43 [Stuart]
see
23:12:02 [Ian]
DC: "Whoever publishes the URI should be clear about whether it identifies the book, the whale, etc."
23:12:24 [Ian]
TBL: I don't like that in this case.
23:12:31 [Ian]
TBL: I don't want them to say it's a whale.
23:14:31 [DanCon]
DC: whale? yeah... take out the whale.
23:14:36 [TBray]
q+
23:14:40 [Ian]
q-
23:15:05 [Ian]
TBray: In TBL's proposed paragraph, I disagree with a lot and don't understand some points.
23:15:15 [Chris]
proposed addition makes invalid assertions. Some machines tolerate ambiguity very well
23:15:26 [Ian]
TBray: I don't agree with a straight assertion that machines don't tolerate ambiguity.
23:15:49 [Norm]
q+
23:16:02 [Chris]
q?
23:16:49 [Stuart]
ack TBray
23:16:55 [Stuart]
ack Norm
23:17:14 [Ian]
NW: People will make conflicting assertions. The system will have to deal with this.
23:17:24 [Ian]
NW: I'm satisfied with existing text.
23:18:40 [Ian]
TBL: I think our concern is not the ambiguity of "Moby Dick" it's the inconsistent uses of the URI.
23:18:47 [DanCon]
I'm not too happy with the moby paragraph.
23:18:54 [Ian]
TBray: When you mint a URI you need to be clear about what it identifies.
23:18:59 [Ian]
TBray: Remove the quotes...
23:19:21 [Norm]
q+
23:19:46 [Ian]
DC: I'd like a positive statement about what to do.
23:19:50 [DanCon]
and strike whale
23:19:57 [Stuart]
q+ Ian
23:20:00 [timbl]
q+ to say that anything which goes in this document should be consistent with HTTP resources being information resources
23:20:05 [Norm]
I don't see any reason to strike whale
23:20:09 [DanCon]
ack tim
23:20:09 [Zakim]
timbl, you wanted to say that anything which goes in this document should be consistent with HTTP resources being information resources
23:20:43 [Ian]
TBL: I don't want to say that the URI designates whatever the URI owner wants.
23:21:02 [DanCon]
tim's correct that this para, as written, takes a position on httpRange-14.
23:21:10 [DanCon]
ack norm
23:21:33 [Ian]
NW: Could I not write an RDF assertion that says that this URI identifies the whale, then another assertion that conflicts with that?
23:21:35 [Ian]
TBL: Yes.
23:21:48 [Ian]
NW: Why is this special case so important?
23:22:10 [Ian]
TBL: Important axiom for version 2 of the arch doc - referent of an HTTP resource without a hash is an information resource.
23:23:58 [Stuart]
q+ Paul
23:24:10 [Ian]
NW: RF's point: the standards that we write for the sake of identification are orthogonal to the systems that make use of them.
23:24:13 [DanCon]
ack ian
23:24:20 [timbl]
Tim's comment was not about the doc, but about Dan's proposal
23:25:25 [Roy]
q+
23:25:36 [Roy]
ack Roy
23:26:24 [Ian]
PC: May need, for forward compatibility, to ensure that something is an error in V1 though might be more meaningful in V2 of arch doc.
23:26:28 [Norm]
And it's too late already. The cat is out of the bag.
23:26:32 [Ian]
PC: Need to include a warning at least to users.
23:26:43 [Stuart]
q?
23:26:46 [DanCon]
hmm... a health-warning about httpRange-14 is an interesting idea.
23:26:48 [Stuart]
ack Paul
23:27:14 [Ian]
PC: Do we put something in arch doc v1 that is warning that we say "Don't do this; we think that there are arch reasons to use this form for explicit meaning and we haven't yet defined that."
23:27:38 [Ian]
RF: You can replace the URI with a URN...
23:27:45 [Ian]
DC: Or put a hash and frag id it.
23:27:50 [Ian]
s/it/in it/
23:28:53 [Stuart]
q?
23:28:54 [Ian]
DC: If you ask most people if something is a whale, and people can put it in their browser, they're likely to say "Nope, that's not a whale."
23:28:58 [Stuart]
ack DanCon
23:28:58 [Zakim]
DanCon, you wanted to note that timbl's axiom may be more widely accepted than norm suggested
23:29:20 [Ian]
NW: That argument suggests that if there's a hash mark at the end of a URI and it takes them to the middle of a document, then it's not a whale either.
23:29:29 [Ian]
TBL: If it's a hypertext document, it's never a whale.
23:29:52 [Ian]
NW: So I can't serve up with coneg a hypertext doc that describes an RDF vocab and the RDF vocab.
23:29:54 [Ian]
DC: Right.
23:30:01 [Ian]
DC: That seems problematic; and we discussed that earlier.
23:30:18 [DanCon]
... in the case of WSDL and HTML
23:30:28 [Ian]
q+ TB
23:30:29 [Ian]
ack TB
23:30:50 [Ian]
TBray: I would be ok with taking "whale" out of the sentence. I'm still not convinced of TBL's axiom.
23:31:13 [DaveO]
Dan's right, same thing with namespacename#WSDLFrag-ID and the collision between RDDL vs WSDL representations at the namespace URI.
23:31:24 [Ian]
RF: Just change the scheme name...
23:31:41 [DanCon]
... to foo: ... something unregistered
23:31:46 [Ian]
TBray: What about "Melville#moby"
23:33:11 [Ian]
Action TBL: Propose a replacement to "URI persistence ...person's mailbox".
23:33:37 [Ian]
---
23:35:22 [Ian]
[TAG accepts risk that continuing walkthrough puts other agenda items at risk.]
23:36:18 [Chris]
q+ to mention 2.7
23:36:21 [Ian]
2.7. Access Control
23:36:35 [Chris]
23:36:48 [Chris]
23:37:39 [Ian]
[German govt ruling in favor of permissibility of deep linking]
23:38:00 [Ian]
CL: TAG could update its finding to include a link to this.
23:38:27 [TBray]
Typo: two quotes before the word Deep ("')
23:38:28 [Ian]
Action IJ: Update Deep linking finding (new revision) with reference to this decision.
23:38:53 [Ian]
2.8. Future Directions for Identifiers
23:39:02 [DanCon]
ed: 2.8 has no text before 2.8.1
23:39:15 [Ian]
2.8.2. Determination that two URIs identify the same resource
23:39:25 [Ian]
TBray: Pat Hayes says we're wrong on this.
23:39:28 [Chris]
2.8.1 no-one had any objections to the text
23:39:28 [Ian]
DC: I disagree with him.
23:39:59 [Ian]
2.8.3. Consistency of fragment identifier semantics among different media types
23:41:01 [Chris]
2.8.3 first para on automagic fragment conversion is hazy and likely not possible in general
23:41:07 [Chris]
suggest dropping it
23:41:13 [Ian]
TBL: There was discussion in HTTP community about putting frag id in headers.
23:41:28 [Ian]
Resolved: Delete "There has been some discussion but no agreement that new access protocols should provide a means to convert fragment identifiers according to media type."
23:41:50 [Chris]
2.8.3 in its entirety hits the dust
23:41:52 [Ian]
Delete 2.8.3, distributing refs to issues elsewhere.
23:42:00 [Ian]
Action DC:
23:42:08 [Ian]
Include pointers in 2.8.5 to such systems.
23:42:09 [Chris]
2.8.5 needs more clarity and pointers
23:42:13 [Ian]
(e.g., freenet)
23:42:50 [Roy]
Roy has left #tagmem
23:43:15 [Ian]
Action IJ: Add text to 2.8 before 2.8.1 giving context (e.g., work going on in community, no guarantee that TAG will do this work)
23:43:54 [Ian]
TBray: This is a survey of the landscape; not a commitment to actions.
23:44:04 [Ian]
PC: And do this in 2, 3, 4.
23:45:18 [Ian]
SW: Meeting resumes at 8:30 tomorrow. Door open at 8am
23:45:21 [Ian]
ADJOURNED
23:45:24 [Ian]
RRSAgent, stop | http://www.w3.org/2003/07/22-tagmem-irc.html | crawl-001 | refinedweb | 12,722 | 71.44 |
For a long time I used plain Generic Handlers (ASHX files) to handle my AJAX requests, but it felt stupid and painful.
I mean, the functionality was there, but the whole process of handling the requests wasn't straightforward.
So I made a list of the things I would like to have. A first handler then looks like this:

public class MyFirstHandler : BaseHandler
{
    // I don't bother specifying the return type, it'll be serialized anyway
    public object GreetMe(string name)
    {
        return string.Format("Hello {0}!", name);
    }
}
To call this method through a URL use:

MyFirstHandler.ashx?method=GreetMe&name=Alex
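The article presumably called this endpoint from JavaScript as well; the original snippet didn't survive extraction, but a jQuery call along these lines would exercise it (the `greet` wrapper and callback are illustrative, only the handler path and query parameters come from the article):

```javascript
// Hypothetical client-side wrapper; assumes jQuery is loaded on the page.
function greet(name, done) {
  $.ajax({
    url: "MyFirstHandler.ashx",
    data: { method: "GreetMe", name: name }, // becomes ?method=GreetMe&name=...
    dataType: "json",                        // the handler serializes to JSON by default
    success: done
  });
}

// usage: greet("Alex", function (msg) { alert(msg); }); // expect "Hello Alex!"
```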
Like I said in my intention points above, I need to have some methods that return whatever I want, like HTML, XML, images, files, etc...
The default behavior of the handler is to return JSON, so per method we need to explicitly say that we want to handle things our way.
For that just use these lines anywhere within the method:
SkipContentTypeEvaluation = true;
SkipDefaultSerialization = true;
// you can specify the response content type as follows
context.Response.ContentType = "text/html";
public object GiveMeSomeHTML(string text)
{
StringBuilder sb = new StringBuilder();
sb.Append("<head><title>My Handler!</title></head>");
sb.Append("<body>");
sb.Append("
This is a HTML page returned from the Handler
");
sb.Append("
The text passed was: " + text + "
");
sb.Append("</body>");
context.Response.ContentType = "text/html";
SkipContentTypeEvaluation = true;
SkipDefaultSerialization = true;
return sb.ToString();
} implement them.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/353260/ASP-NET-Advanced-Generic-Handler-ASHX?fid=1700246&df=10000&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=Relaxed&select=4532542&fr=1 | CC-MAIN-2013-48 | refinedweb | 267 | 53.1 |
11 November 2011 05:39 [Source: ICIS news]
By Helen Yan
SINGAPORE (ASIA)--Spot butadiene rubber (BR) prices in Asia may continue to nosedive after shedding about $1,000/tonne (€740/tonne) from early October, tracking the decline in values of natural rubber (NR) and feedstock butadiene (BD), industry sources said on Friday.
BR prices fell below $3,000/tonne this week, and were assessed at $2,800-3,000/tonne CFR (cost and freight) northeast (NE) ?xml:namespace>
“There is still room for the BR prices to drop further to around $2,700/tonne if the NR and BD prices continue to fall,” an industry source said.
NR and BR are raw material substitutes in the production of tyres for automobiles, and their prices tend to move in the same direction.
On Thursday, NR prices slumped by more than $600/tonne from the start of the month, falling below $3,200/tonne, according to the Malaysian Rubber Board.
Asian prices of BD, the feedstock for BR production, meanwhile, plummeted by about $900/tonne from early October to around $1,550/tonne CFR NE Asia on 4 November, according to ICIS data.
In the domestic Chinese market, spot BR values fell by more than yuan (CNY) 9,000 over the past month to around CNY 22,000 this week, market sources said.
Weak demand given worries over the ongoing eurozone debt crisis and the floodings in
“BR prices have been falling sharply mainly because of the feedstock BD price, but demand has also remained weak as the downstream tyre makers are adopting a cautious stance and most are buying on a hand-to-mouth basis, given the global market uncertainty,” a trader said.
Asian economies, including
In
( | http://www.icis.com/Articles/2011/11/11/9507357/asia-br-may-fall-further-as-nr-bd-prices-on-downslide.html | CC-MAIN-2014-10 | refinedweb | 287 | 50.3 |
Subject: [Boost-build] [wince] Building Boost for Windows CE... some progress and problems...
From: Andy Schweitzer (a.schweitzer.grps_at_[hidden])
Date: 2009-06-13 20:38:31
I've been trying to build boost for windows CE. I've made some progress,
but it's not really working yet. I was asked off-list about it, so
I thought I'd post a report and some questions.
Summary:
Basically, I managed to build STLport libs/dlls for CE, using vc8 and
vc9, with minor mods to STLport. STLport test program runs, apparently
successfully on the CE emulator. A test program can link to STLport libs
and use them successfully.
Then, with a modified user-config.jam (based on VeecoFTC's), and the
proper bjam command line. I was able to build some of the boost
libraries. I have only gotten static link and vc8 building. My dlls fail
to link to STL. My vc9 has include problems. I haven't really
investigated either of these. Compiler errors result from parts of the
standard C++ libraries that are missing from CE (and not provided by
STLport). I have satisfied the compiler by providing stub
implementations - which of course wouldn't actually work, but do build.
I linked a test program to some of the built libraries and ran it
successfully in the emulator. I would like to run boost::test on the
emulator to see what works and what doesn't. So far I have been able to
build boost::test, but not the first individual test case I tried,
libs/system/test. I got an ARM vs THUMB link error which I have yet to
investigate.
Questions:
-- Any suggestions C++ standard functions missing from CE?
-- Has anyone tried wcelibex? It looks appropriate. All its
functions start with "wce_", and I think for it to work with well with
boost and STLport, those would have to be wrapped into calls with the
standard names and put into STL namespace.
-- Any idea why vc9 would have problems finding the right include
directories when vc8 does not? I diffed the setup batch files and they
appear to be indentical.
-- Any ideas on why machine conflicts (X86 vs ARM and ARM vs THUMB) seem
to occur in some cases but not others?
====================================================
Details (for reference - comments welcome from anyone
who wants to pore through details):
====================================================
Tools:
* vc8, vc9
I had to un-install IE8 before I could create
Smart Device projects. Based on web searches
there seems to be an on-going problem with
some solutions that didn't work for me.
* CE SDKs
STANDARDSDK_500
* Ran code from WM5 emulator
* bjam
====================================================
Code (modifications included in attached zip file):
--STLport-5.2.1--
* added batchfiles to setup and build STL
They assume an env var STL_ROOT pointing
to, STLport-5.2.1 dir.
if you put them there and run
STL_evc8_ARMV4I_WCE500.bat or
STL_evc9_ARMV4I_WCE500.bat, they
might "just work"
* build/Makefiles/nmake/evc4.mak - added define of
ARMV4I and use of THUMB
* src/details/fstream_win32io.cpp - open() failed
until I hacked in code to ignore failing
call to SetFilePointer. Not sure what is
going on here, or if hack is appropriate.
* stlport/using/cstring - hacked in stub of strerror
(boost::system needs it)
--Boost--
* used svn to download trunk a couple weeks ago
* tools/build/v2/tools/user-config.jam - adds toolsets
for vc8+CE+STLport and vc9+CE+STLport
* libs/system/test/error_code_user_test.cpp - stubs
errno and call to std::remove.
* libs/program_options/src/parsers.cpp
stubbed environ
* libs/iostreams/src/file_descriptor.cpp
hacked out some includes
stubbed _get_osfhandle
* libs/iostreams/src/mapped_file.cpp
added TEXT macro to get wider chars
hacked out try_may_file
* libs/filesystem/src/operations.cpp
hacked out some includes
stubbed in Get/SetCurrentDirectory,
GetFullPathName, GetShortPathName
* boost/test/impl/cpp_main.ipp
stubbed in getenv
* These built at least enough successfully that static link succeeded.
static-link successfully built:
system
filesystem
iostreams
program_options
test
thread
signals
date_time
I could not link to them from an emulator test-program until I used
--build-type=complete (which nevertheless reported erors).
I actually tried and successfully linked to:
system
filesystem
program_options
iostreams
* build command, that at least succeeded with static link:
bjam --with-system --with-thread --with-signals --with-date_time^
--with-filesystem --with-program_options --with-iostreams^
--with-test --build-type=complete^
toolset=msvc-8.0~wm5~stlport5.2^
stdlib=stlport-5.2~evc8~arm > bjam.txt 2>&1
* failures:
regex - fails on strxfrm
serialization
graph - fails on iswalnum
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/boost-build/2009/06/21997.php | CC-MAIN-2022-27 | refinedweb | 771 | 59.19 |
misfortune
fortune-mod clone
misfortune line lengths, and a "print fortune matching regex" mode (instead of just "print all fortunes matching regex" mode).
Usage
Most of the command-line flags from
fortune work with
misfortune as well. To just print a fortune, run:
misfortune
To index a new fortune file (or update the index on an existing one), run:
misfortune-strfile path/to/file
To use the fortune API in your Haskell programs:
import Data.Fortune import qualified Data.Text as T main = do f <- openFortuneFile "pangrams" '%' True appendFortune f (T.pack "The quick brown fox jumps over the lazy dog.") appendFortune f (T.pack "Quick blowing zephyrs vex daft Jim.") closeFortuneFile f putStrLn =<< randomFortune ["pangrams"]
This example will create or append to a file "pangrams" in the working directory, and create or update the corresponding index file "pangrams.dat". It then closes that file and requests a random fortune from all databases named "pangrams" in the search path - so it will either print one of the two just written or one found in another "pangrams" file. Every eligible fortune is equally likely.
Installation
Get the current release from Hackage:
cabal install misfortune
Or build the latest version from git:
git clone cd misfortune cabal install | https://www.stackage.org/nightly-2017-02-11/package/misfortune-0.1.1.2 | CC-MAIN-2017-09 | refinedweb | 206 | 63.39 |
A PROJECT REPORT ON
SUBMITTED BY
KSHAMA ACHAREKAR
ROLL NO: 01
M.COM. SEM- IV
(INDIRECT TAX)
PROF.NITIN KADAM
1
INDIRECT TAX
CERTIFICATE
During The Academic Year 2016-17 The Information Submitted Is True And
Original To The Best Of My Knowledge.
____________________ ___________________
_____________________ ___________________
Co-coordinator Principal
2
INDIRECT TAX
STUDENT
INDIRECT TAX under the guidance of project guide Prof. NITIN KADAM
during the academic year 2016-17. The information submitted is true to the
best of my knowledge.
Date: Signature
Place: Bhandup
3
INDIRECT TAX
ACKNOWLEDGEMENT
to all of them in helping me to carrying out this project work. Last but
4
INDIRECT TAX
Contents
Maharashtra Value Added Tax Act 2002
INTRODUCTION
VAT SET-OFF
Service Tax
Bibliography
5
INDIRECT TAX
Chapter-1
INTRODUCTION
VAT (Value Added Tax) is a multistage tax system for collection of sales tax. The
system envisages levy of tax on the sale at each stage and contemplates allowing of
set off of tax paid on purchases. Thus, tax is getting paid on the value addition in
the hands of each intermediately vendor. Through the whole chain, State collects
6
INDIRECT TAX
tax on actual consumer price. The process covers whole chain of distribution i.e.
from manufacturers till retailers. Prior to 1-4-2005, the system for levy of tax in
Maharashtra was, in general, single point tax system. As a consequence to national
consensus for introduction of VAT, the earlier Bombay Sales Tax Act, 1959 is
replaced by Maharashtra Value Added Tax Act, 2002. The Act has come into force
with effect from 01/04/2005. Thus, from 1-4-2005, sales tax is being collected
under VAT system in Maharashtra. Salient features of this Act are mentioned
hereunder:
Definitions
Section 2 gives definitions of various terms. The definitions are almost at par with
earlier law i.e. Bombay Sales Tax Act, 1959. Some of the important definitions:
Section 2 (4) Business The definition of Business includes in its scope any
service, trade, commerce, manufacture or any adventure or concern in the nature of
such service, trade, commerce or manufacture, whether carried on with or without
profit motive and whether actual profit is earned or not. Further, it also includes
any transaction which is incidental or ancillary to such trade, commerce,
manufacture, adventure, concern or service and also includes any transaction
which is incidental or ancillary to commencement or closure of such trade,
commerce, manufacture, service etc. The purchase of any goods, the price of which
is debited to business is also be deemed to be the purchase effected in the course of
business. Similarly sale of any goods, the proceeds of which are credited to the
business is also deemed to be the sale effected in the course of business. Though
service is also included in the definition of business, as per Section 2(27) only
notified services are to be included in the scope of the definition. As on today no
7
INDIRECT TAX
such services are notified and as such at present no service gets covered under the
definition of business.
Section 2(12) Goods means every kind of movable property. The definition
specifically includes live stocks, growing crop, grass and tree, plants including
produce thereof under given circumstances. However, it excludes newspapers,
money, stocks, shares, securities, lottery tickets and actionable claims.
Section 2(30) Tax free goods Tax free goods means the goods against which
rate of sales is shown to be Nil in the Schedule and Taxable goods means goods
other than tax free goods. This means all goods at present covered by schedule A
are tax free.
Section 2(8) - Dealer - Definition of Dealer includes any person who buys or
sells goods in the state for commission, remuneration or otherwise. It also includes,
among others, by an Explaination, public charitable trust, government departments,
societies, State Government, Central Government, shipping companies, airlines,
advertising agencies etc.
Section 2 (13) - Importer means a dealer who brings any goods into the State or
to whom any goods are dispatched from outside the state, which will include
import out of India also.
Section 2 (24) Sale - Sale means a sale of goods made within the State for
cash or deferred payment or other valuable consideration but does not include a
mortgage, hypothecation, charge or pledge. Ordinarily sale means transfer of
property to buyer in goods for cash or deferred payment or other valuable
consideration. A sale within the State includes a sale determined to be inside the
State in accordance with the principles formulated in Section 4 of the Central Sales
8
INDIRECT TAX
Tax Act, 1956. Following types of transactions are also included in definition of
sale.
9
INDIRECT TAX
or the Customs Act, 1962 or the Bombay Prohibition Act, 1949, shall be
deemed to be part of the sale price of such goods, whether such duties are
paid or payable by or on behalf of, the seller or the purchaser or any other
person. However, the definition excludes the cost of insurance for transit
or of installation, when such cost is separately charged. Sales tax, if any,
charged separately shall not form part of sale price. Generally, freight and
octroi will be part of sale price if the sale is door delivery contract. If the
same is ex sellers place and the above expenses are received as
reimbursement or as per the separate contract for rendering services, then
it will not form part of sale price. The issue for discussion is whether
amount of service tax charged separately in sales invoice in case of works
contract transaction will form part of Sale Price or not. The
Maharashtra Sales Tax Tribunal in case of M/s. Nikhil Comfort (S. A.
No. 3 of 2010 dated 31/03/2012 held that service tax amount forms a part
of sales price. The assessee filed an appeal before the Bombay High
Court against the judgment of Tribunal.
10
INDIRECT TAX
CHAPTER-2
Section 3 of the Act provides for turnover limits for liability to pay tax as well as
for registration. The registration number, which used to be referred to as
Registration Certification No. (R.C. No.) has been changed to TIN (Tax Payers
Identification Number) and hence the R.C. No. is now referred to as VAT TIN. This
change is effective from 1.4.2006. The limits for registration are as under:
11
INDIRECT TAX
(iii) The dealer who is liable to pay tax is required to apply for registration
under the Act within 30 days from the date on which prescribed limit of
turnover exceeds. In case of change in ownership or constitution, an
application for new registration certificate (TIN certificate) is to be made
within 30 days from the date of such change. In case of death of a dealer, an
application for new registration for transfer or succession of business can be
made within 60 days from the date of death of dealer. If application for TIN
is made within the time as mentioned above, then registration certificate will
be granted from the date of liability, otherwise from the date of application.
One TIN number will be issued for the whole state of Maharashtra, which
will cover all the places of business of the dealer in Maharashtra. The VAT
TIN may be given retrospective effect by the concerned Joint Commissioner
of Sales Tax on making separate application for Administrative Relief.
(Refer Circular No. 33T of 2007 dated 18th April, 2007 and 36T of 2009
dated 24th December, 2009 issued by Commissioner of Sales Tax).
(iv) The dealer can also apply for voluntary registration by paying
registration fees of ` 5,000/-. Registration certificate in such case will be
granted with effect from the date of application. Apart from registration fee
of ` 5000/- , a dealer is also required to deposit `. 25,000/-. With effect from
12
INDIRECT TAX
1st May, 2011, it has been provided in Section 16(2A) that this deposit is in
the nature of security deposit and cannot be adjusted against the tax liability.
However, this security deposit will be refundable. As per Rule 60A, a person
or dealer will have to make an application to registering authority for refund
of security deposit after 36 months but before 48 months from the end of the
month containing date of effect of registration certificate. In case of
cancellation of registration certificate within 36 months, an application for
refund of deposit will have to be filed within 6 months from the date of
service of cancellation order. If application is not filed within the prescribed
time limit, then deposit will be forfeited. Subject to above, if all the returns
are filed and all taxes are paid then the registration officer will refund the
deposit within 90 days from the date of application.
(v) The application for registration (VAT TIN) is to be made in Form No.101
and in Form A for C.S.T TIN.
CHAPTER-3
13
INDIRECT TAX
VAT SET-OFF
Audit Report Requirements
Further, MVAT Auditor has to mention the difference in set-off
claimed and set-off as per audit.
PENAL PROVISIONS
14
INDIRECT TAX
15
INDIRECT TAX
16
INDIRECT TAX
ISSUE
But as per Section 50 (2), dealer may adjust excess refund against any
return for any period contained in year whereas as per rule 55, dealer
may adjust only against subsequent period. Under such overriding
provisions, Section will prevail over rules. Further, a dealer has to be
interpreted the provisions which are more beneficial to him.
For URD dealer setoff allowable only for the purchase made in the
F.Y. in which he gets registered. However Administrative relief can
be made for the other years. (rule 55)
18
INDIRECT TAX
CHAPTER-3
VAT Composition Scheme
The tax system in India is unnecessarily complicated. Absolutely true. It
requires the assessee to pay monthly, there are returns to be filed, and the rates
keep changing. But there are also some convenient schemes under which you can
escape all this. The Composition Scheme is one such scheme, applicable to all
traders in India with a turnover of between Rs. 10 lakh and Rs. 50 lakh.
19
INDIRECT TAX
Instead, you pay a fixed rate of tax that is lower than what others would pay, you
are not required to file monthly forms and can instead opt for a form that covers a
quarter or even a year, depending on the business youre in. You may wonder why,
of course, the government would offer such a scheme. The simple answer is that, in
its absence, many small-time traders would not bother registering at all.
VAT is collected at every step of the trading process. Lets say wholesaler A sells to
dealer B stock worth Rs. 1 lakh. On this amount, dealer B pays VAT of say Rs.
4000. Then dealer B sells to trader C for Rs. 1.2 lakh. Trader C now pays VAT of
Rs. 5000. Under the normal system, when trader C sells the goods, for say Rs. 2
lakh, he can collect VAT of Rs. 8000, but must only pay Rs. 3000 to the
government (as Rs. 5000 has already been paid to dealer B).
This is known as Input Tax Credit. Under the composition scheme, this is not
possible; the entire amount must be paid. After an examination, the money will be
returned. For this reason, traders that are purchasing from dealers or wholesalers
opt against the composition scheme.
20
INDIRECT TAX
CHAPTER-4
Procedure for the application
of vat refund application in
Maharashtra
In this we will discuss how to fill the Form 501,After downloading the form 501,
now open the form. It is in excel format. It contain total six sheets which are as
following:
Errors sheet
Form 501 sheet
Annexure A
Annexure B
Annexure C
Annexure D
Error sheet:
After filling the form 501 click on validate sheet option on upper side of each
sheet, if it contains errors then such errors are shown in this sheet (error sheet).
21
INDIRECT TAX
In this sheet you to fill the basic details like name of dealer, MVAT Tin, CST Tin
no., Address of place of business, Details of Bank account in which refund is
sought, refund amount, period for which refund is made, Name of authorized
person with contact details, place & other details.
Annexure A:
In this annexure you to fill the details relating to all the vat purchase made during
the refund period. For e.g. if you are claiming the vat refund for period 2014-15
then you have to enter the vat purchase details relating to period 2014-15. This
sheet contains seven columns i.e. Sr. No., Tax invoice/ credit note/debit note no.,
Tax invoice/ credit note/debit note date, TIN of supplier, Net taxable amount,
Input Vat amount, Gross total.This annexure is same as J-2 sheet in VAT audit
report form 704.
Annexure B:
Annexure C:
This annexure is same as Annexure I in Vat audit report form 704. In this annexure
you to fill the details relating to all the certificates (C Form, F Form, H Form etc.)
which are not received up to the date of vat refund application made (Form 501).
It Contain eleven columns i.e. Sr no., Name of the dealer who has not issued
declaration or certificates, CST tin if any, Declaration or certificate type, Invoice
no., Invoice date, Taxable amount, Tax amount, rate of tax applicable (local rate),
22
INDIRECT TAX
Amount of tax on applying the local rate tax rate, Differential tax liability (Tax by
local rate tax by CST rate). Only difference between this annexure & the
annexure I in Vat audit report form 704 is that in that annexure you have to enter
the details relating to all the certificates not received up to the date of Vat audit
report Form 704 & in this annexure you have to enter the details relating to all the
certificates not received up to the date of VAT refund application form 501 made.
Annexure D:
This annexure is same as Annexure G & H in Vat audit report form 704. In this
annexure you to fill the details relating to all the certificates (C Form, F Form, H
Form etc.) which are received up to the date of vat refund application made (Form
501). It Contain Nine columns i.e. Sr no., Name of the dealer who has issued
declaration or certificates, CST tin, Declaration or certificate type, Declaration or
certificate no., Invoice No., Invoice date, Taxable amount, Tax amount.Only
difference between this annexure & the annexure G & H in Vat audit report form
704 is that in that annexure you have to enter the details relating to all the
certificates received up to the date of Vat audit report Form 704 & in this annexure
you have to enter the details relating to all the certificates received up to the date of
VAT refund application form 501 made.
23
INDIRECT TAX
(3) The registering authority shall, within ninety days from the receipt
of the said application, refund the amount of security deposit if the
dealer has,-
24
INDIRECT TAX
(a) filed all the returns due up to the date of application for refund of
the security deposit, or up to the date of cancellation of
registration certificate, and
(b) paid the tax due as per the said returns, and
(c) made the application for refund within the period prescribed
under sub-rule (2) above.]
25
INDIRECT TAX
Explanation. For the purposes of this section, where the refund of tax,
1 whether in full or in part, includes any amount of refund on any
payment of tax made after the date prescribed for making the last
26
INDIRECT TAX
27
INDIRECT TAX
Subject to the other provisions of this Act and the rules made there under,
the Commissioner shall, by order refund to a person amount of tax, penalty,
interest, deposit and fee except when the fee is paid by way of court fee
stamp , if any, paid by such person in excess of the amount due from him.
The refund may be either by deduction of such excess from the amount of
tax, penalty, amount forfeited and interest due, if any, in respect of any other
period or in any other case, by cash payment: (In the above para for the
words the commissioner shall refund the words the Commissioner shall,
by order refund are substituted by the Maharashtra Act No. XXXII of 2006
Dt. 05.08.2006) Provided that, the Commissioner shall first apply such
excess towards the recovery of any amount due in respect of which a notice
under subsection (4) of section 32 has been issued, or, as the case may be.
any amount which is due as per any return or revised return but not paid and
shall then refund the balance, if any. (2) Where any refund is due to any
dealer according to the return or revised return furnished by him for any
period, then subject to the other provisions of 7 this Act including the
provisions regarding provisional refunds such refund may provisionally be
adjusted by him against the tax due and payable as per the returns or revised
return furnished under section 20 for any subsequent period: the word,
subsequent shall be deleted with effect from 20th June 2006. Provided
that, the amount of tax or penalty, interest or sum forfeited or all of them due
from, and payable by, the dealer on the date of such adjustment shall first be
deducted from such refund before making the adjustment. (For the above
subsection (2) the following subsection (2) is substituted by the Maharashtra
Act No. XXXII of 2006 Dt. 05.08.2006
28
INDIRECT TAX
If a registered dealer has filed any returns, fresh returns or revised returns in
respect of any period contained in any year and any amount is refundable to
the said dealer according to the return, fresh return or revised return, then
subject to rules, the dealer may adjust such refund against the amount due as
per any return, fresh return or revised return for any subsequent period
contained in the said year, filed under this Act or the Central Sales Tax Act,
1956 or the Maharashtra Tax on the Entry of Goods into Local Areas Act,
2002.
CHAPTER-5
29
INDIRECT TAX
VALUATION OF TAXABLE
SERVICES
Service tax is levied on various taxable services on the basis of value charged by
the service provider. Its valuation is governed by section 67 of the Finance Act,
1994 read with the rules under Service Tax (Determination of Value) Rules, 2006.
A few illustrative cases of short levy of service tax of Rs. 8.16 crore are mentioned
in the following paragraphs. These observations were communicated to the
Ministry through seven draft audit paragraphs. The Ministry/department has
accepted (till January 2010) the audit observations in six draft audit paragraphs
with total revenue implication of Rs. 7.38 crore, of which Rs. 1.66 crore has been
recovered.
30
INDIRECT TAX
Section 67 of the Finance Act, 1994, envisages that the value of taxable
service in relation to commissioning or installation services, is the amount
charged by the service provider for rendering such services. Further,
notifications dated 21 August 2003 and 1 March 2006 provide that, in the
cases of contracts involving provision of services along with supply of
materials, the service provider may pay the service tax on 33 per cent of the
gross contract amount. M/s CMC Ltd., in Kolkata service tax commission
rate, engaged in providing software and hardware solutions, executed jobs of
installation and commissioning along with supply of equipment under
composite price contracts without having any price breakup for supply,
installation and commissioning works. The assessee paid service tax at two
per cent instead of 33 per cent of the gross contract value which was
applicable for installation and commissioning work. Incorrect adoption of
value resulted in short payment of service tax of Rs. 5.62 corer during the
period from 2004-05 to 2007-08. On this being pointed out (May 2008), the
department accepted the audit observation and stated (May 2009) that a draft
show cause cum demand notice was being issued. The reply of the Ministry
has not been received (January 2010).
31
INDIRECT TAX
Section 67(2) of the Finance Act, 1994, read with rule 3(a) of the
Service Tax (Determination of Value) Rules, 2006, effective from 19 April
2004, stipulates that where provision of service is for a consideration not
wholly or partly consisting of money, the value of such taxable service shall
be equivalent to the gross amount charged by the service provider to provide
similar service to any other person in the ordinary course of trade and the
gross amount charged is the sole consideration. M/s Kandla Port Trust
(KPT), Vadinar, in Rajkot commission rate, provided port services to M/s
Essar Oil Ltd. (EOL), in connection withinstallation/creation of various new
ports, related facilities in the KPT water limits and also in land/road area of
KPT at Off-shore Oil Terminal (OOT), Vadinar. M/s EOL paid Rs. 6.68
crore between February 2007 and March 2008 to KPT as wharfage and
berthing charges at 51.43 per cent of the scale of rates (SORs) of KPT for
the products brought by EOL at the Vadinar Terminal. The assessee also paid
service tax of Rs. 82.38 lakh on this amount. Audit observed that payment of
service tax of 51.43 per cent of scale of rates was not correct because the
assessee and EOL had entered into an agreement by virtue of which KPT
had extended its facilities to be used and developed by the EOL and the
developed assets were to be repatriated to KPT free of cost on a future date.
In consideration thereof, the charges leviable were reduced to 51.43 per cent
of the actual scale of rates of KPT. In such cases, the service tax of Rs. 1.60
crore should have been paid on the full service charge of Rs. 12.98 crore
which was chargeable by KPT in normal circumstances from any other
assessee for providing similar services under rule 3 (a) of the aforesaid
Rules. This resulted in short payment of service tax of Rs. 77.87 lakh.
The matter was pointed out to the department/Ministry in August
2008/October 2009; its replies have not been received (January 2010).
32
INDIRECT TAX
M/s Saint Gobain Glass India Ltd. and M/s Mainetti (India) Pvt. Ltd.,
in Chennai service tax commission rate, engaged in the manufacture of
excisable goods availed of technical know-how from foreignn service
providers. The assessees paid royalty of Rs. 9.29 crore and Rs. 8.95 crore for
33
INDIRECT TAX
the period from January 2005 to December 2005 and from May 2005 to
December 2005 after deducting the TDS amount of Rs. 1.03 crore and Rs.
1.34 crore respectively. Service tax was paid on the value after excluding
TDS. The exclusion of TDS resulted in short payment of service tax of Rs.
28.82 lakh which was recoverable with interest.
On the above being pointed out (between March and December 2008),
the department admitted the audit observations (between July 2008 and
February 2009) and reported recovery of tax of Rs. 12.64 lakh and interest of
Rs. 5.21 lakh in June and July 2008 from M/s Saint Gobain Glass India Ltd.
and issued show cause notice to the other assessee.
The Ministry accepted (November 2009) the audit observation in the
case of M/s Mainetti (India ) Pvt. Ltd. and intimated that a show cause
notice for Rs. 16.50 lakh had been issued in September 2008.
BIBILOGRAPHY
34
INDIRECT TAX
35 | https://tr.scribd.com/document/340965778/Tax-Project | CC-MAIN-2019-30 | refinedweb | 3,975 | 59.33 |
IRC log of tagmem on 2005-11-08
Timestamps are in UTC.
17:59:15 [RRSAgent]
RRSAgent has joined #tagmem
17:59:15 [RRSAgent]
logging to
17:59:23 [Zakim]
TAG_Weekly()12:30PM has now started
17:59:30 [Zakim]
+??P0
17:59:52 [Zakim]
-??P0
17:59:53 [Zakim]
TAG_Weekly()12:30PM has ended
17:59:55 [Zakim]
Attendees were
18:00:38 [Zakim]
TAG_Weekly()12:30PM has now started
18:00:40 [Zakim]
+DanC
18:00:48 [Zakim]
+??P0
18:01:09 [DanC]
Zakim, ??P0 is Ed
18:01:09 [Zakim]
+Ed; got it
18:01:12 [Norm]
zakim, what's the passcode?
18:01:12 [Zakim]
the conference code is 0824 (tel:+1.617.761.6200), Norm
18:02:15 [DanC]
Topic: Administrative
18:02:23 [DanC]
Regrets: Tim, Henry, Vincent
18:02:30 [DanC]
Chair: Ed
18:02:32 [DanC]
Scribe: DanC
18:02:59 [Zakim]
+Norm
18:03:49 [Norm]
zakim, who's here?
18:03:49 [Zakim]
On the phone I see DanC, Ed, Norm
18:03:50 [Zakim]
On IRC I see RRSAgent, Ed, Zakim, noah_lunch, Norm, DanC
18:04:06 [DanC]
Regrets + DavidO
18:04:08 [Zakim]
+[IBMCambridge]
18:04:15 [Norm]
zakim, [IBM is oah
18:04:15 [Zakim]
+oah; got it
18:04:20 [DanC]
Zakim, IBMCambridge holds Noah
18:04:20 [Zakim]
sorry, DanC, I do not recognize a party named 'IBMCambridge'
18:04:22 [Norm]
zakim, oah is noah
18:04:22 [Zakim]
+noah; got it
18:06:35 [DanC]
Next teleconference: propose 15 November
18:06:44 [Norm]
Norm: regrets for 15 November
18:07:04 [noah]
I will chair next week.
18:07:10 [DanC]
regrets 15 Nov: Norm, Vincent; Ed; at risk: Tim, Dan
18:08:37 [DanC]
RESOLVED: to cancel 15 Nov and meet next 22 Nov, VQ to chair
18:09:02 [noah]
zakim, who is here?
18:09:02 [Zakim]
On the phone I see DanC, Ed, Norm, noah
18:09:03 [Zakim]
On IRC I see RRSAgent, Ed, Zakim, noah, Norm, DanC
18:09:07 [DanC]
agenda comments?
18:09:14 [DanC]
DanC: some auth stuff... and norm's ns8 stuff
18:09:40 [DanC]
agenda + Administrative
18:09:54 [DanC]
NM: without henry, endpointRefs might be tricky
18:10:09 [DanC]
agenda + # Preparing agenda for Dec. f2f at MIT
18:10:15 [DanC]
agenda + Issue endPointRefs-47
18:10:23 [DanC]
agenda + authentication
18:10:28 [DanC]
agenda + namespaceDocument-8
18:10:57 [DanC]
->
minutes 1 Nov
18:11:11 [DanC]
RESOLVED to accept
18:11:22 [DanC]
Zakim, next item
18:11:22 [Zakim]
agendum 1. "Administrative" taken up [from DanC]
18:11:27 [DanC]
Zakim, take up item 1
18:11:27 [Zakim]
agendum 1. "Administrative" taken up [from DanC]
18:11:28 [DanC]
Zakim, take up item 2
18:11:28 [Zakim]
agendum 2. "# Preparing agenda for Dec. f2f at MIT" taken up [from DanC]
18:11:35 [DanC]
Zakim, close item 1
18:11:37 [Zakim]
agendum 1, Administrative, closed
18:11:37 [Zakim]
I see 4 items remaining on the agenda; the next one is
18:11:38 [Zakim]
2. # Preparing agenda for Dec. f2f at MIT [from DanC]
18:11:58 [DanC]
CONFIRMED: 12-14 June
18:12:10 [DanC]
CONFIRMED: 12-14 June 2006 in Western MA
18:13:00 [DanC]
NM summarizes msg of Tue, 8 Nov 2005 12:27:32 -0500
18:13:02 .
18:13:20 [Ed]
(from Noah's email)
18:13:29 ).
18:14:42 [DanC]
q+ to note progress on versioning
18:15:24 [DanC]
ack danc
18:15:24 [Zakim]
DanC, you wanted to note progress on versioning
18:16:10 [DanC]
agenda + versioning
18:16:11 .
18:16:29 [DanC]
agenda 6 = versioning, esp.
18:17:21 [DanC]
DanC: Self-describing documents .. umm... yeah... worth some time... not sure how.
18:19:37 [DanC]
DanC: on schemeProtocols-49 ... I should get in touch with some of the proponents of http: synonyms...
18:19:44 [DanC]
NM: mms: is relevant there too
18:19:45 [Ed]
* Web Services and the Web: I think we're using endPointRefs-47 as an excuse for the TAG to come up to speed on Web Services and perhaps to discover new issues worth pursuing. Henry's worked example seems useful.
18:21:27 [DanC]
NM: perhaps "principle of least power" merits some time at the ftf too.
18:22:35 [DanC]
Ed: perhaps focus more on this bit of our charter: "when setting future directions, help establish criteria for starting new work at W3C, and help W3C coordinate its work with that of other organizations."
18:23:14 [DanC]
DanC: how would that be different from what we're doing now?
18:23:45 [DanC]
Ed: e.g. mobile web... perhaps put together something on content/presentation before mobile web inititative publishes something
18:24:49 [Zakim]
+Norm.a
18:24:52 [Zakim]
-Norm
18:25:12 [DanC]
DanC: I can see both sides of that, but I'm concerned about whether we're doing what we say we're going to do... long list of actions with slow progress... which things to preempt?
18:25:49 [Norm]
zakim, Norm.a is Norm
18:25:49 [Zakim]
+Norm; got it
18:25:54 [DanC]
NM: it occurs to me that there will [may] be some turn-over... we could think about what we're going to hand to the incoming folks
18:26:19 [DanC]
NM: "issue list and assignments"?
18:26:35 [DanC]
Ed: anything else? well, that's it for now; we can add more later
18:26:52 [DanC]
Ed: priorities?
18:27:12 [DanC]
DanC: prioritize stuff based on work that gets done before the meeting
18:28:29 [DanC]
Ed: hmm...
18:28:56 [DanC]
NM: agenda seems fluid enough to let incoming writing influence it
18:29:07 [DanC]
Zakim, close item 2
18:29:07 [Zakim]
agendum 2, # Preparing agenda for Dec. f2f at MIT, closed
18:29:08 [Zakim]
I see 4 items remaining on the agenda; the next one is
18:29:09 [Zakim]
3. Issue endPointRefs-47 [from DanC]
18:29:13 [Zakim]
+Roy_Fielding
18:29:36 [DanC]
postponed pending DO, HT's availability
18:29:58 [DanC]
Zakim, close item 3
18:29:58 [Zakim]
agendum 3, Issue endPointRefs-47, closed
18:29:59 [Zakim]
I see 3 items remaining on the agenda; the next one is
18:30:00 [Zakim]
4. authentication [from DanC]
18:30:08 [Norm]
Norm has joined #tagmem
18:30:11 [DanC]
agenda?
18:30:13 [Roy]
Roy has joined #tagmem
18:30:54 [DanC]
Zakim, take up item 5
18:30:54 [Zakim]
agendum 5. "namespaceDocument-8" taken up [from DanC]
18:31:00 [Norm]
18:31:17 [DanC]
->
Associating Resources with Namespaces 7 November 2005
18:32:44 [DanC]
NDW: pls excuse encoding noise... working on .htaccess
18:33:10 [DanC]
DNW: am I on the right track?
18:33:18 [DanC]
s/DNW/NDW/
18:33:32 [DanC]
DanC: it's not so much "instead of" RDDL... we're also endorsing RDDL as is, right?
18:33:37 [DanC]
NDW: right.
18:33:45 [noah]
Noah thinks that Norm's "We hope to: 1)... 2)... 3)..." is balanced on the status of RDDL.
18:34:50 [DanC]
DanC: I'm not in a good position to review... I'll just see what I want to see
18:34:56 [DanC]
Ed: I'm OK to review it.
18:35:15 [DanC]
ACTION NDW: fill in section 5 of
18:35:39 [DanC]
ACTION Ed: review
after NDW updates w.r.t section 5, perhaps by 22 Nov
18:36:07 [Norm]
Funny characters fixed
18:37:11 [DanC]
DanC: status... hmm... it's in the shape of a finding... we've been asked for a REC... a finding is OK by me, at least for now
18:37:20 [DanC]
NDW: let's shop it around as a finding for now
18:37:37 [DanC]
Zakim, close this item
18:37:37 [Zakim]
agendum 5 closed
18:37:38 [Zakim]
I see 2 items remaining on the agenda; the next one is
18:37:39 [Zakim]
4. authentication [from DanC]
18:37:53 [DanC]
->
Web Auth: state of the art
18:39:10 [noah]
If you mean Microsoft Infocard, Noah is surprised that it is purely WS-Trust based. I thought it was a general repository and associated "non scary to ordinary mortals" UI for all credentials.
18:40:38 [DanC]
[missed lots by danc]
18:40:46 [DanC]
NDW: that was SAML
18:42:03 [DanC]
NM: I got the impression InfoCard was more federated than just ws-trust
18:42:45 [DanC]
... sorta like web browsers mostly use http, though they can use other uri schemes
18:42:54 [DanC]
DC: I got the impression ws-trust played the pivotal role
18:43:41 [DanC]
DC: I hear a workshop might be in progress
18:43:51 [noah]
FWIW: the Microsoft Web Page about Infocard appears to be at:
18:43:57 [DanC]
Ed: perhaps we should recommend that W3C do a workshop?
18:44:49 [DanC]
... on web authentication
18:45:18 [Roy]
*shrug*
18:45:38 [noah]
Following up on scope of Infocard, see specifically:
18:45:47 [DanC]
DanC: I'm conflicted... I'm not sure I'll be able to go. If I could go, I'd like it to happen.
18:46:16 [DanC]
ACTION DanC: to write a report on the state of the art authentication in the web [continues]
18:46:20 [DanC]
Zakim, next item
18:46:20 [Zakim]
agendum 4. "authentication" taken up [from DanC]
18:46:26 [DanC]
Zakim, take up item 5
18:46:26 [Zakim]
agendum 5. "namespaceDocument-8" taken up [from DanC]
18:46:28 [DanC]
Zakim, take up item 6
18:46:28 [Zakim]
agendum 6. "versioning, esp.
" taken up
18:50:40 [noah]
q+ to say we're not in 100% total agreement on the use of the term "language"
18:52:50 [Zakim]
+Dave_Orchard
18:55:23 [DanC]
ACTION: DanC to derive RDF/RDFS/OWL version of terminology from whiteboard /
18:56:32 [DanC]
violet
18:56:35 [Norm]
ty
18:57:20 [Norm]
FYI:
18:57:30 [Ed]
ack noah
18:57:30 [Zakim]
noah, you wanted to say we're not in 100% total agreement on the use of the term "language"
18:58:37 [DanC]
I think stringset is what we got in
18:59:37 [Ed]
noah: I think we should set down instructions on how we should read that diagram we had on the white board
19:00:33 [Norm]
brb
19:00:45 [DanC]
DC: tx for feedback... this week and next are kinda shot, but the following week might work
19:01:02 [DanC]
meanwhile... ACTION DaveO to update finding with ext/vers
19:03:04 [DanC]
DC: I guess my target is to tell this "HTML 2 sublanguage of HTML4" story
19:06:01 [DanC]
DO: ok, let's hope that folds into the finding
19:06:30 [DanC]
Zakim, close item 4
19:06:30 [Zakim]
agendum 4, authentication, closed
19:06:31 [Zakim]
I see 1 item remaining on the agenda:
19:06:32 [Zakim]
6. versioning, esp.
19:06:35 [DanC]
Zakim, close item 6
19:06:35 [Zakim]
agendum 6, versioning, esp.,
closed
19:06:37 [Zakim]
I see nothing remaining on the agenda
19:06:37 [Ed]
19:07:22 [DanC]
Topic: action review
19:07:38 [DanC]
NM: I see my CDF review action marked done.
19:08:46 [DanC]
RF: made a little progress on mime/respect item. (RF to update Authoritative Metadata finding to include resolution of putMediaType-38)
19:09:00 [Zakim]
-Dave_Orchard
19:09:08 [DanC]
ADJOURN.
19:09:16 [Zakim]
-Norm
19:09:17 [Zakim]
-noah
19:09:18 [Zakim]
-DanC
19:09:22 [Zakim]
-Ed
19:35:00 [Zakim]
disconnecting the lone participant, Roy_Fielding, in TAG_Weekly()12:30PM
19:35:05 [Zakim]
TAG_Weekly()12:30PM has ended
19:35:06 [Zakim]
Attendees were DanC, Ed, Norm, [IBMCambridge], noah, Roy_Fielding, Dave_Orchard
21:31:20 [Zakim]
Zakim has left #tagmem
23:20:06 [Norm]
Norm has joined #tagmem | http://www.w3.org/2005/11/08-tagmem-irc | CC-MAIN-2021-43 | refinedweb | 2,053 | 68.2 |
Comment on Tutorial - Abstract classes in Java By Kamini
Comment Added by : Nilesh Chavan
Comment Added at : 2010-07-17 02:24:42
Comment on Tutorial : Abstract classes in Java By Kamini
yes it is good example for abstract classes.Thanks a lot this thing was very importent for me!
By : Nilesh chavan regarding this code,
if th
View Tutorial By: Yellappa at 2010-06-11 05:49:15
2. hi can any one help me how i can install javax.co
View Tutorial By: Jewel at 2008-02-26 21:52:17
3. if(a instanceof C)
System.out.println("
View Tutorial By: John at 2012-07-13 17:34:42
4. grt tutorial
View Tutorial By: sss at 2009-08-30 03:33:13
5. import java.util.*;
public class de
View Tutorial By: Virudada at 2012-05-05 06:27:22
6. hey pls tel me, what is the advantage of method ov
View Tutorial By: divya at 2013-08-27 14:20:22
7. The last output is not :John Doe's current balance
View Tutorial By: Santosh at 2010-06-01 22:54:21
8. Excellent Example...... To understand the concept
View Tutorial By: SUNNY at 2009-02-24 23:41:04
9. what is differences between application scope and
View Tutorial By: suresh at 2013-05-13 13:03:12
10. i want full information of java topics
View Tutorial By: mahesh at 2008-12-14 23:38:50 | https://java-samples.com/showcomment.php?commentid=35126 | CC-MAIN-2022-21 | refinedweb | 245 | 65.83 |
Source: Deep Learning on Medium
Parts list
Here’s the basic list of things we’ll need to create.
- input data— what is getting encoded and decoded?
- an encoding function — there needs to be a network that takes an input and encodes it.
- a decoding function — there needs to be a network that takes the encoded input and decodes it.
- loss function — The autoencoder is good when the output of the decoded version is very close to the original input data (loss is small), and bad when the decoded version looks nothing like the original input.
The Approach
The simplest autoencoder looks something like this: x → h → r, where the function f(x) results in h, and the function g(h) results in r. We’ll be using neural networks so we don’t need to calculate the actual functions.
Logically, step 1 will be to get some data. We’ll grab MNIST from the Keras dataset library. It’s comprised of 60,000 training examples and 10,000 test examples of digits 0–9. Next, we’ll do some basic data preparation so that we can feed it into our neural network as our input set, x.
Then in step 2, we’ll build the basic neural network model that gives us hidden layer h from x.
- We’ll put together a single dense hidden layer that takes in x as input with a ReLU activation layer.
- Next, we’ll pass the output of this layer into another dense layer, and run the output through a sigmoid activation layer.
Once we have a model, we’ll be able to train it in step 3, and then in step 4, we’ll visualize the output.
Let’s put it together:
First, let’s not forget the necessary imports to help us create our neural network (keras), do standard matrix mathematics (numpy), and plot our data (matplotlib). We’ll call this step 0.
# Importing modules to create our layers and model.
from keras.layers import Input, Dense
from keras.models import Model# Importing standard utils
import numpy as np
import matplotlib.pyplot as plt
Step 1. Import our data, and do some basic data preparation. Since we’re not going to use labels here, we only care about the x values.
from keras.datasets import mnist(train_xs, _), (test_xs, _) = mnist.load_data()
Next, we’ll normalize them between 0 and 1. Since they’re greyscale images, with values between 0 and 255, we’ll represent the input as float32’s and divide by 255. This means if the value is 255, it’ll be normalized to 255.0/255.0 or 1.0, and so on and so forth.
# Note the '.' after the 255, this is correct for the type we're dealing with. It means do not interpret 255 as an integer.
train_xs = train_xs.astype('float32') / 255.
test_xs = test_xs.astype('float32') / 255.
Now think about this, we have images that are 28 x 28, with values between 0 and 1, and we want to pass them into a neural network layer as an input vector. What should we do? We could use a convolutional neural network, but in this simple case, we’ll just use a dense layer. So how do we feed it in? We’ll flatten it into a single dimensional vector of 784 x 1 values (28 x 28).
train_xs = train_xs.reshape(len(train_xs), np.prod(np.prod(train_xs.shape[1:])))test_xs = test_xs.reshape(len(test_xs), np.prod(np.prod(test_xs.shape[1:])))
Step 2. Let’s put together a basic network. We’re simply going to create an encoding network, and a decoding network. We’ll put them together into a model called the autoencoder below. We’ll also decrease the size of the encoding so we can get some of that data compression. Here we’ll use 32 to keep it simple.
# Defining the level of compression of the hidden layer. Basically, as the input is passed through the encoding layer, it will come out smaller if you want it to find salient features. If I choose 784, there would be a compression factor of 1, or nothing.
encoding_dim = 32
input_img = Input(shape=(784, ))# This is the size of the output. We want to generate 28 x 28 pictures in the end, so this is the size we're looking for.
output_dim = 784encoded = Dense(encoding_dim, activation='relu')(input_img)decoded = decoded = Dense(output_dim, activation='sigmoid')(encoded)
Now create a model that accepts input_img as inputs and outputs the decoder layer. Then compile the model, in this case with adadelta as the optimizer and binary_crossentropy as the loss.
autoencoder = Model(input_img, decoded)autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
Step 3. Our model is ready to train. You’ll be able to run this without a GPU, it doesn’t take long. We’ll call fit on the autoencoder model we created, passing in the x values for both the inputs and outputs, for 50 epochs, with a relatively large batch size (256). This will help it train somewhat quickly. We’ll enable shuffle to prevent homogeneous data in each batch and then we’ll use the test values as validation data.
autoencoder.fit(train_xs, train_xs, epochs=50, batch_size=256, shuffle=True, validation_data=(test_xs, test_xs)
That’s it. Autoencoder done. You’ll see it should have a loss of about 0.69 meaning that the reconstruction we’ve created generally represents the input fairly well. But can’t we take a look at it for ourselves?
Step 4. For this, we’ll do some inference to grab our reconstructions from our input data, and then we’ll display them with matplotlib. For this we want to use the predict method.
Here’s the thought process: take our test inputs, run them through autoencoder.predict, then show the originals and the reconstructions.
# Run your predictions and store them in a decoded_images list.
decoded_images = autoencoder.predict(test_xs)
Here’s how you get that image above:
# We'll plot 10 images.
n = 10
plt.figure(figsize=(16, 3))
for i in range(n):
# Show the originals
ax = plt.subplot(2, n, i + 1)
plt.imshow(test_xs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)# Show the reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)plt.show() | https://mc.ai/autoencoder-neural-networks-what-and-how/ | CC-MAIN-2020-05 | refinedweb | 1,065 | 67.04 |
If you have ever worked with Node js or Express you have properly realized how tedious and haunting it can be to maintain and scale your applications.
This is where Nest js js and even build a little CRUD application at the end.
So, without wasting any further time, let’s get started.
Now the question remains why you should use Nestjs in the first place. Here are some reasons why Node js developers should consider switching to Nestjs.
Nestjs is based upon Typescript which enables us developers to add types to our variables and provides compile errors and warnings based on them. Typescript also provides a lot of other great benefits to us javascript developers which you can find out more about in this crash course.
Dependency injection is a design pattern which is used to increase the efficiency and modularity of our applications. It’s often used to keep code clean and easy to read and use. Nestjs provides it out of the box and even makes it easy to use it for creating coupled components.
Nestjs projects have a predefined structure providing best practices for testability, scalability and maintainability. Nonetheless, it is still really flexible and can be changed if needed.
Nestjs provides a full Jest testing configuration out of the box but still allows us developers to use other testing tools as we see fit.
Now that we have an overview of why Nestjs is useful and where it can improve our development experience let’s take a look at the most important concepts and building blocks of this framework.
Modules are the basic building block of each Nestjs application and are used to group related features like controllers and services together. They are Typescript files decorated with the @Module() decorator.
Each application needs to have at least one module, the so called root module. The root module is the starting point of the application and is auto-generated when starting a project. In theory, we could write our whole application inside this module but it is advisable to break a large application down into multiple modules to help maintenance and readability.
It is recommended and normal practice to group each feature into their own module for example an UserModule and an ItemModule.
A simple module example:
@Module({
controllers: [ItemController],
providers: [ItemService],
})
export class ItemModule {}
In Nestjs controllers are responsible for handling incoming requests and returning responses to the client. They are defined using the @Controller() declarator which takes the path for the primary route as its argument.
Each function inside the controller can be annotated with the following declarators:
Here is an example of a simple controller with one get route:
@Controller('item')
export class ItemController {
@Get()
findAll(): string {
return 'Returns all items';
}
}
Note: After creating the controller it needs to be added to a module so Nestjs can recognize it (This happens automatically when you generate it using the Nest CLI).
Providers in Nestjs also referred to as services are used to encapsulate and abstract the logic of other classes like controllers. They can be injected into other classes using dependency injection.
A provider is a normal Typescript class with an @Injectable() declarator on the top.
For example, we can easily create a service which fetches all our items and use it in our ItemController.
@Injectable()
export class ItemService {
private readonly items: Item[] = [{ title: 'Great item', price: 10 }];
create(item: Item) {
this.items.push(item);
}
findAll(): Item[] {
return this.items;
}
}
Now that we have defined our service let’s use it in our controller:
@Controller('item')
export class ItemController {
constructor(private readonly itemService: ItemService) {}
@Get()
async findAll(): Promise<Item[]> {
return this.itemService.findAll();
}
}
Every Nest application element has its own lifecycle which is composed of a variety of lifecycle hooks that can be used to provide visibility of these key states and the ability to act when they occur.
Here are the four lifecycle sequences:
Each of these four lifecycle hooks is represented by an interface. That means that we just need to implement the interface in our component (class) and override the function.
Here is an simple example of the OnModuleInit interface:
import { Injectable, OnModuleInit } from '@nestjs/common';
@Injectable()
export class ItemService implements OnModuleInit {
onModuleInit() {
console.log(`The module has been initialized.`);
}
}
Pipes in Nestjs are used to operate on the arguments of the controller route handler. This gives them two typical use cases:
Pipes can be created by implementing the PipeTransform interface on our class and overriding the transform function. Let’s look at a simple example of a custom ValidationPipe:
import { PipeTransform, Injectable, ArgumentMetadata } from '@nestjs/common';
@Injectable()
export class CustomValidationPipe implements PipeTransform {
transform(value: any, metadata: ArgumentMetadata) {
const { metatype } = metadata;
if (!metatype) {
return value;
}
const convertedValue = plainToClass(metatype, value);
return convertedValue;
}
}
In this example we check if the metatag we provided isn’t empty and if so we converted the received data to the metatype we defined.
Nestjs provides us with a full setup of the Jest testing framework which makes it easy to get started with unit, integration and end-to-end tests.
Before you start testing I would recommend being familiar with the testing pyramid and other best practices like the KISS (Keep it simple stupid) technique.
Now let’s look at a simple unit test for the ItemService we defined above.
import { Test } from '@nestjs/testing';
import { ItemService } from './item.service';
describe('ItemService', () => {
let service: ItemService;
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [ItemService],
}).compile();
service = module.get<ItemService>(ItemService);
});
it('should be defined', () => {
expect(service).toBeDefined();
});
});
In this example we use the Test class provided by Nestjs to create and get our Service through the compile() and get() functions. After that we just write a simple test that checks if the service is defined.
Note: In order to mock a real instance, you need to override an existing provider with a custom provider.
End-to-end test helps us test the whole functionality of our API and how our small units work together. End-to-end testing makes use of the same setup we use for unit testing, but additionally takes advantages of the supertest library which allows us to simulate HTTP requests.
describe('Item Controller (e2e)', () => {
let app;
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
imports: [ItemModule],
}).compile();
app = module.createNestApplication();
await app.init();
});
it('/ (GET)', () => {
return request(app.getHttpServer())
.get('/item')
.expect(200)
.expect([{ title: 'Great item', price: 10 }]);
});
});
Here we send a HTTP request to the endpoint we created earlier and check if it returns the right response code and data.
Nestjs provides its own nice CLI (command line interface) which can be used to create projects, modules, services and more. We can install it using the node package manager(npm) and the following command.
npm i -g @nestjs/cli
After that, we should be able to create a new project using the new command.
nest new project-name
Now that we have the CLI installed let’s start building a simple CRUD application using Nestjs and MongoDB.
As stated earlier in this post, we will create a simple CRUD application using Nestjs and MongoDB as our database. This will help you to really get a good grasp of the core concepts of Nest.
First, let’s create the project using the command we talked about above.
nest new mongo-crud
After that let’s move into the generated directory and start our development server.
// Move into the directory
cd mongo-crud
// Start the development server
npm run start:dev
Npm run start:dev uses Nodemon to run the application which means that it automatically updates the page when you save the project.
Now that we have entered these commands we should see a “Hello World!” message on our.
Next up we need to create all the files needed for this project. Let’s start by generating the standard Nestjs files using the CLI.
nest generate module items
nest generate controller items
nest generate service items
After that, we just need to add some files for our database schema and access object. Here’s an image of my folder structure and files.
As you can see you just need to create the three missing folders and their files in our items directory.
Next, we will continue by setting up our MongoDB database in our Nest project. For that, you first need to have MongoDB installed on your computer. If you haven’t downloaded it yet you can do so by using this link.
After finishing the installation local installation we only need to install the needed dependencies in our project and than import them in our Modules.
npm install --save @nestjs/mongoose mongoose
Now let’s import Mongo in our application Module:
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { MongooseModule } from '@nestjs/mongoose';
import { ItemsModule } from './items/items.module';
@Module({
imports: [MongooseModule.forRoot('mongodb://localhost/nest'), ItemsModule],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
As you can see we import the MongooseModule using the forRoot() method which accepts the same parameters as mongoose.connect().
We also need to setup Mongo in our ItemsModule and can do so like this:
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { ItemsController } from './items.controller';
import { ItemsService } from './items.service';
import { ItemSchema } from './schemas/item.schema';
@Module({
imports: [MongooseModule.forFeature([{ name: 'Item', schema: ItemSchema }])],
controllers: [ItemsController],
providers: [ItemsService],
})
export class ItemsModule {}
Here we import the MongooseModule aswell but use the forFeature() method instead which defines what Model will be registered in the current scope. Thanks to that we will later be able to get access to our Model in our Service file using dependency injection.
Next up we will create the schema for our database. The schema defines how the data will be represented in the database. Let’s define it in our item.schema.ts file.
import * as mongoose from 'mongoose';
export const ItemSchema = new mongoose.Schema({
name: String,
qty: Number,
description: String,
});
As you can see we first need to import the mongoose dependency and then create a new schema using mongoose.Schema().
Next, we will create a Typescript interface which will be used for type-checking in our Service and Controller. To set up just paste the following code into your item.interface.ts file you created earlier.
export interface Item {
id?: string;
name: string;
description?: string;
qty: number;
}
The DTO (Data transfer object) is an object that defines how the data will be sent over the network. Its a basic class with the same variables as our Schema (in our case).
export class CreateItemDto {
readonly name: string;
readonly description: string;
readonly qty: number;
}
We are now done with the basic configuration of our database and can move on to writing the actuall CRUD functionality.
The service file will hold all the logic regarding the database interaction for our CRUD (Create, Read, Update, Delete) functionality.
import { Injectable } from '@nestjs/common';
import { Item } from './interfaces/item.interface';
import { Model } from 'mongoose';
import { InjectModel } from '@nestjs/mongoose';
import { CreateItemDto } from './dto/create-item.dto';
@Injectable()
export class ItemsService {
constructor(@InjectModel('Item') private readonly itemModel: Model<Item>) {}
async findAll(): Promise<Item[]> {
return await this.itemModel.find();
}
async findOne(id: string): Promise<Item> {
return await this.itemModel.findOne({ _id: id });
}
async create(item: CreateItemDto): Promise<Item> {
const newItem = new this.itemModel(item);
return await newItem.save();
}
async delete(id: string): Promise<Item> {
return await this.itemModel.findByIdAndRemove(id);
}
async update(id: string, item: Item): Promise<Item> {
return await this.itemModel.findByIdAndUpdate(id, item, { new: true });
}
}
Here we first import all the needed dependencies e.g. our item.interface, dto and so on.
Next, we need to inject our item model into our service so we can carry out our database related activities. For that, we use the @InjectModel() declarator in our constructor.
After that we finally create the functions which handle our CRUD functionality:
The controller is responsible for handling incoming request and providing the right responses to the client.
import {
Controller,
Get,
Put,
Delete,
Body,
Param,
} from '@nestjs/common';
import { CreateItemDto } from './dto/create-item.dto';
import { ItemsService } from './items.service';
import { Item } from './interfaces/item.interface';
@Controller('items')
export class ItemsController {
constructor(private readonly itemsService: ItemsService) {}
@Get()
findAll(): Promise<Item[]> {
return this.itemsService.findAll();
}
@Get(':id')
findOne(@Param('id') id): Promise<Item> {
return this.itemsService.findOne(id);
}
create(@Body() createItemDto: CreateItemDto): Promise<Item> {
return this.itemsService.create(createItemDto);
}
@Delete(':id')
delete(@Param('id') id): Promise<Item> {
return this.itemsService.delete(id);
}
@Put(':id')
update(@Body() updateItemDto: CreateItemDto, @Param('id') id): Promise<Item> {
return this.itemsService.update(id, updateItemDto);
}
}
Here we use the @Controller() decorator, which is required to define any basic controller and takes the route path prefix as an optional parameter (in our example we use /items).
After that, we inject our ItemsService in our constructor using dependency injection.
Now we just define our HTTP endpoints using the HTTP request method decorators and call the methods we defined in our service.
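Conceptually, those request method decorators just register each handler in a route table keyed by HTTP method and path pattern. A hand-rolled sketch of that dispatch (hypothetical, no Nest involved; dispatch and the route entries are my own names):

```typescript
// Hypothetical sketch: what @Get()/@Post()/etc. amount to under the hood,
// a table mapping (method, path pattern) to a handler function.
type Handler = (params: Record<string, string>) => string;

const routes: { method: string; path: string; handler: Handler }[] = [
  { method: 'GET', path: '/items', handler: () => 'findAll' },
  { method: 'GET', path: '/items/:id', handler: (p) => `findOne ${p.id}` },
  { method: 'DELETE', path: '/items/:id', handler: (p) => `delete ${p.id}` },
];

// Minimal matcher: ':' segments become named parameters (like @Param('id')).
function dispatch(method: string, url: string): string | undefined {
  for (const route of routes) {
    if (route.method !== method) continue;
    const want = route.path.split('/');
    const got = url.split('/');
    if (want.length !== got.length) continue;
    const params: Record<string, string> = {};
    let ok = true;
    for (let i = 0; i < want.length; i++) {
      if (want[i].startsWith(':')) params[want[i].slice(1)] = got[i];
      else if (want[i] !== got[i]) ok = false;
    }
    if (ok) return route.handler(params);
  }
  return undefined;
}

console.log(dispatch('GET', '/items/42')); // prints: findOne 42
```

Nest builds and consults an equivalent table for us, and @Param('id') corresponds to the params object extracted from the ':id' segment.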
Now that we are finished with our application, it's time to test the functionality. For that, we need to start our server and then test it by sending HTTP requests to the endpoints. (We can do so using programs like Postman or Insomnia.)
npm run start
After starting the server you just need to test the application by sending HTTP requests to the endpoints we created above.
If you have any problems or questions, feel free to leave them in the comments below. The whole code can also be found on my GitHub.
You made it all the way to the end! I hope this article helped you understand the basics of Nest.js and why it is so useful to us backend developers.
If you have found this useful, please consider recommending and sharing it with other fellow developers. If you have any questions or feedback, let me know in the comments down below. | https://tkssharma.com/nestjs-handbook-for-developers/ | CC-MAIN-2022-40 | refinedweb | 2,316 | 55.44 |
How to handle authorization in a Django and Vue.js web app
I am currently in the process of creating a backend for a front-end app I built. While trying to understand how the whole system is going to work, I stumbled upon this kind of question and got really curious: is token-based auth good enough?
Let's say that it is good enough, and now I have my tokens that I check for every API request. But then how do I handle it on the front end?
For example, I would be creating a dashboard where some tabs would be hidden for non-admins. Would I just manually hide them depending on the outcome of the token check? Or is there any other, better way?
Couldn't really find this kind of information online. That's why I'm asking.
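One common pattern (my assumption, not something stated in the question): decode the role claim of the JWT on the client purely to drive the UI, for example to hide admin-only tabs, while the server stays the authority and re-checks the token on every API request. A hypothetical Node/TypeScript sketch (the roles claim name is an assumption; in a browser you would use atob instead of Buffer):

```typescript
// Hypothetical sketch: read a role claim from a JWT *for UI purposes only*.
// Hiding a tab is cosmetic; the API must still reject non-admin requests.
function rolesFromJwt(jwt: string): string[] {
  const payload = jwt.split('.')[1];
  // base64url -> base64 before decoding.
  const b64 = payload.replace(/-/g, '+').replace(/_/g, '/');
  const claims = JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
  return claims.roles ?? [];
}

// A toy unsigned token whose payload is {"roles":["admin"]}:
const payload = Buffer.from(JSON.stringify({ roles: ['admin'] })).toString('base64url');
const fakeJwt = `x.${payload}.y`;
console.log(rolesFromJwt(fakeJwt)); // prints: [ 'admin' ]
```

Hiding a tab this way only improves the user experience; an admin-only endpoint must still verify the token and the role server-side.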
- How to do async tasks in Django?
Suppose I need to query multiple servers to build a response:
def view_or_viewset(request):
    d1 = request_a_server()  # something like requests.get(url, data)
    d2 = request_b_server()
    d3 = request_c_server()
    d4 = do_something_with(d3)
    return Response({"foo1": d1, "foo2": d2, "foo3": d3, "foo4": d4})
I'm doing synchronous requests one after another, and I guess there must be a better way of handling this kind of scenario.
(I would use Celery if it were a long task, but it is not; still, doing multiple synchronous requests doesn't seem right.)
What's the recommended paradigm for handling this?
- edit
I was expecting a solution using async or aiohttp and yield (?).
My question was flagged with possible duplicates, and the answers there suggest using threading. I think manually handling threading is something to avoid (from my past experience with multithreading in C++).
Then I found requests-futures, which seems promising here.
- How to fix "react-dom.development.js:287 Uncaught TypeError: Cannot read property 'setState' of undefined"
I am trying to save my React form values to the state so I can fetch an API with those credentials.
I have tried creating a function that looks like this:
handleEmail(event) {
  this.setState({email: event.target.value});
}

handlePassword(event) {
  this.setState({password: event.target.value});
}

render() {
  return (
    <div>
      <NavBar />
      <form className="pt-6 container">
        <div className="form-group">
          <label className="mt-6">Your email</label>
          <input name
        </div> {/* form-group// */}
        <div className="form-group">
          <a className="float-right" href="#">Forgot?</a>
          <label>Your password</label>
          <input className="form-control" onChange={this.handlePassword}
        </div> {/* form-group// */}
- Blog Project: new error. Can't find the post
Simple project, and I believe it's a trivial error, but I can't find it. I'm getting the following error: Reverse for 'blog.views.post_detail' with keyword arguments '{'single_post': }' not found. 1 pattern(s) tried: ['blog/(?P[0-9]+)/$']. It seems like it can't find the post. Is it a problem with my URLs or anything else?
views.py:
def post_detail(request, post_id):
    single_post = get_object_or_404(BlogPost, pk=post_id)
    single_post.views += 1
    single_post.save()
    form = BlogCommentForm(request.POST)
    if request.method == 'POST' and request.user.is_authenticated:
        if form.is_valid():
            comment = form.save(commit=False)
            comment.post = single_post
            comment.save()
            return redirect(post_detail, single_post=single_post)
    else:
        form = BlogCommentForm()
    context = {'single_post': single_post, 'form': form}
    return render(request, 'post_detail.html', context)

def add_comment(request, post_id):
    single_post = get_object_or_404(Post, pk=post_id)
    form = BlogCommentForm(request.POST)
    if request.method == 'POST' and request.user.is_authenticated:
        if form.is_valid():
            comment = form.save(commit=False)
            comment.post = single_post
            comment.save()
            return redirect(post_detail, single_post=single_post)
    else:
        form = BlogCommentForm()
    context = {'form': form}
    return render(request, 'add_comment.html', context)
models.py
class BlogComment(models.Model):
    author = models.ForeignKey(User, null=True, on_delete=models.CASCADE)
    post = models.ForeignKey(BlogPost, related_name='comments')
    comment_text = models.TextField()
    date_commented = models.DateTimeField(default=timezone.now)
post_detail.html
{% block features %}
<h1>{{ single_post.title }}</h1>
<p>Posted by: {{ single_post.author }} on {{ single_post.date_posted }}</p>
<p>{{ single_post.text }}</p>
<p>{{ single_post.views }}</p>
{% if user.is_authenticated %}
  <a href="{% url 'add_comment' single_post.id %}">Leave a comment</a>
{% else %}
  <p><a href="/accounts/register">Register</a> or <a href="/accounts/login">Log-in</a> to comment</p>
{% endif %}
{% for comment in single_post.comment.all %}
  {{ comment.comment_text }}
{% empty %}
  <p>There are no comments yet. Be the first</p>
{% endfor %}
<h3>Post a comment</h3>
<form method='POST'>
  {% csrf_token %}
  {{ form | as_bootstrap }}
  <button type="submit">Submit</button>
</form>
{% endblock %}
urls.py
from django.conf.urls import url
from .views import post_all, post_detail, new_post, add_comment

urlpatterns = [
    url(r'^$', post_all, name='post_all'),
    url(r'^(?P<post_id>[0-9]+)/$', post_detail, name='post_detail'),
    url(r'^(?P<post_id>[0-9]+)/add_comment/$', add_comment, name='add_comment'),
    url('new_post', new_post, name='new_post'),
]
forms.py
class BlogCommentForm(forms.ModelForm):
    class Meta:
        model = BlogComment
        fields = ('comment_text',)
- How can we make a relation between languages in a microservice architecture?
I have an application written in Go. It is a service-based application. Now I want to add another service written in PHP, and I want to know how I can use a microservice architecture on this platform. We have an authentication method in Go which gives a token to any user, and it is working very well. How can I access this token in the PHP application?
- .Net Core Firebase Google Email Authentication With JWT - 401
I'm trying to use Firebase email authentication on my .NET Core web backend via JWT. I couldn't find a detailed and clear example.
1- I get a successful login on my Android app, then I get the IdToken.
2- I'm sending the IdToken with the prefix "Bearer" in Postman (or the app) to my controller. But it gives 401.
Whatever I tried, I couldn't get a 200. Only 401.
My service Configuration :
services
    .AddAuthentication(o =>
    {
        o.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        o.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
        o.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer("Bearer", options =>
    {
        options.Authority = "";
        options.SaveToken = true;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "",
            ValidateAudience = true,
            ValidAudience = "myapp-c1e32",
            ValidateLifetime = true
        };
    });
My Controller :
[Authorize(AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
public ActionResult GetAllRoomsWithOtherInfos(int id)
{
    var rooms = _roomService.GetAllRoomsWithOtherInfos(id);
    return Ok(rooms);
}
My Request :
- How can I add admin authentication to all pages?
I have already implemented admin login authentication on the site. Whenever the admin visits the dashboard, he is asked to enter login credentials.
But when you visit some page from the admin panel, let's say .../admin/addblog.php, in such cases it doesn't ask for authentication.
How can I redirect to the admin login page and, once authenticated, direct back to the original page again?
- How to get internal links' URLs right with npm run build?
My app is hosted in a subfolder: my_site.com/my_app/. Although I'm using a vue.config.js file specifying the path of my app on the server (publicPath: '/my_app/'), the internal links in my app are still wrong: instead of pointing to my_site/my_app/destination they point to my_site/destination.
How to solve this problem?
- How to render the value of v-for based on the condition (v-if) provided in Vue.js
I'm trying to implement the condition provided in the code. My last attempt was the one in the code.
<ul class="details" v-
  <li v-Data not available</li>
  <li v-else>{{value}}</li>
</ul>
How can the following be implemented: v-if="{{propertyName}} == 'IndustryIdentifiers'"
- How to add button and input element into drop down list
I've created a simple select menu (dropdown) using BootstrapVue. My question is how do I insert a button and an input element inside the dropdown list. Below is an example of what I want to achieve, and my current code.
Update: I am still trying to solve this issue; any help is greatly appreciated.
Picture1: Button inserted at the bottom of the dropdown list
Picture2: When user clicks on the button, there will be an input field for them to enter a value. The value will automatically be inserted to the dropdown list
Current Code
<template>
  <b-container
    <b-row
      <b-col
        <b-form-select</b-form-select>
      </b-col>
    </b-row>
  </b-container>
</template>

<script>
export default {
  data() {
    return {
      selected: null,
      options: [
        { value: null, text: "Please select an option" },
        { value: "a", text: "This is First option" },
        { value: "b", text: "Selected Option" },
        { value: { C: "3PO" }, text: "This is an option with object value" }
      ]
    };
  }
};
</script>

<style>
</style>
- Elastic Search Filter Bucket Values
My use case is as follows: I need to find out all the unique colors that appeared in the last year but went missing in the last 3 months. So my documents look like this:
{
  doc_id: 1,
  color: "red",
  timestamp: epoch time here
},
{
  doc_id: 2,
  color: "blue",
  timestamp: epoch time here
}
So for example, if any document with attribute color (from now on referred to just as color) blue appeared in the last year, but didn't appear in the last 3 months, then we need to include blue in the result. On the other hand, if documents with color red appeared in the last year and also appeared in the last 3 months, then we need to exclude red from the result.
The 1 year in the above example also includes the 3 months when computing. So if all the documents with color blue occurred only between May 2018 - Feb 2019, this means that documents with blue occurred in the last year but went missing in the last 3 months (March 2019 - May 2019), so blue should be in the result set. On the other hand, if the documents with color red occurred between May 2018 - Feb 2019 as well as March 2019 - May 2019, then we need to exclude red from the result set. I couldn't get this with a terms query in Elasticsearch.
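One way to express this (my assumption, not something given in the question) is a terms aggregation on color restricted to the last year, with a max sub-aggregation on timestamp and a bucket_selector that keeps only colors whose last appearance predates the 3-month window. Sketched as a query body built in TypeScript (field names taken from the example documents above):

```typescript
// Hypothetical Elasticsearch query body, built as a plain object.
// The time boundaries are epoch millis computed by the caller.
const now = Date.now();
const oneYearAgo = now - 365 * 24 * 3600 * 1000;
const threeMonthsAgo = now - 90 * 24 * 3600 * 1000;

const query = {
  size: 0,
  // Only documents from the last year participate at all.
  query: { range: { timestamp: { gte: oneYearAgo, lte: now } } },
  aggs: {
    colors: {
      terms: { field: 'color' },
      aggs: {
        last_seen: { max: { field: 'timestamp' } },
        // Keep only buckets whose newest document is older than 3 months.
        stale_only: {
          bucket_selector: {
            buckets_path: { last: 'last_seen' },
            script: `params.last < ${threeMonthsAgo}L`,
          },
        },
      },
    },
  },
};

console.log(Object.keys(query.aggs.colors.aggs).length); // 2
```

The colors surviving the bucket_selector are exactly those seen in the last year but not in the last 3 months.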
- What's the correct/standard practice to authenticate a user after registration?
I'm trying to authenticate a user after registration. What's the correct or standard way to go about it?
Using this method as the way to implement it, in step 3, how can I generate the random hash to send to the user's email? I see two different options:
- crypto
- JWT token
I'm currently using JWT for login, so would it make sense to use the same token for user verification? Why or why not, and if not, what's the correct way?
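For reference, the crypto option could look like the following hypothetical Node sketch: generate an opaque random token, email it as part of a verification link, and store only a hash of it server-side with an expiry. Unlike a JWT, it carries no claims and is only a lookup key, which is one argument against reusing the login token for verification.

```typescript
import { randomBytes, createHash } from 'node:crypto';

// Hypothetical sketch of the crypto option: an opaque email-verification token.
function generateVerificationToken(): { token: string; tokenHash: string } {
  // 32 random bytes -> 64 hex chars; unguessable.
  const token = randomBytes(32).toString('hex');
  // Store only the hash server-side, so a leaked DB row can't be replayed.
  const tokenHash = createHash('sha256').update(token).digest('hex');
  return { token, tokenHash };
}

const { token, tokenHash } = generateVerificationToken();
console.log(token.length); // 64
console.log(tokenHash.length); // 64
```

On verification, hash the token from the link and look the hash up; if it matches an unexpired row, mark the user as verified and delete the row.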
- Send mail with nodejs
For a school project I have to build a website and all the stuff... I want to send an email when a certain button is pressed. For now I used a Gmail address for the server, BUT it needs authentication and all. How can I bypass the authentication? Are there some other SMTP servers that do not require authentication, so I can send an email easily?
Thanks guys!
As you already know if you’ve read our last blogpost, we have updated our OCaml cheat sheets starting with the language and stdlib ones. We know some of you have students to initiate in September and we wanted these sheets to be ready for the start of the school year! We’re working on more sheets for OCaml tools like opam or Dune and important libraries such as Obj Lwt or Core. Keep an eye on our blog or the repo on GitHub to follow all the updates.
Going through the documentation was a!
10 thoughts on “A look back on OCaml since 2011”
For a blog-post from a company called OCaml PRO this seems like a rather tone-deaf PR action.
I wanted to read this and get hyped but instead I’m disappointed and I continue to feel like a chump advocating for this language.
Why? Because this is a rather underwhelming summary of *8 years* of language activity. Perhaps you guys didn’t intend for this to hit the front of Hacker News, and maybe this stuff is really exciting to programming language PhDs, but I don’t see how the average business OCaml developer would relate to many of these changes at all. It makes OCaml (still!) seem like an out-of-touch academic language where the major complaints about the language are ignored (multicore, Windows support, programming-in-the-large, debugging) while ivory tower people fiddle with really nailing type-based selection in GADTs.
I expect INRIA not to care about the business community but aren’t you guys called OCaml PRO? I thought you *liked* money.
You clearly just intended this to be an interesting summary of changes to your cheatsheet but it’s turned into a PR release for the language and leaves normals with the continued impression that this language is a joke.
Yes, latency can be frustrating even in the OCaml realm. Thanks for your comment; it is nice to see people caring about it and trying to remedy it through contributions or comments.
Note that we only posted on discuss.ocaml.org expecting to get one or two comments. The reason for this post was that while updating the CS we were surprised to see how much the language had changed and decided to write about it.
You do raise some good points though. We did work on full Windows support back in the day. The project was discontinued because nobody was willing to buy it. We also worked on memory profiling for the debugging of memory leaks (before other alternatives existed). We did not maintain it because the project had no money input. I personally worked on compile-time detection of uncaught exceptions until the public funding of that project ran out. We also had a proposal for namespaces in the language that would have facilitated programming-in-the-large (no funding), and we worked on multicore (funding for one man for one year).
(9)
Zoran Horvat(6)
Mahesh Chand(3)
Rajesh VS(3)
Mike Gold(3)
Abhimanyu K Vatsa(3)
Destin joy(3)
Amit Kumar Agrawal(2)
Anand Kumar Rao(2)
Leon Pereira(2)
Ezhilan Muniswaran(2)
Jay Tallamraju(2)
Dhananjay Kumar (2)
Jean Paul(2)
Nipun Tomar(2)
Aravind BS(2)
Sukesh Marla(2)
Ashish Banerjee(1)
Tran Khanh Hien(1)
Trevor Misfeldt(1)
K S Ganesh(1)
Sridhar Aagamuri(1)
Suprotim Agarwal(1)
Tim Kinslow(1)
Andrew Karasev(1)
Santhosh Kumar R V(1)
Martin Kropp(1)
Leon Pereira(1)
Liam McLennan(1)
Pietros Ghebremicael(1)
Bharadwaj Sridharan(1)
Faheem Iqbal(1)
Prashant Patil(1)
Anand Kumar(1)
Matthew Cochran(1)
Bhakeeswaran Thulasingam(1)
Nirosh (1)
Rehaman SK(1)
Santhosh Veeraraman(1)
Sriram Surapureddy(1)
Kalyan Bandarupalli(1)
Purushottam Rathore(1)
Kevin Rou(1)
Sainath Sherigar(1)
Tanima (1)
John Charles Olamendy(1)
Resco (1)
Sateesh Arveti(1)
sangeeta mishra(1)
Mohammad Elsheimy(1)
Jayachandran K(1)
Dave Richter(1)
Ankur (1)
Andrew Fenster(1)
Richard Tyler(1)
theLizard (1)
Suthish Nair(1)
Mahadesh Mahalingappa(1)
Amit Choudhary(1)
Vijay Prativadi(1)
Sapna (1)
Senthilkumar (1)
Shubham Srivastava(1)
Vulpes (1)
Suchit Khanna(1)
Shubham Saxena(1)
Praneet Rane(1)
Resources
No resource found
Naming Guidelines in .NET
Apr 20, 2001.
Commenting and following a uniform naming guidelines in your code is one of good programming practices to make code more useful..
Using WebRequest and WebResponse classes
Jul 31, 2001.
Downloading and uploading data from the web has been a very common programming practice these days.
Factory Patterns in C#
Dec 10, 2001.
The FACTORY METHOD PATTERN comes under the classification of Creational Patterns. The creational patterns deals with the best way to create.
The CodeLib Program
Aug 26, 2002.
Reusability of code is one of the common practice in a programmer's daily life.
Singleton Patterns in C# Revised
Dec 16, 2002.
I am coming from the Java world, where synchronizing the Singleton creation is common practice.
Extreme Programming (XP)
Feb 18, 2004.
Extreme Programming (XP) is a discipline of software development based on values of simplicity, communication & feedback. It works by bringing the whole team together in the presence of simple practices.
Generics in C#
Mar 08, 2004.
Parametric Polymorphism is a well-established programming language feature. Generics offers this feature to C#. The best way to understand generics is to study some C# code that would benefit from generics.
Building Applications with .NET Compact Framework
Jun 09, 2004.
In this article, author explains various components of Microsoft .NET Compact Framework and how to build compact device applications using .NET Compact Framework..
Moving to ASP.NET: Part 1
Aug 24, 2004.
The attached white paper, Moving to ASP.NET, examines trends in the adoption of ASP.NET, and provides guidelines for selecting the migration approach that best meets specific business needs..
Best Approach for Designing Interoperable Web Service
Mar 15, 2005.
This article will clarify and explain in detail the different Web Service Design Methodologies as defined by the Web Services Standardization Groups, clarify the terms, highlight their differences.
Best Practices of Coding
Apr 13, 2005.
This document covers a few recommendations to leverage the quality of the code in .NET using FXCop 1.30 and how to write custom rules through an introspection engine...
Agile Development Checklist
Feb 20, 2006.
The purpose of this article is to define a set of ideal practices for an agile software development project. The idea for this article came to me after discussing CMMI-type processes and realizing that there is no agile.
Prototype Design Pattern: Easy and Powerful way to copy objects
May 05, 2006.
This article mainly focuses on the Prototype design pattern along with advantages and possible practical scenarios where this pattern seems to be the best choice.
Best Practices of Compact Framework
May 17, 2006.
This shares a few recommendations for use in our day to day development of Compact Framework applications..
Best practices for .Net Performance - I
Aug 03, 2006.
This article gives you an overview of best practices to attain .Net performance..
A guide to ObjectDataSource control
Nov 17, 2006.
One of the best new features of the forthcoming ASP.NET 2.0 will be the ObjectDataSource control. However, as I found out in my experimentation what seems like a simple control has some complex behaviours to master to get it working properly. If (like me) you have an existing data access layer you may have to make changes to be able to use ObjectDataSource.
Leveraging the "using" keyword in C#
Jan 17, 2007.
The “using” keyword in C# is one of the best friends of programmers but many of us may not realize this. The “using” keyword is used in two cases – First when importing a namespace in your code and second in a code block..
Best Practices for handling exceptions
Sep 18, 2007.
This article shall explain some of the best practices when you deal with exceptions.
Best Practices for Data Transfer in SQL Server 2005
Jun 23, 2008.
This article talks about some best practices and the process of data transfer in SQL Server 2005.
Application Architecture for .NET Applications
Jan 08, 2009..
Make thumbnail image using ASP.Net
Feb 12, 2009.
This article explains the best ways to create thumbnail images dynamically in Asp.Net...
Faster Performance of Deployed ASP.Net Sites
May 14, 2009.
This article provides a few tips to ensure your deployed ASP.Net always runs with the best possible performance and no security information leakages....
.NET Best Practice No: 1:- Detecting High Memory consuming functions in .NET code
Aug 15, 2009.
This article discusses the best practices involved using CLR profiler for studying memory allocation.
.
Guide to Improving Code Performance in .NET: Part II
Sep 01, 2009.
This article explains about better Exception Handling practices in C#.
.NET Best Practice No: 3:- Using performance counters to gather performance data
Sep 02, 2009.
This article discusses how we can use performance counter to gather data from an application. So we will first understand the fundamentals and then we will see a simple example from which we will collect some performance data.
Best technique of sending bulk email.
Sep 10, 2009.
This article describe how to send bulk email.
Best Practices No 5: Detecting .NET application memory leaks
Sep 29, 2009.
In this article we are going to detect the .NET application memory leaks..
Bad Practices: Locking on Non-shared Objects in Multi-threaded Applications
Apr 24, 2010.
In this article we will see one of the bad practices developers always do.
Customized Exception Handling
May 05, 2010.
In this article you will learn how to use Customized Exception Handling Using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling
Knowing When to Leave Your Programming Job - Part I
Jul 16, 2010.
This article is the first part in a series to guide you in deciding when it's best to leave your Programming Job. Part I addresses a host of myths that are associated with staying at a dead-end job, and how to combat them.
A Potentially Helpful C# Threading Manual
Jul 27, 2010.
The article will focus on threading constructs and as such, is meant for both the beginner and those who practice multithreading regularly.
How to Get a New Job in 20 Days
Sep 08, 2010.
Successfully implemented and proven processes and best practices to get your dream job quickly.
ReSharper - The Magic Bullet For Visual Studio
Nov 03, 2010.
If you are doing coding on a daily basis then ReSharper for Visual Studio is a life changing product. With ReSharper you will see a change in productivity and maintainability in your programming practices. Read on to see how ReSharper can help you.
Introducing Web Client Software Factory (WCSF)
Nov 25, 2010.
The Web Client Software Factory is a framework for developing ASP.NET and Ajax applications implementing proven patterns and practices. This framework is part of Microsoft Patterns & Practices..
WCF: Error Handling and FaultExceptions
Jan 12, 2011.
This article reviews WCF error handling: FaultExceptions, FaultCodes, FaultReasons and custom FaultExceptions and then discusses best practices for error handling...
Comparison of MVC implementation between J2EE and ASP.NET, Who is the best? Part 1
Mar 19, 2011.
This article is a comparison of MVC implementation between J2EE and ASP.NET.
CMMI (Capability Maturity Model Integration)
Mar 23, 2011.
CMMI defines practices that businesses have implemented on their way to success. Practices cover topics that include collecting and managing requirements, formal decision making, measuring performance, planning work, handling risks, and more.
Best SharePoint Upgrade Practices
Mar 24, 2011.
Before planning a SharePoint upgrade, certain key points carry importance for a successful implementation.
How do you convert numbers to words
Mar 29, 2011.
There are many solutions to converting numbers to words, the best one is a matter of choice, the bigger the number, the more you have to deal with, or do you!
Comparison of Who is the Best? MVC Implementation Between J2EE Struts 2 & ASP.NET MVC 2 - Part 2
May 06, 2011.
This article will compare the frameworks of Java and ASP.NET.
Efficient Implementation of Minimum and Maximum Functions with Application in GUI Design
May 18, 2011.
This article provides ready-to-use solutions to the problem and explains several examples where a proposed solution proves to be useful in practice.
Track Last Login of a WebSite Visitor
May 23, 2011.
In this article you will learn the best way to track and update the last login date and time of a site visitor.
Experiencing SQL Server 2008 Database Projects in Visual Studio 2010
May 31, 2011.
This article explains or gives a small introduction to the new project template available under .NET Framework 4. Here, I am trying my best to explain the template because I am also exploring and learning this new template..
Advances in .NET Event Handling
Aug 09, 2011.
This article covers several situations that occur in practice when coding event driven applications. Pitfalls and bad designs are outlined and examples of proper event handling are given..
Silverlight Chart Control - Part 1
Aug 28, 2011.
In this article we are going to see how we can use the Silverlight Chart Control to create Charts which are always the best way of data visualization.
Rarely used keywords in CSharp but Frequently asked in discussions [Beginners]
Sep 15, 2011.
There are a few words that we rarely use in day to day C# practices [I’m focusing readers who are beginners]. But I’ve seen them in either online exams or other IT quiz shows. So I came to write something about those untouched keywords.
ScaffoldColumn(bool value) vs HiddenInput(DisplayValue = bool value) in MVC
Nov 11, 2011.
In this article, we will see what the use of ScaffoldColumn and HiddenInput. We will also compare what the key differences between these two attribute and what scenario we should consider these attributes for usage. So, accordingly prior to my articles on MVC we will just add these attributes and we will see what the best we can produce.
Simple And Best Way of Implementing the Repository Pattern
Jan 02, 2012.
I will try to explain in a very simplest method to understand repository pattern..
Arrays in C
Feb 09, 2012.
Arrays are a linear data structure that stores the same type of data in contiguous memory locations. Arrays are best used to store data in contiguous memory locations.
Easiest and Best Way to Use WCF OData Services and Silverlight Client
Feb 10, 2012.
The Open Data Protocol (OData) is an open protocol for sharing data, based on Representational State Transfer (REST). In this article, I would like to explain OData using WCF..
Good Practices to Write Stored Procedures in SQL Server
Mar 01, 2012.
This explains the good practices for writing stored procedures in SQL Server and the advantages of writing stored procedures.
How to Configure Parental Controls in Windows 8
Apr 08, 2012.
Parental controls is one of the best tools available in Microsoft’s operating systems. Parental control is used to protect your children from using system and also restricting the period they use it.
Converting Cardinal Numbers to Ordinal using C#
Apr 09, 2012.
A problem which often arises in practice is how to convert a cardinal number to its ordinal equivalent.
How to Configure Best Bet in SharePoint 2010 Search
May 01, 2012.
In this article, I am showing you how to configure Best Bet in Search..
Test Driven Development Basic
Aug 04, 2012.
In this post we will see how we can go about Test Driven Development, there are many advantages practicing TDD, we’ll just cover basics to start with..
ASP.NET Best Practices
Sep 03, 2012.
In this article we will explore some of the best practices in ASP.NET development.
ASP.NET Performance Practices
Sep 03, 2012.
In this article we will explore some tips for improving ASP.NET performance.
Abstract Factory Pattern in VB.NET
Nov 10, 2012.
The abstract factory pattern comes under the classification of creational patterns, which deal with the best way to create objects. The Abstract Factory provides an interface to create and return one of several families of related objects.
Commonly Asked SQL Queries
Mar 18, 2013.
Here I am sharing some of the SQL queries which are commonly asked. I was practicing with such queries and wanted to know more from people.
About Best-Practices.
#include <sys/ipc.h>
#include <sys/shm.h>

int shmctl(int shmid, int cmd, struct shmid_ds *buf);

shmctl() performs the control operation specified by cmd on the System V shared memory segment whose identifier is given in shmid. On error, -1 is returned, and errno is set appropriately:

EACCES
IPC_STAT or SHM_STAT is requested and shm_perm.mode does not allow read access for shmid, and the calling process does not have the CAP_IPC_OWNER capability in the user namespace that governs its IPC namespace.

EFAULT
The argument cmd has value IPC_SET or IPC_STAT but the address pointed to by buf isn't accessible.

EIDRM
shmid points to a removed identifier.

EINVAL
shmid is not a valid identifier, or cmd is not a valid command. Or: for a SHM_STAT operation, the index value specified in shmid referred to an array slot that is currently unused.

EOVERFLOW
IPC_STAT is attempted, and the GID or UID value is too large to be stored in the structure pointed to by buf.

SEE ALSO
mlock(2), setrlimit(2), shmget(2), shmop(2), capabilities(7), svipc(7)
It is 6 AM. I am awake, summarizing the sequence of events leading to my way-too-early wake-up call. As these stories usually start, my phone alarm went off. Sleepy and grumpy, I checked the phone to see whether I was really crazy enough to set the wake-up alarm at 5 AM. No, it was our monitoring system indicating that one of our services had gone down. When you compile and launch the following Java code snippet on Linux (I used the latest stable Ubuntu version):
package eu.plumbr.demo;

public class OOM {
    public static void main(String[] args) {
        java.util.List l = new java.util.ArrayList();
        for (int i = 10000; i < 100000; i++) {
            try {
                l.add(new int[100_000_000]);
            } catch (Throwable t) {
                t.printStackTrace();
            }
        }
    }
}
then you will face the Linux out-of-memory (OOM) killer: the kernel, rather than the JVM, kills the process, leaving an "Out of memory: Kill process or sacrifice child" message in its log.
Interesting thing, thanks for sharing.
Was the amount of memory allocated for this application (Xmx) higher than the amount of memory actually available on this host, minus some 500-1000m for the OS itself? Otherwise, I don't get how the JVM wouldn't crash with OOM itself when trying to allocate another array.
Need a little help please:
i have been trying to complete this for a couple days but i get more and more confused everytime i try.
When an object is falling because of gravity, the following formula can be used to determine the distance the object falls in a specific time period.
d = ½gt²

The variables in the formula are as follows: d is the distance in meters, g is the acceleration due to gravity (9.8 m/s²), and t is the amount of time, in seconds, that the object has been falling.
Write a program that demonstrates the function by calling it in a loop that passes the values 1 through 10 as arguments and displays the return value.
*Additional to this problem my prof wants two functions that calculate the falling distance.
function 1 passes arguments by value
function 2 passes argument by reference
outputs should be in form of a table
What's confusing is that the prof established that main should resemble this:
for(--------------)
call (by value)
output
for(---------------------)
call (by reference)
output
call by ref works fine but call by value only works when i put double d as global but then it only comes back as 0's in the output. its not reading the equation
my code so far:
#include <iostream>
#include <cmath>
#include <string>
using namespace std;

void fallingDistance2(double &);
double fallingDistance1(double);

const double g = 9.8;
int t;
double d;

int main()
{
    cout << "calculated by passby values.\n";
    cout << "Time \t\t Distance\n";
    cout << "-------------------\n";
    for (t = 1; t <= 10; t++)
    {
        fallingDistance1(d);
        cout << t << "\t\t" << d << endl;
    }
    cout << "calculated by reference values.\n";
    cout << "Time \t\t Distance\n";
    cout << "-------------------\n";
    for (t = 1; t <= 10; t++)
    {
        fallingDistance2(d);
        cout << t << "\t\t" << d << endl;
    }
    return 0;
}

double fallingDistance1(double d)
{
    d = 0.5 * 9.8 * t * t;
    return d;
}

void fallingDistance2(double &refd)
{
    refd = 0.5 * 9.8 * t * t;
}
I'm borrowing this post from another forum, because the guy was having troubles very similar to what i'm having, and i'm doing a very similar project. the assignment is one of the Josephus problem, including, from a command line, have a user enter 2 values, 1 for the amount of people, and the other for the number of people to pass. the Josephus problem is:
N people, numbered 0 to N-1, are sitting in a circle. Starting at person 0, a hot potato is passed. After M people touch the potato, the person holding the potato is eliminated, the circle closes ranks, and the game continues with the person sitting after the eliminated person starting with the potato. The last person remaining wins.
Write a program to accept two command line inputs representing N (the number of people in the circle) and M (the number of people required to touch the hot-potato before a person is eliminated). Initialize a StringBuffer to consist of the characters numbered 0 through N-1 (note: since there are 2^16 = 65,536 Unicode characters, your program will be capable of solving Josephus' Problem up to this value of N).
I've got my code to where I think it should at least work with figuring out the last person left after the entire Josephus problem is carried out, but I keep getting out-of-bounds runtime errors whenever I run my code, like the following:
Exception in thread "main" java.lang.StringIndexOutOfBoundsException:String index out of range
Here is my code thus far:
import java.lang.StringBuffer;
public class Josephus {
public static void main (String[] args) {
//These two variables store the two values for the people and the passes.
int num = 0;
int pass = 0;
//This variable stores the value of the current index.
int index = 0;
//These two variables store the command line inputs.
String N = args[0];
String M = args[1];
//Output the command line inputs.
System.out.println("There will be " + N + " people.");
System.out.println("A person will be eliminated for every " + M + " touches.");
//Convert the String command line inputs to integers.
Integer tmpN = Integer.valueOf(N);
num = tmpN.intValue();
Integer tmpM = Integer.valueOf(M);
pass = tmpM.intValue();
//Array stores the deleted values.
int[] deleted = new int[num];
//Create a new StringBuffer for the program.
StringBuffer josephus = new StringBuffer(num);
//Create the values needed for the program.
for(int i=0; i<num; i++) {
josephus.append((char)(i+1));
}
//Test garbage.
System.out.println();
System.out.println((int)josephus.charAt(pass-2));
//Loop to solve the Josephus problem. Note the -1 may need changed.
for(int i=0; i<num-1; i++) {
if((index+pass) > (num)) {
index = (index + pass) % (num);
//Get the value from the StringBuffer, then delete the value.
deleted[i] = (int)josephus.charAt(index);
josephus.delete(index, index+1);
} else {
index += pass;
deleted[i] = (int)josephus.charAt(index);
josephus.delete(index, index+1);
}
}
System.out.println("The final remaining member is " + (int)josephus.charAt(0));
}
}
Revision history for Perl extension CGI::Application::Plugin::TT.

1.05 Fri Jun 4 14:25:49 EST 2010
    - fix dev popup support by html encoding the data sent to the popup window (patch by Clayton L. Scott)
    - fix test failure on windows (patch by Alexandr Ciornii)

1.04 Wed Nov 1 07:08:50 EST 2006
    - add TEMPLATE_PRECOMPILE_DIR option which can automatically compile all your templates on startup (patch by Michael Peters)
    - slightly refactored the default tt_template_name code
    - doc fix (Trammell Hudson/Robert Sedlacek)

1.03 Thu May 18 12:27:26 EDT 2006
    - the default tt_template_name method now accepts a parameter that specifies how many caller levels we walk up (from the calling method) to find the method name to use as a base for the template name (defaults to 0)
    - a side effect of this change is that you can now pass any parameters you like to your custom TEMPLATE_NAME_GENERATOR method, when calling $self->tt_template_name(...).

1.02 Sun Feb 5 20:11:23 EST 2006
    - Allow call to tt_process with no parameters (brad -at- footle.org)

1.01 Wed Jan 25 16:00:38 EST 2006
    - Fix doc error in synopsis (Jonathan Anderson)
    - Before calling 'call_hook' make sure it exists
    - Update pod coverage tests

1.00 Wed Oct 19 14:11:22 EDT 2005
    - added support for tt_include_path to return the current value of INCLUDE_PATH

0.10 Fri Sep 23 08:58:34 EDT 2005
    - fix tests for DevPopup so it doesn't fail if it is not installed (Thanks to Jason Purdy and Rhesa Rozendaal)

0.09 Wed Sep 21 15:59:03 EDT 2005
    - added support for the load_tmpl hook in CGI::App
    - added support for the DevPopup plugin
    - added pod coverage tests

0.08 Sun Jul 31 17:38:16 EDT 2005
    - Made some small doc changes that I meant to put in the last release.

0.07 Sat Jul 30 9:18:46 EDT 2005
    - fixed Windows path bug in test suite (Emanuele Zeppieri)
    - Simplify the pod tests according to Test::Pod docs
    - Support the new callback hooks in CGI::Application 4.0
    - Automatically add { c => $self } to template params (see docs under DEFAULT PARAMETERS)
    - minor doc cleanups

0.06 Thu Feb 3 15:38:39 EST 2005
    - Document use of tt_config as a class method for singleton support
    - Some other small documentation cleanups

0.05 Mon Jan 24 11:47:06 EST 2005
    - add tt_template_name which autogenerates template filenames
    - tt_process will call tt_template_name if the template name is not provided as an argument
    - add Singleton support for TT object

0.04 Fri Dec 3 12:02:56 EST 2004
    - die if there is an error processing a template in tt_process

0.03 Sun Sep 19 18:13:03 EST 2004
    - scrap CGI::Application::Plugin support for simple Exporter system.
    - Moved module to the CGI::Application::Plugin namespace.
    - module no longer depends on inheritance, so just use'ing the module will suffice to import the required methods into the current class.

0.02 Mon Jul 26 23:44:39 EST 2004
    - add support for the new CGI::Application::Plugin base class. This means the usage has changed. Altering the inheritance tree is no longer necessary, as you only need to use the module and it will import the plugin methods into the callers namespace automatically. See the docs for more details...

0.01 Sun Feb 15 16:10:39 EST 2004
    - original version
Java enum examples
Simple enum. The semicolon after the last element is optional when it ends the enum definition.

    public enum Color {
        WHITE, BLACK, RED, YELLOW, BLUE; // the ; is optional here
    }

Enum embedded inside a class. Outside the enclosing class, elements are referenced as Outter.Color.RED, Outter.Color.BLUE, etc.

    public class Outter {
        public enum Color {
            WHITE, BLACK, RED, YELLOW, BLUE
        }
    }

Enum that overrides the toString method. A semicolon after the last element is required here to be able to compile it. More details on overriding enum toString method can be found here.

    public enum Color {
        WHITE, BLACK, RED, YELLOW, BLUE; // the ; is required here

        @Override
        public String toString() {
            // only capitalize the first letter
            String s = super.toString();
            return s.substring(0, 1) + s.substring(1).toLowerCase();
        }
    }

Enum with additional fields and a custom constructor. Enum constructors must be either private or package-private (the default); the protected and public access modifiers are not allowed. When a custom constructor is declared, all element declarations must match that constructor.

    public enum Color {
        WHITE(21), BLACK(22), RED(23), YELLOW(24), BLUE(25);

        private int code;

        private Color(int c) {
            code = c;
        }

        public int getCode() {
            return code;
        }
    }

Enum that implements interfaces. An enum can implement any interface. All enum types implicitly implement java.io.Serializable and java.lang.Comparable.

    public enum Color implements Runnable {
        WHITE, BLACK, RED, YELLOW, BLUE;

        public void run() {
            System.out.println("name()=" + name() + ", toString()=" + toString());
        }
    }

A sample test program to invoke this run() method:

    for (Color c : Color.values()) {
        c.run();
    }

Or,

    for (Runnable r : Color.values()) {
        r.run();
    }

A more complete example with custom fields, constructors, getters, a lookup method, and even a main method for quick testing:

    import java.util.HashMap;
    import java.util.Map;

    public enum Status {
        PASSED(1, "Passed", "The test has passed."),
        FAILED(-1, "Failed", "The test was executed but failed."),
        DID_NOT_RUN(0, "Did not run", "The test did not start.");

        private int code;
        private String label;
        private String description;

        /**
         * A mapping between the integer code and its corresponding Status
         * to facilitate lookup by code.
         */
        private static Map<Integer, Status> codeToStatusMapping;

        private Status(int code, String label, String description) {
            this.code = code;
            this.label = label;
            this.description = description;
        }

        public static Status getStatus(int i) {
            if (codeToStatusMapping == null) {
                initMapping();
            }
            return codeToStatusMapping.get(i);
        }

        private static void initMapping() {
            codeToStatusMapping = new HashMap<Integer, Status>();
            for (Status s : values()) {
                codeToStatusMapping.put(s.code, s);
            }
        }

        public int getCode() {
            return code;
        }

        public String getLabel() {
            return label;
        }

        public String getDescription() {
            return description;
        }

        @Override
        public String toString() {
            final StringBuilder sb = new StringBuilder();
            sb.append("Status");
            sb.append("{code=").append(code);
            sb.append(", label='").append(label).append('\'');
            sb.append(", description='").append(description).append('\'');
            sb.append('}');
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(Status.PASSED);
            System.out.println(Status.getStatus(-1));
        }
    }

To run the above example:

    java Status
    Status{code=1, label='Passed', description='The test has passed.'}
    Status{code=-1, label='Failed', description='The test was executed but failed.'}
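Beyond toString(), every enum also inherits a handful of frequently used built-in methods. A small sketch (the Fruit enum here is illustrative, not from the article; any of the Color enums above behave the same way):

```java
public class BuiltinsDemo {
    // Illustrative enum; declaration order determines ordinal() and compareTo().
    enum Fruit { APPLE, BANANA, CHERRY }

    public static void main(String[] args) {
        Fruit f = Fruit.valueOf("BANANA");          // lookup by exact constant name
        System.out.println(f.name());               // BANANA
        System.out.println(f.ordinal());            // 1 (0-based declaration position)
        System.out.println(Fruit.values().length);  // 3
        System.out.println(f.compareTo(Fruit.APPLE) > 0); // true: BANANA is declared later
    }
}
```

Note that valueOf() is case-sensitive and throws IllegalArgumentException for an unknown name.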
573 comments:
Seriously, this is a great post ;)
Been looking for almost 30min all over google for different enum examples and your single post showed them all, thanks alot ^^
Good examples for understanding enum
This is an great INCORRECT post. Cannot do:
public enum Color {
WHITE(21), BLACK(22), RED(23), YELLOW(24), BLUE(25);
private int code;
private Color(int c) {
code = c;
}
public int getCode() {
return code;
}
}
THIS IS SO CORRECT. IT WORKS FINE.
public enum Color {
WHITE(21), BLACK(22), RED(23), YELLOW(24), BLUE(25);
private int code;
private Color(int c) {
code = c;
}
public int getCode() {
return code;
}
}
This is a great incorrect post. You cannot say 'anonymous'. instead say your real name here.
Great post on java enums. Can you explain overriding methods inside enum variables or emum method. I dont any examples on this.
This is absolutely correct and help full post .....thank you very much for the post
Overriding enum example
public enum Element{
EARTH, WIND,
FIRE {
public String info() {
return "HOT";
}
};
public String info() {
return "element";
};
}
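A runnable version of the constant-specific-body idea shown above, where FIRE overrides info() and the other constants fall back to the default (the wrapper class and main method are added only to make it executable):

```java
public class ElementDemo {
    enum Element {
        EARTH, WIND,
        FIRE {
            @Override
            public String info() {
                return "HOT"; // constant-specific body overrides the default
            }
        };

        // Default implementation used by constants without their own body.
        public String info() {
            return "element";
        }
    }

    public static void main(String[] args) {
        System.out.println(Element.EARTH.info()); // element
        System.out.println(Element.FIRE.info());  // HOT
    }
}
```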
Nice Info Thanks!!
The example of implement interface and implement runnable (This example doesn't has any relation with threading) just explain the enum can implement any interface and this correct.
Thanks
Its wrong you need to have default construstor for Enums. Hence the original post may not compile.
You do not need a default constructor, please see:
Enums are one of the few things that work with switch/case statements as well:
switch(db.getType()) // returns an enum element
{
case MYSQL: // no need to qualify names
...
case POSTGRES:
...
}
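A self-contained sketch of switching on an enum, as described in the comment above (the DbType enum and port numbers are made up for illustration; note that case labels are written without the enum type prefix):

```java
public class SwitchDemo {
    enum DbType { MYSQL, POSTGRES, SQLITE }

    static int defaultPort(DbType t) {
        switch (t) {
            case MYSQL:    return 3306; // no DbType. prefix inside case labels
            case POSTGRES: return 5432;
            default:       return -1;   // e.g. SQLITE has no network port
        }
    }

    public static void main(String[] args) {
        System.out.println(defaultPort(DbType.MYSQL));    // 3306
        System.out.println(defaultPort(DbType.POSTGRES)); // 5432
    }
}
```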
Thank you very much for this article!
It was owsumm :)
Very good post. Thanks.
Great post, thanks for it! =)
nice post .............
nice blog.
Java is such a retarded language... like taking C++ and blowing its kneecaps off.
Very nice and self explanatory article, was reading quite a few article on it but this is best one.
@Ramraj
You can go over on these articles Enum Examples or Enum in java
Wonderful tutorial man. Thank you.
this is helpful
More about Enum.
----------------
You cannot create Object of Enum type. It's objects are fixed.
public enum Color {
WHITE(21), BLACK(22), RED(23), YELLOW(24), BLUE(25);
private int code;
private Color(int c) {
code = c;
}
public int getCode() {
return code;
}
}
In this example WHITE, BLACK,... are object of Enum type Color.
Its a good example for understanding about the Enum.
Thanks keep on posting like this.
It works great!
public enum Books {
MyBook, HisBook, YourBook;
};
class Test {
public static void main(String args[]) {
for(Books b : Books.values()) {
System.out.println(b);
System.out.println(b.ordinal());
}
}
}
This code compiles well. But when run generates "Exception in thread "main" java.lang.NoSuchMethodError: main".
What is the problem here?
The Test class should be
public class Test {...}
Thank you very much...
It's easier to understand each usage... Thank you again...
Very useful examples. Thank you :)
Can you suggest me in a case where if i have a condition which consists of the following code
enum CoffeeSize
{
BIG(8),
HUGE(10),
OVERWHELMING(16)
{
public String getLidCode()
{
return "A";
}
};
CoffeeSize(int ounces)
{
this.ounces = ounces;
}
private int ounces;
public int getOunces()
{
return ounces;
}
public String getLidCode()
{
return "B";
}
}
Now how can i override this getLidCode() & get the Output in the Class.
really a nice post..............
You know what this is missing: How do you use the numbers to initialize an enum?
What I mean is, I have a direction enum:
public enum Direction {
NORTH (0), EAST (90), SOUTH (180), WEST (270);
private int degrees;
Direction(int arg1) {
degrees=arg1;
}
}
And I want to be able to set certain variable of type Direction equal to 180 (for SOUTH). Is this possible, or can you only do that when you have strings in parentheses?
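Not directly: valueOf() only looks up a constant by its name string. A common pattern for this (a sketch, not from the original comment; the fromDegrees name is made up) is a static reverse-lookup method that scans values():

```java
public class DirectionDemo {
    enum Direction {
        NORTH(0), EAST(90), SOUTH(180), WEST(270);

        private final int degrees;

        Direction(int degrees) {
            this.degrees = degrees;
        }

        // Reverse lookup: find the constant whose field matches the given value.
        static Direction fromDegrees(int degrees) {
            for (Direction d : values()) {
                if (d.degrees == degrees) {
                    return d;
                }
            }
            throw new IllegalArgumentException("no direction for " + degrees);
        }
    }

    public static void main(String[] args) {
        System.out.println(Direction.fromDegrees(180)); // SOUTH
    }
}
```

For larger enums, the scan can be replaced by a static Map built once, as in the Status example in the article.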
Very good post.
In short, Enum can have constructor, methods and inbuilt objects.
enum Bear
{
KF(100),RC(150),FO(200),TEN(),Two(10,20);
int price,x,y;
Bear(int x,int y)
{
this.x=x;
this.y=y;
price=x+y;
}
Bear(int price)
{
this.price=price;
}
Bear()
{
price=500;
}
public int getPrice()
{
return price;
}
}
class EnumDemo1
{
public static void main(String[] args)
{
Bear b1=Bear.KF;
System.out.println(b1+"--"+b1.price);
Bear b2[]=Bear.values();
for(Bear b3:b2)//enhanced for loop
{
System.out.println(b3+"----"+b3.getPrice());
}
}
}
what is difference b/w these two:
RED;
RED();
The following are the compilation errors:
Enum types must not be declared abstract
To explicitly declare an enum type to be final.
To attempt to explicitly instantiate an enum type
To declare an abstract method within the enum constant body
Sankar.lp
Java Training
Thanks :)
great.helpful post
In this method:
public static Status getStatus(int i) {
if (codeToStatusMapping == null) {
initMapping();
}
Status result = null;
for (Status s : values()) {
result = codeToStatusMapping.get(i);
}
return result;
}
I do not see why you have a for loop. A hashmap is meant to facilitate the lookup in one go, and at the point where you do the get, it's supposedly filled correctly.
Am I missing something, or is the example imperfect?
The for loop in getStatus(int) method is indeed not needed. It does the same, repetitive lookup multiple times. I've fixed it. Thanks for spotting it.
Good Tuto, Thanks ;
Genius post!
I've been inspired to give a more efficient version of the getStatus() function in the complex example.
Here goes:
private static Status[] statusArr; // only required member variable
public static Status getStatus(int i) {
if (statusArr == null) {
statusArr = Status.values();
}
return statusArr[i];
}
Who's there ?
Great post!
Beautiful and elegant.
Thanks for posting this.
Pieter Malan.
One of the best article to learn Enum. I would also suggest to read through following comprehensive 10 Enum Examples in Java and 15 Java Enum Interview Questions. Both of them provide good overview of different enum features.
very much useful
Becks said...
June 21, 2013 8:40 AM
In response to Becks post about "more efficient method" -- just wanted to point out to others that this only works if you let the ENUM values be assigned in order 0,1,2,3,4,5... In the example provided by the author, Becks solution would not work because it would check into an array with a "-1" value for example.
In summary, Becks optimization is only useful in limited circumstances. Happy coding!
great post about enums, thanks a lot for the effort
great post about enums, thanks a lot
In the complex example, usage of Hashmap cache access is an anti-pattern
1. Hashmap create useless memory overhead (hashmap has bad footprint)
2. hashmap get() isn't really efficient vs values loop
3. please stop to make small inefficient optimization, jvm do it for us
4. values() is more faster than map, try it ;)
-----------
public static Status getStatusWithoutMap(int i) {
for (Status s : values) {
if (s.code == i) {
return s;
}
}
return null;
}
public static void main(String[] args) {
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
Status.getStatusWithoutMap(25);
}
System.out.println((System.currentTimeMillis() - start) + "ms");
start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
Status.getStatus(25);
}
System.out.println((System.currentTimeMillis() - start) + "ms");
}
Thanks bro...
awesome tips on java enum!
Split only after comma 3 times appear in Java
Question asked by my friend :
I have a string that looks like this:
0,0,1,2,4,5,3,4,6
What I want returned is a string[] that was split after every 3rd comma, so it would look like this:
[ "0,0,1", "2,4,5", "3,4,6" ]
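One straightforward approach (a sketch, not from the original question; the class and method names are made up): split on every comma, then regroup the pieces in threes:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitEveryThird {
    // Split the input after every 3rd comma by walking the comma-separated parts.
    static List<String> splitEveryThirdComma(String s) {
        String[] parts = s.split(",");
        List<String> out = new ArrayList<>();
        StringBuilder group = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (group.length() > 0) {
                group.append(',');
            }
            group.append(parts[i]);
            // Close a group after every 3rd part, or at the end of the input.
            if ((i + 1) % 3 == 0 || i == parts.length - 1) {
                out.add(group.toString());
                group.setLength(0);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(splitEveryThirdComma("0,0,1,2,4,5,3,4,6"));
        // [0,0,1, 2,4,5, 3,4,6]
    }
}
```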
Really a good post.Thank you very much.
Thanks for sharing good examples.
Great Post..
Risk management consulting services
ROI consultant minnesota
consulting company minnesota
I have read your blog its very attractive and impressive. I like it your blog.
Abinitio Online Training
Hadoop Online Training
Cognos Online Training
Your blog is really interested.Please keep sharing.
PHP Online Training
Pega Online Training
Oracle Soa Online Training
It is so interesting to read and thanks a lot for posting such a good blog....
BEST JAVA TRAINING IN NOIDA
BEST DIGITAL MARKETING TRAINING IN NOIDA
BEST TRAINING IN NOIDA here we stitch clothes as per your design and selection
Wonderful post. I am learning so many things from your blog.keep posting.
UNIX Shell scripting training in chennai|ORACLE apps finance training in chennai|Informatica Online Training
Your new valuable key points imply much a person like me and extremely more to my office workers. With thanks.
UNIX Shell scripting training in chennai
ORACLE apps finance training in chennai
Informatica Online Training here we stitch clothes as per your design and selection here we stitch clothes as per your design and selection
Thank u for this information
Mainframe Training In Chennai | Hadoop Training In Chennai | ETL Testing Training In Chennai here we stitch clothes as per your design and selection
Your very own commitment to getting the message throughout came to be rather powerful and have consistently enabled employees just like me to arrive at their desired goals.
Best Java Training Institute Chennai
Fantastic Article ! Thanks for sharing this Lovely Post !! here we stitch clothes as per your design and selection
nice post here we stitch clothes as per your design and selection here we stitch clothes as per your design and selection here we stitch clothes as per your design and selection
Needed to compose you a very little word to thank you yet again regarding the nice suggestions you’ve contributed here. Best Java Training Institute in chennai here we stitch clothes as per your design and selection here we stitch clothes as per your design and selection
I have read your blog its very attractive and impressive. I like your blog ios app development online training Bangalore
Your new valuable key points imply much a person like me and extremely more to my office workers. With thanks; from every one of us.
Java Training Institute Bangalore
I believe there are many more pleasurable opportunities ahead for individuals that looked at your site.
amazon-web-services-training-institute-in-chennai
I feel really happy to have seen your webpage and look forward to so many more entertaining times reading here. Thanks once more for all the details.
uipath training in bangalore
Those guidelines additionally worked to become a good way to recognize that other people online have the identical fervor like mine to grasp great deal more around this condition.
Best Java Training Institute Chennai
Java Training Institute Bangalore
KVCH provides best JAVA.
Best JAVA Summer Internship
Best android Summer Training
Best Linux Summer Internship
Thanks for providing such a useful information shared here.
uipath training in bangalore
Very Nice Blog, I like this Blog thanks for sharing this blog , I have got lots of information from this Blog. Do u know about Dotnet developer
Dot Net Training in Bangalore
I believe there are many more pleasurable opportunities ahead for individuals that looked at your site.
Those guidelines additionally worked to become a good way to recognize that other people online have the identical fervor like mine to grasp great deal more around this condition.
Best Java Training Institute Chennai
Amazon Web Services Training in Chennai
The information which you have provided is very good. It is very useful who is looking for selenium Online Training
Your good knowledge and kindness in playing with all the pieces were very useful. I don’t know what I would have done if I had not encountered such a step like this.
Best selenium training Institute in chennai
I believe there are many more pleasurable opportunities ahead for individuals that looked at your site.
Best Java Training Institute Chennai
Amazon Web Services Training in Chennai
This is excellent information. It is amazing and wonderful to visit your site.Thanks for sharing this information,this is useful to me...
Embedded training in chennai | Embedded training centre in chennai | Embedded system training in chennai | PLC Training institute in chennai | IEEE final year projects in chennai | VLSI training institute
I simply wanted to write down a quick word to say thanks to you for those wonderful tips and hints you are showing on this site.
Best Python training Institute in chennai
I am really happy with your blog because your article is very unique and powerful for new reader.
Best Aws training Institute in chennai | https://javahowto.blogspot.com/2008/04/java-enum-examples.html | CC-MAIN-2020-24 | refinedweb | 4,464 | 57.16 |
I tried to make a program for a friend in C, but I have no idea why it's not working. Could someone please help me? (I'd like a simple fix if possible, because I've only started to study C a couple of weeks ago.)
```c
#include <stdio.h>
#include <string.h>

int main ()
{
    char *name="alina";
    char *input;
    printf ("what's your name? \n");
    scanf ("%s",&input);
    if (input=="alina")
        printf("your name is %s good job!\n ",&name);
    if (input!="alina")
        printf("are you sure? open the program again and insert the correct name");
    while (1);
```
You made some errors. First, if you want to read a string you can use `%s`, but you have to use an array of `char` in which to store it. That is, you have to write something like this:

```c
char input[100];
scanf("%s", input);
```

and then it will work. This snippet means: I need to store a string, so first I create a place in which to store it (note that the string can be at most 99 characters long; the array has size 100, but the last character is used to mark the end of the string), and then I use `scanf` to read the input.

The second error is that you can't compare two strings with `==` or `!=` the way you compare numbers. The `string.h` library provides the `strcmp` function for this:

```c
if (strcmp(name, input) == 0)  /* the strings are equal */
    ...
else
    ...
```

remembering that

```c
char *name = "Alina";
char input[100];
```

Finally, here is a corrected version of your code:

```c
#include <stdio.h>
#include <string.h>

int main()
{
    char *name = "Alina";
    char input[100];

    printf("What's your name?\n");
    scanf("%s", input);

    if (strcmp(name, input) == 0)
        printf("Your name is %s, good job!\n", name);
    else
        printf("Are you sure? Open the program again and insert the correct name\n");

    return 0;
}
```

The `while (1)` at the end of your code is a problem as well: it starts an infinite loop that never ends, leaving your program hanging. You definitely want to remove it!
This project uses an Android app with a virtual LED matrix, so the character shown on a physical LED matrix can be updated directly from a mobile device.
We will describe the complete path: a character is drawn in an Android app, sent to an Arduino UNO via Bluetooth, and then forwarded to a GreenPAK5 via I2C, which formats the data to display the character on an LED matrix.
The project consists of three stages:
- Building the Android application.
- Creating a GreenPAK design.
- Creating the Arduino code.
Circuit Schematic
How Does the LED Matrix Work?
LED dot matrices are very popular because they are visible in a variety of ambient conditions. In a dot matrix display, multiple LEDs are wired together in rows and columns to minimize the number of pins required. For example, an 8×8 matrix of LEDs has the cathodes connected together in rows (R1 through R8) and the anodes in columns (C1 through C8). Each LED is addressed by its row and column number. This arrangement is referred to as a common-row-cathode LED matrix. To illuminate an LED pixel in the matrix, a high signal is applied to its anode (column) and a low signal to its cathode (row). To show characters or symbols we typically need to illuminate many pixels, so we divide the picture into rows and illuminate each row separately in a fast loop.
The human eye can see LEDs blinking at low refresh rates, but above roughly 20 Hz the full character appears essentially flicker-free. In this design we control an 8×7 LED matrix: the last column, C8, is omitted due to the limited number of I/O pins.
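To make the timing constraint concrete, here is a small illustrative helper (mine, not part of the project) that computes the longest time each row may stay lit while keeping the whole frame above a minimum refresh rate:

```c
/* With one row lit at a time, the frame rate equals the row-scan rate
 * divided by the number of rows. To keep the frame rate at or above
 * min_frame_hz, each row may be lit for at most this many milliseconds. */
double max_row_period_ms(double min_frame_hz, int rows)
{
    return 1000.0 / (min_frame_hz * rows);
}
```

For 8 rows and a 20 Hz minimum this gives 6.25 ms per row; the 0.32 ms per-row pulse used in the GreenPAK design corresponds to a frame rate of about 390 Hz, far above the flicker threshold.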
Android Application
In this first stage, we build an application for Android devices that sends data to the Bluetooth module. The app provides the graphical interface that the user sees.
The app interface has 56 buttons. Each button represents a pixel in the LED matrix, so each is represented by a binary variable with two states (on = 1, off = 0).

Each row of LEDs is represented by one byte (one bit per pixel). To find a byte's value, we multiply every button state by its bit weight and take the sum of the products.

P1 through P64 are binary variables that hold the button states (on = 1, off = 0).
B1 = (P1 × 1) + (P2 × 2) + (P3 × 4) + (P4 × 8) + (P5 × 16) + (P6 × 32) + (P7 × 64) + 0

B8 = (P57 × 1) + (P58 × 2) + (P59 × 4) + (P60 × 8) + (P61 × 16) + (P62 × 32) + (P63 × 64) + 0
After this calculation, the character will be represented by 8 bytes (B1 through B8) which will be sent to the Bluetooth module.
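The weighted sums above are ordinary binary bit-packing. As an illustration (the real app computes this with App Inventor blocks, so this C helper is only a sketch):

```c
#include <stdint.h>

/* Pack one row of 7 button states (0 = off, 1 = on) into a byte.
 * states[0] is the least-significant bit, matching
 * B = P1*1 + P2*2 + ... + P7*64; the eighth bit stays 0 because
 * column C8 is unused in this design. */
uint8_t pack_row(const int states[7])
{
    uint8_t b = 0;
    for (int i = 0; i < 7; i++)
        if (states[i])
            b |= (uint8_t)(1u << i);
    return b;
}
```

Calling this once per row yields the 8 bytes (B1 through B8) that are sent over Bluetooth.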
The app can be made easily with the MIT App Inventor; no prior programming experience is required. App Inventor lets you develop applications for Android phones in a web browser using programming blocks.
To create an Android Application, a new project has to be started and the visible components need to be removed from the designer screen.
Then 58 buttons need to be created: 56 buttons for the LED matrix, one list picker for selecting a Bluetooth device (Connect), and one button for sending data via Bluetooth (Print). We also need a Bluetooth client component. Below you can see a screen capture of our Android application's user interface.
To start programming, click the “Blocks” button. Components can be added from the bar on the left side by dragging and dropping. We add the global variables and build an on-click function for each button, which changes its color and saves its state (on/off).
For different buttons, the process can be repeated by changing the button number and the global variable each time.
Once done, we build the mathematical equations that gather each row's bits into one byte (the B1..B8 equations above). The 56 button states are thus packed into eight byte-sized variables.
Finally, the app will send the 8 bytes using Bluetooth as a list of bytes when the print button is clicked.
GreenPAK Design
The Android application sends data to the Arduino, which forwards it to the GreenPAK via I2C. The I2C interface in the GreenPAK SLG46537V is very powerful because it can read and write all of the chip's configuration bits (including output states and ASM RAM). An I2C write command begins with a start bit, followed by a control byte, a word address byte, a data byte, and a stop bit.
The GreenPAK IC can be programmed easily: download the GreenPAK software and open the pre-made design file for this project (“Drawing Characters on an Android App and Displaying It on an LED Matrix”). Connect the GreenPAK development kit to the computer, pop an unprogrammed GreenPAK IC into the development kit socket, and hit Program. The IC will be programmed automatically.
Once the IC is programmed, you can keep the IC in the development kit socket for easier access to the pins, or for volume production, you can create a tiny PCB board to access the chip.
With the GreenPAK IC programmed, now you can skip to the next step.
If you would like to better understand and modify the circuitry, here is an overview of how the GreenPAK was programmed.
The Android application sends data to the Arduino, and from the Arduino to the GreenPAK via the I2C connection. The I2C interface in the Silego SLG46537V can read and write all of its configuration bits (including output states and ASM RAM). An I2C write command begins with a start bit, followed by a control byte, word address byte, data byte, and stop bit. In the GreenPAK design, we start with the I2C block: it is enabled from the properties bar, and the control code is chosen there. Two wires are then connected to pins 8 and 9 (SCL, SDA), and the device address is chosen from the control code list (a number from 0 to 15). In our project, 0 (0b0000) is selected.
Next we need a controller that displays the bytes row by row. For that we use a State Machine (ASM) block, which has 24 state-transition inputs, one nRESET input, and 8 output lines. The ASM block is defined by its state transitions and state outputs. Each state represents one row and outputs the byte received from the Arduino for that row. Moving from the current state to the next happens by applying a high signal to the next state's enable pin and a low signal to the current state's enable pin.
Take State 0 for example, its box contains a state 1 arrow as the next state. This means that in order to transition from State 0 to 1, the input of the State 1 arrow needs to be high when the ASM is in state 0. A counter with pipe delay will be responsible for generating the pulse. CNT4 is used to generate a 0.32ms timer that is used as the one-shot pulse width for all the states. Since the ASM’s inputs are level triggered and not edge triggered, the CNT4’s output can’t simply be used as the trigger for all state transitions (because it would cause almost instantaneous transitions from one state to another instead of waiting for 0.32ms between transitions). To address this, the Pipe Delay macro-cell is used to generate 2 complementary outputs with 50% duty cycle. While one signal is used to transition from even to odd-numbered states, the other is used to transition from odd to even-numbered states.
In every state, the ASM output is the row byte we want to show, so we connect the ASM outputs directly to the GreenPAK output pins, which are in turn connected to the LED matrix columns (pins 14 to 20).
With every state, the corresponding row must be activated. Because the cathodes are wired together in rows, the rows are active-low, so a low signal is applied to the selected row. A chain of 8 DFFs produces a low signal on exactly one output (changing with every clock) while keeping the other outputs high. This low signal moves from one row to the next with every rising clock edge; to achieve this, each DFF's output is connected to the next DFF's D pin.
Arduino Code
The Arduino, which is connected to the Bluetooth module, receives the 8 bytes over its UART interface. It then repackages the data and sends it to the GreenPAK over I2C.
The HC-06 module used for Bluetooth communication talks over UART at a 9600 baud rate; that number is used in the code.

The Arduino UNO's serial interface is connected to pins 0 and 1 (RX, TX) for UART communication with other components, so the Bluetooth module's TX is connected to the Arduino's RX (pin 0). After receiving the data bytes, the Arduino sends them to the GreenPAK using the I2C protocol. To make this easier, we used Silego's Arduino library.
The code can be seen here:
```c
#include <Wire.h>
#include "Silego.h"            // Include Silego header file
#include "macros/SLG46531.h"   // Include macros for SLG46531

byte rows_byte[8];   // array to hold the row bytes
int i = 0;           // counter
boolean s = 0;       // state flag

// Create an instance of the Silego class called "silego"
// with device address 0x00
Silego silego(0x00);

void setup() {
  Serial.begin(9600);  // Bluetooth module's baud rate
}

void loop() {
  while (Serial.available() > 0) {  // receive bytes coming from Bluetooth
    rows_byte[i] = Serial.read();
    delay(5);
    i++;
    s = 1;  // set flag
  }
  if (s == 1) {  // send bytes to GreenPAK using I2C
    silego.writeI2C(ASM_STATE_0, rows_byte[0]);
    silego.writeI2C(ASM_STATE_1, rows_byte[1]);
    silego.writeI2C(ASM_STATE_2, rows_byte[2]);
    silego.writeI2C(ASM_STATE_3, rows_byte[3]);
    silego.writeI2C(ASM_STATE_4, rows_byte[4]);
    silego.writeI2C(ASM_STATE_5, rows_byte[5]);
    silego.writeI2C(ASM_STATE_6, rows_byte[6]);
    silego.writeI2C(ASM_STATE_7, rows_byte[7]);
    s = 0;  // reset flag
    i = 0;  // reset counter
  }
}
```
We have created an LED matrix driver that can be controlled via the I2C protocol, and built an Android app with a virtual LED matrix to draw characters and display them. The Silego GreenPAK Configurable Mixed-signal IC (CMIC) has a variety of digital and analog components that facilitate the creation of moderately complex designs, and its I2C interface allows control of the GreenPAK registers from any I2C master.
You can lazily write a Python script that calls the system ping command-line tool, as follows:
```python
import subprocess
import shlex

command_line = "ping -c 1 www.google.com"
args = shlex.split(command_line)
try:
    subprocess.check_call(args, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)
    print("Google web server is up!")
except subprocess.CalledProcessError:
    print("Failed to get ping.")
```
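Before relying on that approach, one could check whether a `ping` executable is even available on the PATH. This check is my own sketch, not from the book:

```python
import shutil

def have_ping():
    """Return True if a `ping` executable can be found on PATH."""
    return shutil.which("ping") is not None
```

If `have_ping()` returns False, the subprocess call above can only fail, which is exactly the situation the pure-Python version addresses.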
However, in many circumstances, the system's ping executable may not be available or may be inaccessible. In this case, we need a pure Python script to do that ping. Note that this script needs to be run as a superuser or administrator.
Listing 3.2 shows the ICMP ping, as follows:
```python
#!/usr/bin/env python
# Python Network Programming Cookbook -- ...
```
Introducing AgFx
One of the things we’ve spent a fair amount of time on is working with various application writers, helping them build great Windows Phone 7 applications. Many of the top applications that you’ll find on Windows Phone 7 devices today spent some time in a debugger on my desktop, or Jeff’s, or another one of the folks around here.
Through this process, we saw a lot of common trouble spots for developers writing Windows Phone 7 applications. As I started to think more about the problem,

I thought the same thing I always think: “Hmmm, how can I build a framework that will make these things easy for developers, so they can worry about other stuff?”
And so it was born, and I’m currently calling it AgFx. Fortunately I’m a lot better at building frameworks than I am at naming things, so I’ll leave it at that. But if you’re wondering what the “Ag” is about, “Ag” is the symbol for Silver, and this framework happens to work on desktop Silverlight as well, so you can use it to build the guts of applications that are shareable across phone and desktop.
AgFx is available via CodePlex here.
What does it do?
AgFx provides a set of base class helpers and a data management engine that allow you to write your View Models (or just models, if you’re so inclined) in a very simple and consistent way. It contains some other helpful stuff too, but the data management engine is the heart of it, which I’ll introduce in this post.
Most applications do some variant of the following sequence:
- Fetch data off of Internet
- Process said data into some data structures
- Bind said structures to some UI
- Cache structures on disk
- Next time a request comes in (say after tombstoning), check the cache
- If cache is valid, goto 2
- If cache is not valid, goto 1
- Repeat
And while the above sounds simple, it turns out it’s not. In fact, it’s a lot of work to get it right AND even when it works you have lots of opportunities to cause performance problems or other things that you’ll have to go out and figure out later.
But fortunately the patterns are consistent enough that we can build an infrastructure to automate most of the above. If you are thinking of writing an application that’s similar to the above pattern, this will make it MUCH easier.
When thinking about most data-connected applications, it turns out there are only two key pieces of information required from the developer. Consider a stock quote from a web service. In order to display that stock quote in an application, we need to know:
- How to go fetch the data. In this hypothetical case, it’s the URL to some service:
- How to process the data into an object that is consumable by my application. Typically it means parsing JSON or XML that comes back from the request.
Everything else can be managed by the system, off of the UI thread in most cases:
- Checking for cached data and/or requesting new data
- Processing/Parsing the data
- Creating objects from the data
- Caching the data back to disk
- Handling data updates (these must be on the UI thread)
And it turns out that this is exactly what AgFx does. It manages all of the above so you don’t have to.
Eh…code please.
Okay, let’s use a concrete example. The app that I’ll be including with the bits here is a simple app that goes against the NOAA XML web services for weather reports. Basically they take a US zip code and return a weather forecast.
Oh, the joys of winter in Seattle….
Anyway, as mentioned above, we need two pieces of information from the developer: how to go find the data, and how to deserlialize it. AgFx handles the rest. The vast majority of your code when writing with AgFx is building these view model objects and deserializing data into them.
Let’s start with some examples. First, AgFx view models usually look something like this:
```csharp
[CachePolicy(CachePolicy.ValidCacheOnly, 60 * 15)]
public class WeatherForecastVm : ModelItemBase<ZipCodeLoadContext>
{
    public WeatherForecastVm()
    {
    }

    public WeatherForecastVm(string zipcode)
        : base(new ZipCodeLoadContext(zipcode))
    {
    }

    // ...properties, methods
}
```
A few things to note there. First is the CachePolicyAttribute at the top. This tells the system how to handle the caching for this object type. “60 * 15” is 15 minutes – meaning these values are valid for 15 minutes. CachePolicy.ValidCacheOnly means that the system should only return cached values that are within that cache time window, otherwise, it should go fetch an updated version.
Now, you’ll notice that the above is a generic type, deriving from ModelItemBase<T>, and is referencing something called a “ZipCodeLoadContext”. Any object that AgFx is handling needs to have a LoadContext which is essentially the identifier for an instance of an item, as well as the place where you set extra state needed for loading. It will become clear why it’s called a LoadContext shortly. But the identifier should be unique for that type of item. On many services it might be the user id (for users) or the item id (for some data item). In this case the identifier is a zip code because zip codes map 1:1 with weather forecasts. Given a zip code, we’ll always get the right forecast data (we all know we might not get the right forecast!).
In this case, the ZipCodeLoadContext looks like the following:
```csharp
public class ZipCodeLoadContext : LoadContext
{
    public string ZipCode
    {
        get { return (string)Identity; }
    }

    public ZipCodeLoadContext(string zipcode)
        : base(zipcode)
    {
    }
}
```
Given that this is a simple case, you’re just wrapping the zipcode string, really. The framework allows you to shortcut that if that’s the case, but I am including it here for completeness and so we get a nice strongly typed “ZipCode” property.
Now to the important part. The final piece is the DataLoader which is what holds this all together. Here’s the DataLoader for the WeatherForecastVm, as a nested class inside the WeatherForecastVm class itself:
```csharp
public class WeatherForecastVm : ModelItemBase<ZipCodeLoadContext>
{
    // ... VM body removed

    /// <summary>
    /// Our loader, which knows how to do two things:
    /// 1. Build the URI for requesting data for a given zipcode
    /// 2. Parse the return value from that URI
    /// </summary>
    public class WeatherForecastVmLoader : IDataLoader<ZipCodeLoadContext>
    {
        const string NWS_Rest_Format = "{0}&format=12+hourly&startDate={1:yyyy-MM-dd}";

        /// <summary>
        /// Build a LoadRequest that knows how to fetch new data for our object.
        /// In this case it's just a URL, so we construct the URL and pass it to the
        /// default WebLoadRequest object, along with our LoadContext.
        /// </summary>
        public LoadRequest GetLoadRequest(ZipCodeLoadContext lc, Type objectType)
        {
            string uri = String.Format(NWS_Rest_Format, lc.ZipCode, DateTime.Now.Date);
            return new WebLoadRequest(lc, new Uri(uri));
        }

        /// <summary>
        /// Once our LoadRequest has executed, we'll be handed back a stream
        /// containing the response from the above URI, which we'll parse.
        ///
        /// Note this will execute in two cases:
        /// 1. When we fetch fresh data from the Internet
        /// 2. When we are deserializing cached data off the disk.
        /// The operation is equivalent at this point.
        /// </summary>
        public object Deserialize(ZipCodeLoadContext lc, Type objectType, System.IO.Stream stream)
        {
            // Parse the XML out of the stream.
            var locs = NWSParser.ParseWeatherXml(new string[] { lc.ZipCode }, stream);

            // Make sure we got the right data.
            var loc = locs.FirstOrDefault();
            if (loc == null)
            {
                throw new FormatException("Didn't get any weather data.");
            }

            // Create our VM. Note this is the same type as our containing object.
            var vm = new WeatherForecastVm(lc.ZipCode);

            // Push in the weather periods.
            foreach (var wp in loc.WeatherPeriods)
            {
                vm.WeatherPeriods.Add(wp);
            }

            return vm;
        }
    }
}
```
Again, the loader does two things, loads the new value (GetLoadRequest) and then parses it (Deserialize). Note we never have to write any serialization code for caching, just deserialize and AgFx does the rest.
Using your data
All of your view models will follow the same pattern above – you’ll define the type, define it’s LoadContext (if necessary), then define it’s DataLoader. At that point, you’re pretty much done.
The way that all of these objects (viewmodels or strict models) are accessed is the same in AgFx, and that’s with the DataManager.
So, what’s the code to use the data in this application? It’s just this:
```csharp
private void btnAddZipCode_Click(object sender, RoutedEventArgs e)
{
    // Load up a new ViewModel based on the zip.
    // This will either fetch new data from the Internet, or load the cached
    // data off disk as appropriate.
    this.DataContext = DataManager.Current.Load<WeatherForecastVm>(txtZipCode.Text);
}
```
That’s really it. The rest of the code is databindings in the XAML, and some other code to save the zip code so it automatically loads again the next time.
What we are doing here is asking the DataManager to load an object of type WeatherForecastVm, with the given zip code as the identifier. The framework takes care of the rest.
So, again, what are the steps that happen for me automatically upon calling that one line of code?
- Look in the cache for data that a WeatherForecastVm can load, with the unique identifier of the specified zip code.
- If the data is there, check it’s “expiration date”, if it’s not expired, return the data.
- If it is expired, go get new data from the web, then save it to disk
- Deserialize data from (2) or (3)
- Create a WeatherForecastVm object and populate it from the deserialized data
- Return the WeatherForecastVm instance so it can be used for databinding.
Almost all of this happens off of the UI thread (basically everything up to step 6).
Furthermore, the DataManager tracks instances, so the instance that’s returned from the Load call will *always* be the same (for the given identity value) for your entire application. This means that as long as you use this Load call, and databind to that object, any future refreshes of that data will automatically be reflected in your UI, regardless of where it is in your application. You don’t need to worry about any of this, it just works. More on this below. AgFx does the caching and fast lookup for you, so don’t hold references to these values if you don’t absolutely have to.
Using DataManager.Load<> allows the framework to control when and where items are loaded. This also allows your app to do work only as it's needed to populate your UI. And once you break up the work into discrete view model objects, you can also control their caching policies independently. If you look at the WeatherForecastVm in the sample, you'll see the following property:
```csharp
/// <summary>
/// ZipCodeInfo is the name of the city for the zipcode. This is a separate
/// service lookup, so we treat it separately.
/// </summary>
public ZipCodeVm ZipCodeInfo
{
    get { return DataManager.Current.Load<ZipCodeVm>(LoadContext.ZipCode); }
}
```
Note that this property results in a call to another VM class – “ZipCodeVm”. If you look at the image up above, you’ll see that the name “Redmond” is shown under the 98052 zip code. This information didn’t come down with the weather data, I had to fetch it from another service. But since it’s based off the same identifier (“98052”) as the weather forecast, we just pass that along.
When I want my UI to show the city, as above, I’ll databind a TextBlock like so:
```xml
<TextBlock Text="{Binding ZipCodeInfo.City}" FontWeight="Bold"/>
```
To which the databinding engine does the following steps:
- On the current DataContext, look for a property called “ZipCodeInfo”
- Fetch that value, and then (if not null) look for a property called “City”
- Fetch that value and set it as the Text property
So, back to our WeatherForecastVm object, when the databinding engine asks for the ZipCodeInfo property value,that request will be kicked off. But here’s the trick. It’s kicked off asynchronously. Execution won’t be held up while that value is fetched. So what does it return?
I mentioned above that the instance returned from the Load call will always be the same for a given identity*. So here's what happens:
- The Load<> call is made
- A default instance of ZipCodeVm is created and returned. The UI will databind against this instance
- AgFx does the off-thread work of getting the value from the web service, deserializing it into a ZipCodeVm object, and caching it on disk.
- AgFx then copies the updated values into the properties of the instance that was created. Since it is this instance that is being databound against, the UI will automatically update with these new values.
- Any future Load calls, if new data is fetched, will also update this instance.
The net result is that UI anywhere in the application that is bound to a value retrieved via the Load<> method will always be kept up to date. No code wiring needed.
Finally, another upside to breaking up objects like this is that you can specify a different cache policy for each. If you remember, the WeatherForecastVm policy was ValidOnly, for 15 minutes.
Zip codes don’t change much so we’re doing this instead:
// this basically never changes, but we'll say it's valid for a year.
[CachePolicy(CachePolicy.CacheThenRefresh, 3600 * 24 * 365)]
public class ZipCodeVm : ModelItemBase<ZipCodeLoadContext> {...}
CacheThenRefresh means that if a request is made and there is a cached value that has expired, go ahead and return that cached value and automatically kick off a refresh. Just as in the case above, the refresh will update the instance that was created out of the cache and your UI will update when the refresh happens. A common case for this is an app that shows a Twitter feed. When you launch Twitter, you may want to see the feed as it was last time you opened the app, then have the new posts show as they come in off the network.
* this isn’t strictly true – if no one is holding the value, it can be garbage collected so it’s not using memory, and then a new instance will be created on demand at the next request. But since it’s not being held in memory, your code will never know that.
Take a look
That's a quick introduction to AgFx, and there is a LOT more. We've been writing apps internally on top of it for a while now, we're having great success, and there are a bunch more features that allow you to handle things your app might need to do quickly and efficiently. So for now, grab the code, run the app, and get a feel for the framework. Ask questions in the comments and I'll start digging into more advanced features in posts I'll do shortly.
Looks very interesting, looking forward to trying it out. I do have one question though. Is there some way to retrieve whether loading of the data is in progress? Specifically I'd like to show a progressbar while the request is executing. I could just bind the visibility of this progressbar to the count of the model with a converter (meaning the bar would show until there are no items returned), however in case of no results the progressbar will be there forever.
Another question that springs into mind is how does AgFx notify of errors (e.g. network errors)?
Thanks,
Gergely
Hi Gergely –
Yes, it does all of this and more, I was afraid of packing too much into the intro post. First, take a look at the Weather sample in the zip file. It has some examples of this stuff.
But, specifically:
1) Updating – yes you can get update notifications at a global level or at a per object level. Globally, there is a property called DataManager.IsLoading that flips to true whenever an object "live load" is happening. Cache loads don't flip this bit. The DataManager.Current instance implements INotifyPropertyChanged so you can databind to this property. Likewise, ModelItemBase also has an IsUpdating property, as well as a property called LastUpdated, which is a DateTime that tells you when the last time the data from an object was loaded, again "live load" only. If you load cached data, you'll still get the last live load time. In both of these cases, you can databind this property to a UI element's visibility property using the included VisibilityConverter:
<TextBlock Text="Updating…" Visibility="{Binding IsUpdating, Converter={StaticResource VisibilityConverter1}}" />
To see if network loads are happening across the app, bind to DataManager.Current.IsLoading rather than ModelItemBase.IsUpdating.
See NWSWeatherSample.MainPage.xaml for this usage, as well as the LastUpdated.
2) Load/Error notification – yes, DataManager.Current.Load<> has an overload of the following signature:
Load<T>(LoadContext loadContext, Action<T> success, Action<Exception> error)
that allows you to get notification when the load completes, or if an error of any kind happens in the process.
For example:
DataManager.Current.Load<MyViewModel>("1234", null, (ex) => MessageBox.Show("Error: " + ex.ToString()));
Finally, DataManager also has an UnhandledError event that gives you a chance to handle any errors not handled at the Load<> call site.
Hope that helps. Let me know if you have more questions.
Thanks,
Shawn
I am wanting to write an app, in version 1 to use Isolated Storage only. Then in version 2, to use a web service to share the data across multiple devices. this is a simple "make a list" kind of app.
What would the IDataLoader implementation look like? For example, the first time the app loads, I start with a ViewModel instance which has an empty list of items. The main page is a list of items, and there is a page for editing an item.
Thanks, Shawn for the detailed answer. This should have me up and running!
On a note – I'm surprised that this library hasn't gotten larger publicity so far considering that it seems to do all the essential things any data driven app needs. I'm sure that will change soon 🙂
Hi Shawn,
I recently downloaded your project from MSDN. Whenever I try to open your project in VS2010 Ultimate I get the error "Adding a reference to a Silverlight project might not work properly". Even if I ignore this error and try to run your project, I am not able to even start debugging.
@Gergely – yes, thanks, let's see what happens!
@Jay – those are expected, and I made a clearer note above. It shouldn't affect your debugging, but make sure you right-click on the NWS Sample and choose "Set as startup project". The other projects are class libraries and can't be directly started.
@Burton – If you're writing a "Make a list" app, I think AgFx already does what you need. I assume the behavior you want is the following:
1) User starts app
2) App checks IsoStore for the list, displays that
3) App checks Webservice for an updated list, displays that if necessary
4) User changes items
5) App pushes changes to web service
Is this correct?
If so, your data loader would look pretty much like the above:
new WebLoadRequest(new Uri("));
And then you'd deserialize the list as normal.
The hitch is this. There is no direct way to write items to the cache, they need to come through a loader. So typically in your case what would happen is that you'd manually push the data to your service:
Then you'd call refresh on your loader to turn around and get the updated value again:
DataManager.Current.Refresh<ListVm>("4321");
This would then cache the value locally and update your UI. It does seem like extra work since you might have the value locally. HOWEVER, in your scenario it's critical, I think, that the app reflects the state of the service so you want to make sure it actually stuck. Otherwise, you'll have users who open it on another device and are surprised their new item isn't there.
You could do some more sophisticated queuing though to allow for adding items when not connected, etc.
Hope that helps!
For all of you, I just wrote a much more detailed post on the broader functionality. Take a look!
0
I am trying to refer to the class a nested class is 'in'... The 'this' keyword refers to the nested class, so I need something like super.this (except not at all like that) :/
I think the code below describes what I am trying to do:
public class Nest {
    private String var = "foo";
    private MyOtherClass myOtherClass;

    private class Nested {
        public void doSomething() {
            // Here I want to access the class this is nested in.
            // Now this should be really easy (I can access the
            // variable var), but 'this' is for the Nested instance.
            myOtherClass = new MyOtherClass(######);
            // what do I need to put here: ###### to refer to the Nest?
        }
    }
}

public class MyOtherClass {
    private Nest parentNest;

    public MyOtherClass(Nest nest) {
        parentNest = nest;
    }
}
Can anybody help? ^_^
Edited by hanvyj: n/a | https://www.daniweb.com/programming/software-development/threads/337768/this-in-nested-classes | CC-MAIN-2017-34 | refinedweb | 128 | 71.48 |
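Java has dedicated syntax for exactly this, the qualified this expression: inside a non-static inner class, EnclosingClassName.this refers to the enclosing instance, so the ###### above would be Nest.this. A minimal self-contained sketch (simplified names so it runs standalone, not the poster's exact classes):

```java
public class Main {
    private String var = "foo";

    private class Nested {
        String readFromEnclosing() {
            // Plain 'this' is the Nested instance; the qualified form
            // 'Main.this' is the enclosing Main instance that created it.
            return Main.this.var;
        }
    }

    public static void main(String[] args) {
        Main outer = new Main();
        Nested nested = outer.new Nested();  // nested is tied to 'outer'
        System.out.println(nested.readFromEnclosing());  // prints "foo"
    }
}
```

The same form works at any nesting depth: each enclosing class's instance is reachable by qualifying this with that class's name.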
Building Sudoku in Vue.js - Part 1
While sitting at my local airport yesterday, I decided to take advantage of a ninety minute delay by working on another Vue.js game - an implementation of Sudoku. No, not that guy...
But the game where you must fill in a puzzle grid. The grid consists of 9 rows of 9 cells. Each row must contain the numbers 1-9 exactly once, each column as well, and so must each 3x3 "block" of the grid. Here's how a typical puzzle may look...
And here's the puzzle solved.
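Those three constraints can be checked mechanically. A rough sketch in plain JavaScript (my own helper for illustration, not part of the game's code):

```javascript
// Check one unit (a row, column, or 3x3 block) of a solved grid:
// it is valid if it contains the numbers 1-9 exactly once.
function isValidUnit(cells) {
  return [...cells].sort((a, b) => a - b).join('') === '123456789';
}

// Check a fully solved 9x9 grid (array of 9 arrays of 9 numbers).
function isSolved(grid) {
  for (let i = 0; i < 9; i++) {
    const row = grid[i];
    const col = grid.map(r => r[i]);
    if (!isValidUnit(row) || !isValidUnit(col)) return false;
  }
  // The nine 3x3 blocks.
  for (let bx = 0; bx < 9; bx += 3) {
    for (let by = 0; by < 9; by += 3) {
      const block = [];
      for (let x = 0; x < 3; x++)
        for (let y = 0; y < 3; y++) block.push(grid[bx + x][by + y]);
      if (!isValidUnit(block)) return false;
    }
  }
  return true;
}
```

A real game would also want to check partially filled grids (ignoring empty cells), but the idea is the same.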
I am - shall we say - slightly addicted to this game. It's a great way to pass some time and I enjoy the feeling of completing the puzzle. I'll typically play one to two puzzles per day and I'm slowly getting better at it. I thought it would be fun to take a stab at building my own Sudoku puzzle game in Vue.
To be clear, I didn't want to write the code to build a puzzle or solve it. That's some high level algorithm stuff that I simply suck at. (Ask me sometime about how I failed these tests trying to get a developer advocate job at Google.) But I figured if I googled for "sudoku javascript" I'd find about a million results and I wasn't disappointed. I came across a great library at. It generates puzzles, solutions, even possible candidates for empty cells; it had everything. It was a bit old, but I figured that just meant it had some experience and why hold that against it?
I've worked on this off and on over the past two days and I've gotten it about 70% done. I figured it was a good place to take a break, share what I've done so far, and then continue on to wrap the game later in the week. (And the good news is that when I couldn't sleep last night, I thought about another game I'm going to build in Vue later!)
So, let's take a look! First, what do I have working so far?
- I have the puzzle being generated and displayed.
- You can click an empty square to select it.
- You can type a number and it fills in.
What's left?
- See if you solved the puzzle
- Let you start a new game and select the difficulty
Honestly there isn't a lot left, but I really felt like I hit a milestone tonight, and I'm tired, so I figured it was a good place to stop and blog.
I'll start off with the App.vue page. Right now it's pretty minimal.
<template>
  <div id="app">
    <h1>Sudoku</h1>
    <Grid />
  </div>
</template>

<script>
import Grid from '@/components/Grid';

export default {
  name: 'app',
  components: {
    Grid
  },
  created() {
    this.$store.commit('initGrid');
  }
}
</script>

<style>
body {
  font-family: Arial, Helvetica, sans-serif;
}
</style>
Basically it just calls the Grid component and then asks the grid to initialize itself. I'm using Vuex in this demo and most of the logic is there. Let's look at the Grid component.
<template>
  <div>
    <table>
      <tbody>
        <tr v-
          <td v-
              :class="{ locked: cell.locked, selected: cell.selected }"
              @click="setSelected(cell, idx, idy)">
          {{ grid[idx][idy].value }}</td>
        </tr>
      </tbody>
    </table>
  </div>
</template>

<script>
import { mapState } from 'vuex';

export default {
  name: 'Grid',
  computed: mapState([
    'grid'
  ]),
  methods: {
    pickNumber(e) {
      let typed = parseInt(String.fromCharCode(e.keyCode), 10);
      // if it was NaN, split out
      if (!typed) return;
      console.log(typed);
      this.$store.commit('setNumber', typed);
    },
    setSelected(cell, x, y) {
      this.$store.commit('setSelected', { x, y });
    }
  },
  mounted() {
    window.addEventListener('keypress', this.pickNumber);
  },
  destroyed() {
    window.removeEventListener('keypress', this.pickNumber);
  }
}
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
table {
  border-collapse: collapse;
  border: 2px solid;
}

td {
  border: 1px solid;
  text-align: center;
  height: 40px;
  width: 40px;
}

table tbody tr td:nth-child(3), table tbody tr td:nth-child(6) {
  border-right: 2px solid;
}

table tbody tr:nth-child(3), table tbody tr:nth-child(6) {
  border-bottom: 2px solid;
}

td.locked {
  cursor: not-allowed;
}

td {
  cursor: pointer;
}

td.selected {
  background-color: bisque;
}
</style>
Let me start off by saying that I am DAMN PROUD OF MY CSS! I honestly didn't think I'd get the design right.
I am *incredibly* proud I was able to style this Sudoku table with CSS. It was just a few border commands, but I honestly thought I couldn't do it. pic.twitter.com/l8rzF2049E— Raymond Camden 🥑 (@raymondcamden) December 15, 2019
Outside of that my display just renders the table. I've got some basic keyboard support in (see my article on that topic) as well as the ability to select a cell. You have to pick a cell before you can type in a number. But that's it. The real meat of the application is in my Vuex store.
import Vue from 'vue'
import Vuex from 'vuex'
import sudokuModule from '@/api/sudoku.js';

Vue.use(Vuex);

/*
difficulty: easy, medium, hard, very-hard, insane, inhuman
*/
export default new Vuex.Store({
  state: {
    grid: null,
    origString: null,
    difficulty: 'hard',
    selected: null
  },
  mutations: {
    initGrid(state) {
      state.origString = sudokuModule.sudoku.generate(state.difficulty);
      let candidates = sudokuModule.sudoku.get_candidates(state.origString)
      state.grid = sudokuModule.sudoku.board_string_to_grid(state.origString);
      let solution = sudokuModule.sudoku.solve(state.origString);
      let solvedGrid = sudokuModule.sudoku.board_string_to_grid(solution);

      // change . to "", also store an ob instead of just numbers
      for(let i=0;i<state.grid.length;i++) {
        for(let x=0;x<state.grid[i].length;x++) {
          let newVal = {
            value: parseInt(state.grid[i][x],10),
            locked: true,
            candidates: candidates[i][x],
            selected: false,
            solution: parseInt(solvedGrid[i][x],10)
          };
          if(state.grid[i][x] === '.') {
            newVal.value = '';
            newVal.locked = false;
          }
          state.grid[i][x] = newVal;
        }
      }
    },
    setNumber(state, x) {
      if(!state.selected) return;
      let row = state.grid[state.selected.x];
      row[state.selected.y].value = x;
      Vue.set(state.grid, state.selected.x, row);
    },
    setSelected(state, pos) {
      if(state.grid[pos.x][pos.y].locked) return;
      for(let i=0;i<state.grid.length;i++) {
        let row = state.grid[i];
        for(let x=0;x<row.length;x++) {
          if((i !== pos.x || x !== pos.y) && row[x].selected) {
            row[x].selected = false;
          }
          if(i === pos.x && x === pos.y) {
            row[x].selected = true;
            state.selected = pos;
          }
        }
        Vue.set(state.grid, i, row);
      }
    }
  }
})
This is somewhat large, so let me point out some interesting bits. First off, this line:
import sudokuModule from '@/api/sudoku.js';
I honestly guessed at this. The Sudoku code I used defines a sudoku object under window and is typically loaded via a script tag. I was going to add the script tag to my index.html but decided I'd try that. It worked, but I didn't know how to actually get to the methods. After some digging I found I could do it via sudokuModule.sudoku.something(). Again, I was just guessing here and I really don't know if this is "best practice", but it worked.
initGrid does a lot of the setup work. I generate the puzzle, which is a string, and then convert it to a 2D array. The library has this baked in, but I made my own grid and store additional information - candidates, solution, and a locked value to represent numbers that were set when the game started (you can't change those).
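If you're curious what board_string_to_grid amounts to, it's essentially a reshape: the 81-character board string, read left to right, becomes nine rows of nine characters. An approximation for illustration (not the library's actual source):

```javascript
// Convert an 81-character board string ('.' = empty) into a 9x9 array.
function boardStringToGrid(s) {
  if (s.length !== 81) throw new Error('expected 81 characters');
  const grid = [];
  for (let row = 0; row < 9; row++) {
    // Each consecutive run of 9 characters is one row of the board.
    grid.push(s.slice(row * 9, row * 9 + 9).split(''));
  }
  return grid;
}
```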
setNumber simply sets a cell value, it doesn't validate if it's ok. I'm probably going to change that. When I play I like automatic alerts when I've picked the wrong value. That's probably cheating a bit, but I only guess when I'm frustrated with a hard puzzle and I'm fine with that.
Finally, setSelected is how I select a cell. I also use this to deselect anything picked previously. Make note of Vue.set. This is required when working with nested arrays/objects and it's probably something everyone using Vue runs into eventually. Check the docs on it for more details: Change Detection Caveats
That's it for the first part. You can see the code as it stands currently at. If you want to see it in your browser, visit.
Header photo by James Sutton on Unsplash | https://www.raymondcamden.com/2019/12/16/building-sudoku-in-vuejs-part-1 | CC-MAIN-2020-16 | refinedweb | 1,369 | 60.21 |
LSEEK(2) Linux Programmer's Manual LSEEK(2)
NAME
       lseek - reposition read/write file offset
SYNOPSIS
       #include <sys/types.h>
       #include <unistd.h>

       off_t lseek(int fd, off_t offset, int whence);
DESCRIPTION
       lseek() repositions the file offset of the open file description
       associated with the file descriptor fd to the argument offset
       according to the directive whence as follows:

       SEEK_SET
              The file offset is set to offset bytes.

       SEEK_CUR
              The file offset is set to its current location plus offset
              bytes.

       SEEK_END
              The file offset is set to the size of the file plus offset
              bytes.

       Since Linux 3.1, lseek() also accepts the SEEK_DATA and SEEK_HOLE
       directives for locating data and holes in sparse files.  These
       operations are supported by the following filesystems:

       *  Btrfs (since Linux 3.1)
       *  OCFS (since Linux 3.2)
       *  XFS (since Linux 3.5)
       *  ext4 (since Linux 3.8)
       *  tmpfs(5) (since Linux 3.8)
       *  NFS (since Linux 3.18)
       *  FUSE (since Linux 4.5)
       *  GFS2 (since Linux 4.15)
RETURN VALUE
       Upon successful completion, lseek() returns the resulting offset
       location as measured in bytes from the beginning of the file.  On
       error, the value (off_t) -1 is returned and errno is set to
       indicate the error.
SEE ALSO
       dup(2), fallocate(2), fork(2), open(2), fseek(3), lseek64(3),
       posix_fallocate(3)
COLOPHON
       This page is part of release 5.08 of the Linux man-pages project.
       A description of the project, information about reporting bugs,
       and the latest version of this page, can be found at.

Linux                          2020-08-13                         LSEEK(2)
Pages that refer to this page: copy_file_range(2), creat(2), dup2(2), dup(2), dup3(2), llseek(2), _llseek(2), open(2), openat(2), pread(2), pread64(2), preadv2(2), preadv(2), pwrite(2), pwrite64(2), pwritev2(2), pwritev(2), read(2), readahead(2), readv(2), syscalls(2), write(2), writev(2), fgetpos(3), fseek(3), fsetpos(3), ftell(3), getdirentries(3), lseek64(3), posix_fallocate(3), rewind(3), seekdir(3), stderr(3), stdin(3), stdout(3), cpuid(4), proc(5), procfs(5), pipe(7), signal-safety(7), spufs(7), user_namespaces(7), xfs_io(8) | https://man7.org/linux/man-pages/man2/lseek.2.html | CC-MAIN-2020-40 | refinedweb | 296 | 64.61 |
An essential tool, especially when developing larger applications or distributed firmware, is logging. This article presents an open source logging framework I'm using. It is small and easy to use and can log to a console, to a file on the host, or even to a file on an embedded file system such as FatFS.
Outline
While it is possible to monitor multiple targets with the debugger, it gets very difficult if the system is distributed or runs continuously for days and weeks, as in this system. If something goes wrong, logs will be invaluable. While logging frameworks for host development are common, I have not found much suitable for embedded systems. What I have found close to what I wanted was log.c by rxi or uLog () by Robert Poor. But I needed something more integrated and ready to use for my applications:
- Different log levels (trace, warning, information, error, fatal)
- Ability to set a log level (only log above a level)
- Color coding based on log level
- Open argument list (printf() style)
- Different ways to log: UART/Terminal, file on the host, embedded file system
- Command line/Shell interface
- Small footprint, depending on the features used
- Configurable, up to the point to completely disable it
- Reentrant and integrated with FreeRTOS
- Automatic and configurable date/timestamping
- Easy to integrate, written in C
Source Files
The McuLog consists of only three files (see links to GitHub at the end of this article):
- McuLogconfig.h: configuration header file
- McuLog.h: interface file
- McuLog.c: implementation file
The configuration header file is used to configure the log format (with or without date, colors, if file system or RTT file logging shall be supported, etc). At the time of this article the following configuration items are available:
/*
 * Copyright (c) 2020, Erich Styger
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */

#ifndef MCULOGCONFIG_H_
#define MCULOGCONFIG_H_

#include "McuLib.h"

#ifndef MCULOG_CONFIG_IS_ENABLED
  #define MCULOG_CONFIG_IS_ENABLED  (1)
    /* 1: Logging is enabled; 0: Logging is disabled, not adding anything to the application code */
#endif

#ifndef MCULOG_CONFIG_USE_MUTEX
  #define MCULOG_CONFIG_USE_MUTEX   (1 && McuLib_CONFIG_SDK_USE_FREERTOS)
    /* 1: use an RTOS mutex for the logging module; 0: do not use a mutex */
#endif

#ifndef MCULOG_CONFIG_USE_COLOR
  #define MCULOG_CONFIG_USE_COLOR   (1)
    /* 1: use ANSI color for terminal; 0: do not use color */
#endif

#ifndef MCULOG_CONFIG_USE_FILE
  #define MCULOG_CONFIG_USE_FILE    (0)
    /* 1: use file for logging; 0: do not use file */
#endif

#ifndef MCULOG_CONFIG_LOG_TIMESTAMP_DATE
  #define MCULOG_CONFIG_LOG_TIMESTAMP_DATE  (1)
    /* 1: add date to timestamp; 0: do not add date to timestamp */
#endif

#ifndef MCULOG_CONFIG_USE_RTT_LOGGER
  #define MCULOG_CONFIG_USE_RTT_LOGGER      (1)
    /* 1: use SEGGER RTT Logger (Channel 1); 0: do not use SEGGER RTT Logger */
#endif

#ifndef MCULOG_CONFIG_PARSE_COMMAND_ENABLED
  #define MCULOG_CONFIG_PARSE_COMMAND_ENABLED  (1 && MCULOG_CONFIG_IS_ENABLED)
    /* 1: shell command line parser enabled; 0: not enabled */
#endif

#endif /* MCULOGCONFIG_H_ */
Settings can be turned on/off at runtime too, e.g.
void McuLog_set_color(bool enable);
can be used to turn on/off color coding.
Usage
To use the McuLog module, include its interface:
#include "McuLog.h"
Before using it, call its initialization routine:
McuLog_Init();
Next, register one or more logger for the output, for example:
McuLog_open_logfile("0:\log.txt"); /* logging to a file */
McuLog_set_console(&UART_stdio);   /* logging to a console */
McuLog_set_rtt_logger(true);       /* log to a file on the host using RTT Data Logging */
After that, you can log messages with one of the available logging functions, e.g.
McuLog_trace("This is a trace message");
McuLog_debug("Function called returned error code %d", returnValue);
McuLog_info("Application started");
McuLog_warn("Low on memory, available bytes %d", HeapNofBytes);
McuLog_error("Failed creating task '%s', error code %d", taskName, res);
McuLog_fatal("Watchdog timed out, waiting for reset...");
Command Line Shell
McuLog includes an optional command line shell which is used to check the status and to configure it:
Logging to a Console
If logging to a console (or UART), it will print the messages on that connection. If the console is able to display colors (e.g. PuTTY or Segger RTT Console/Viewer), the messages are shown with different colors:
Logging to a File on the Host
Logging to a file on the host is implemented using RTT.
Start Data Logging on the host:
Then select the file where to log the data. Data is logged in a text file:
Logging to a file can be stopped any time.
Logging to a file on the target
The logger includes logging to an embedded file system using FatFS. It is possible to open and manage the file on the application side.
Or simply create or open the file with the McuLog:
McuLog_open_logfile("0:\myLogFile.txt");
The file is ‘synced’ for each log entry. To stop logging call
McuLog_close_logfile();
The messages will be present in the file on the embedded file system:
Summary
With the McuLog I have a small and versatile logging module which can log to console, host file system or to an embedded file system with small overhead. The module requires about 6 KByte flash (mostly because of printf() style), this without any code optimization. It uses simple callbacks and hooks, so if you need logging over I2C, USB, SPI, … or any other communication channel, this can be easily added or extended.
I hope you find this McuLog module as useful as I do.
Happy Logging 🙂
Links
- McuLog Source Files on GitHub:
- Implementation in McuOnEclipse McuLib:
- Example project on GitHub using McuLog and logging to FatFS (SD Card):, see "FatFS, MinIni, Shell and FreeRTOS for the NXP K22FN512"
- uLog by Robert Poor:
- log.c by rxi:
- Segger RTT: Using Segger Real Time Terminal (RTT) with Eclipse
What would be the requirements to run McuLog? For example, the CPU must be capable of running FreeRTOS and have 6KB of program space. What about RAM? SEGGER RTT Logger?
You don’t need FreeRTOS, you can run it bare metal too.
As for RAM, see the struct ‘L’ around line 50 in.
With Console and RTT it needs 24 bytes only (for bare metal without the need for a lock/mutex it would be even lower).
So really not much at all (not counting the normal RAM/code size used for the UART connection, which I assume is used anyway; otherwise you have to add this too).
Hi Erich
Some of the code snippets seem to have issues with extraneous & etc.?
Cheers
Tommy
Hi Tommy,
thanks for the note: WordPress sometimes converts code in snippets into HTML code :-(. I have fixed it now, hopefully it stays like this.
Erich
Another C/C++ log library, EasyLogger (), may also be a good choice for MCU. :>
Indeed! Thanks for sharing!
I like this as a small simple logging module- thanks Erich.
I’ve extended the ‘trace’ logic to be enableable, via a bitmask- mainly because I have only a single UART to get any information over. This way I can group messages by logical function:
Trace_feature(TraceMaskBitUART, “A UART trace\n”);
This is only displayed when the TraceMaskBitUART bit is set, which I’d do with
traceon 400
for instance (also, ‘traceoff %x’)
I’ve found this really useful to be able to debug modules by logical function without excessive ‘noise’. In fact, it has been my primary debugging method for several years. Hopefully it’ll help someone out.
Cheers
Hi Rhys,
I really like that idea of having such a bit mask, and that you can turn on/off logging for a group of things! I think I have to add something like this too :-).
Glad if you can make some use of it. Happy to send you my source as a reference if you like, I’d just rather not post it verbatim publicly. Can you contact me offline? (and not post this comment 😛 )
Hi Rhys,
thank you for that offer, but I think it is not necessary, I guess I will find my way 😛.
Erich
Hi Erich,
I am looking for a simple embedded log which should:
1. Store the log in SRAM or flash memory; the log has only 100 entries of 32 or 64 bytes each, organized in a circular buffer manner.
2. No specific SW needed other than a terminal SW such as TeraTerm to access the log via a serial port, and it will display the log content only when a command is received.
Not sure if your log or the logs you mentioned can be modified and configured to meet the requirements above.
Thanks
Hi Zhiqun,
1. is something you can easily add/implement, and
2. is implemented already
Erich
Pingback: How to get Data off an Embedded System: FatFS with USB MSD Host and FreeRTOS Direct Task Notification | MCU on Eclipse
Pingback: How to Use Eclipse CDT Environment Variables in C/C++ Code | MCU on Eclipse
Hello @Erich, seems a very interesting library indeed! Allow me a probably noob question, but this library is only implemented in MCULib (so I'd guess it only works for NXP Processor Expert projects), so if I would repurpose it for a STM32Cube project, I'd need to reimplement it?
Thank you for your attention
No, it can work with any microcontroller, as the library and McuLib are implemented in C. I'm mainly using it with NXP parts, but also with microcontrollers from other vendors (see the list of processors in McuLibConfig.h).
Pingback: assert(), __FILE__, Path and other cool GNU gcc Tricks to be aware of | MCU on Eclipse
Hi Erich,
I’ve some questions about your logging framework. In this post is written, that it consist only of three files, McuLog.c/.h and McuLogconfig.h, but when i put those three files into my project i see there is missing McuShell_ConstStdIOType which is defined in McuShell.h. Are there other dependencies? And can i port McuLog to CC26x2 (from ti) chip in easy way?
Thanks, tm
From the time of the article, the logging framework has been enhanced a bit. While technically the logging is done with these 3 files, there are others needed for example to get a time stamp (McuTimeDate), using RTOS (mutex) (McuRTOS), writing to a file (FatFS) string utilities (McuUtility) or the McuShell for writing to an UART or similar.
As for porting to the CC26x2, you simply could remove the not needed parts and replace the McuShell with you own UART (or whatever) writing routines.
Thanks for reply, i’ll try it! 🙂
regards, tm
Hi All, We've been debugging a distributed app that uses a database server to accept connections and then queries from remote agents. Once a connection is established it is passed off to a thread, along with a local db connection from a pool, to handle the IO. We get to a point that seems to be a critical connection load where the number of active threads shoots up until the database server can no longer create a new thread after socket accept. That point is about 302 threads. In investigating the issue we found some previous posts regarding thread limits and modified a code snippet to iteratively try different thread stack sizes [1]; as it stands below, the script will create as many threads as possible for each thread stack size between 128KiB and 20MiB in 128KiB steps. What I don't understand, and I'm hoping somebody can enlighten me about, is what the practical effect of altering this setting is on the memory allocation of the interpreter. Running this on different machines with different virtual memory sizes I expected to see different values but got the same results on a box with 2x the memory of the first. I also noted that python is clearly not trying to allocate stack_size memory for each thread because on a box with only 1.5G vmem I got: -- 150 threads before: can't start new thread Setting thread.stack_size(20352K) -- and 150 * 20352KiB = 2.9GiB, which is larger than the total virtual memory - so why was I even allowed to create this many threads in the first place? I guess I'm missing something about the details of the underlying pthread implementation... what gives?
~Blair

[1] threadTest.py:

import thread, time

class threadTest(object):
    def __init__(self):
        self.stop = False

    def tester(self):
        while not self.stop:
            time.sleep(1)

    def dotest(self):
        count = 0
        try:
            while 1:
                thread.start_new_thread(self.tester, ())
                count = count + 1
        except Exception, e:
            self.stop = True
            time.sleep(2)
            return (count, e)

if __name__ == "__main__":
    page_size = 4096            # 4 KiB
    step = page_size * 8 * 4    # 128 KiB
    min_stack = 32768 * 4       # 128 KiB
    max_stack = 32768 * 640     # 20 MiB
    count = -1
    for stack_size in xrange(min_stack, max_stack, step):
        print "Setting thread.stack_size(%sK)" % (stack_size / 1024)
        try:
            thread.stack_size(stack_size)
        except Exception, e:
            print "Couldn't set stack size: %s" % e
            continue
        try:
            tester = threadTest()
            (count, e) = tester.dotest()
            del tester
            print "%s threads before:\n%s" % (count, e)
        except Exception, e:
            print "Thread count = ", count
            print e
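For anyone running this today: the same probe works in Python 3 (the thread module became _thread, and threading exposes the same stack_size() knob). A modernized sketch, not the original script:

```python
import threading

def probe(stack_size_bytes):
    """Set the per-thread stack size, start one worker, and report the
    setting that was active when the worker thread was created."""
    old = threading.stack_size(stack_size_bytes)   # returns the previous setting
    seen = []
    t = threading.Thread(target=lambda: seen.append(threading.stack_size()))
    t.start()
    t.join()
    threading.stack_size(old)                      # restore the previous setting
    return seen[0]

# The worker was created while the 256 KiB setting was active.
print(probe(256 * 1024))  # 262144
```

Note that stack_size() only sets the *reserved* stack; on Linux the pages are committed lazily, which is why the 20 MiB runs above could start more threads than physical memory would suggest.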
Give Up the Fight For Personal Privacy?
kdawson posted more than 5 years ago | from the they-know-anyway dept.
Your privacy was eroded for you (5, Insightful)
beef curtains (792692) | more than 5 years ago | (#25292167).
You might have to join them just to control them. (5, Informative)
Benanov (583592) | more than 5 years ago | (#25292209)
I basically made a facebook account so I could remove tags.
I have no applications installed. Installing ONE removes your opt-out.
Re:Your privacy was eroded for you (4, Informative)
Moridineas (213502) | more than 5 years ago | (#25292283).
I agree, the helpful details etc are annoying as anything. You can, however, UNTAG yourself from photos! If you care about privacy (as you clearly do, and I do as well), I would highly recommend untagging yourself.
Take the opposite approach. (5, Funny)
khasim (1285) | more than 5 years ago | (#25292351).
Re:Take the opposite approach. (5, Funny)
beef curtains (792692) | more than 5 years ago | (#25292561)
:)
Re:Take the opposite approach. (3, Funny)
Mr. Bad Example (31092) | more than 5 years ago | (#25292581)
> Add photos that you aren't in and tag them as you.
>
> Then add backstory for them.
They'll still be able to tell those photos aren't you.
None of the people in them will have tinfoil hats on.
Re:Take the opposite approach. (5, Funny)
Anonymous Coward | more than 5 years ago | (#25292615)
Security by obscurity has never really worked. I predict it won't protect your privacy either.
--Sincerely, Anonymous Coward
Re:Take the opposite approach. (3, Insightful)
mpapet (761907) | more than 5 years ago | (#25292669)
I hate to break it to you, but the privacy you strive for is long gone. Even if you go to a cash-only, thrift-store lifestyle, there's still lots of data being collected on you and then resold.
The kind of privacy you are discussing is the commercial kind. I don't consider it as important as the other stuff.
Just don't do anything meaningful on these social sites and you should be good to go.
I'm going to do exactly as suggested and be sure I'm recorded at multiple places at the same time doing all kinds of dumb things. I'll get knighted by the queen of Applestan and visit the Great Wall after that. I miss San Francisco. I think I'll go there next.
Re:Your privacy was eroded for you (4, Informative)
cayenne8 (626475) | more than 5 years ago | (#25292571).
Re:Your privacy was eroded for you (4, Informative)
KiahZero (610862) | more than 5 years ago | (#25292321)
You can control tags of you in your Facebook privacy settings.
Re:Your privacy was eroded for you (0)
Anonymous Coward | more than 5 years ago | (#25292349)
This is a "religious" argument. Tell them to take the picture down because you did not give them the right to interfere in your life. If that does not work, sue them.
Re:Your privacy was eroded for you (0)
Grishnakh (216268) | more than 5 years ago | (#25292459)
I'm a Linux user who only uses some Google things, like Maps and Earth. As far as I'm concerned, social networking sites are a total waste of time that are suited for teenagers. I do have a LinkedIn profile, but that's only for my professional career; I found that many other engineers I knew had profiles on there, so I put one on too, with only my professional info (nothing personal at all, not even a photo), so I can keep in touch with people I've worked with in case I need another job in the future. All my engineer coworkers on there seem to be exactly the same way: I don't see any personal info on there at all. LinkedIn seems to be set up much more for this type of use, rather than MySpace/Facebook which seem to be set up for teenagers and 20-somethings to post photos of themselves drunk and partying.
As for other friends, none of my friends have accounts on MySpace or Facebook. No one I work with, except one, ever talks about it. Maybe it's because I'm over 30, but most of the people I associate with who are my age and older (into 50s and even 70s), while very well-versed in internet things, have zero interest in the latest fads like MySpace, instant messaging, etc.
So if your privacy is gone, it's really your own fault for buying into this mass hysteria. It's really not hard at all to maintain your privacy online to a reasonable degree, though it can certainly benefit you to post up your professional information (which doesn't usually benefit you to keep private).
Re:Your privacy was eroded for you (-1, Offtopic)
Anonymous Coward | more than 5 years ago | (#25292621)
I are serious cat. This is serious thread. [roflcat.com]
Re:Your privacy was eroded for you (1)
HTH NE1 (675604) | more than 5 years ago | (#25292661)
it is sorta fun to make contact with old classmates and to laugh at ex-girlfriends who've really let themselves go.
But what if I never liked my old classmates and have no ex-girlfriends (yic)?
maybe (0, Funny)
Anonymous Coward | more than 5 years ago | (#25292179)
mod parent up! (3, Funny)
Reality Master 201 (578873) | more than 5 years ago | (#25292339)
Goddammit, we have to remember what matters!
Man are you on facebook? (0)
Anonymous Coward | more than 5 years ago | (#25292211)
Dude, you have to get on facebook.
Re:Man are you on facebook? (3, Insightful)
snowraver1 (1052510) | more than 5 years ago | (#25292305)
Re:Man are you on facebook? (4, Insightful)
0100010001010011 (652467) | more than 5 years ago | (#25292417).
Re:Man are you on facebook? (-1)
snowraver1 (1052510) | more than 5 years ago | (#25292483)
If you have the time to build a profile and upload pictures and send messages to people and add friends and maintain your "wall" and look at other people's "wall", you clearly have the time; you just choose to waste it on Facebook rather than calling the people that you care about and ignoring those you don't.
David Brin wrote about this years ago (5, Informative)
CRCulver (715279) | more than 5 years ago | (#25292217)
Re:David Brin wrote about this years ago (5, Insightful)
Anonymous Coward | more than 5 years ago | (#25292591).
Re:David Brin wrote about this years ago (1)
xant (99438) | more than 5 years ago | (#25292609)
Another great book on this subject is Clarke's The Light of Other Days. He posits that not only is privacy screwed now, but everything you've ever done in the past is also out in the open; you just don't know it yet. And he suggests that society will adapt just fine.
My position is that the powerful have more to lose from a breakdown of privacy than the "private" citizen has to fear. The loss of privacy is only a problem when the powerful get to keep theirs. As Sarah Palin's email accounts illustrate, that ain't gonna be the case for long. As the ease of copying information approaches zero, the difficulty of securing it approaches infinity. But nobody cares about you and your information, so you just need to keep from popping up in the anti-terrorist list for a few more decades until this works itself out.
Think of it as steganography... try not to stand out as long as there's only a few people in the database; but as they pile in more and more (1 million on the TSA's no-fly list), your individual exposure becomes less. Eventually your public information is just lost in the noise, the way it has always been.
And then it will be time for the powerful to answer for their secrets, and yours won't matter any more.
Transparent Society... (1)
argent (18001) | more than 5 years ago | (#25292671)
Welly welly welly, usually I'm the first and only one to mention Brin and The Transparent Society.
I'm not entirely convinced, but since I obviously gave up on keeping myself off the Internet long before it even had that name... I don't buy into the contrary argument either.
You need new friends and family (1)
SpaceLifeForm (228190) | more than 5 years ago | (#25292227)
Just do it (1)
Threni (635302) | more than 5 years ago | (#25292235)
Get a gmail/facebook etc account but use false info. Get a new account every few months or so. Don't worry about it - it's not real life.
If ignoring facebook disconnects you from friends. (-1, Troll)
plasmacutter (901737) | more than 5 years ago | (#25292245)
I think it does you a favor.
My experience is people who use social networking sites and people with an IQ over 40 are mutually exclusive.
Re:If ignoring facebook disconnects you from frien (2, Insightful)
stranger_to_himself (1132241) | more than 5 years ago | (#25292413)
Well there's at least two other people who don't use facebook, the parent post and the moderator who gave it an insightful.
If you want to protect your privacy, then fine, but do it for some actual reason, not just for the rather nebulous abstract concept of 'privacy' in itself, which is actually fairly meaningless if you think about your interactions with the rest of the world. It is necessary that people know stuff about you in order for you to function as a human being; it only becomes an invasion of your privacy when people are taking stuff you don't want them to and spreading it around for others to see.
Re:If ignoring facebook disconnects you from frien (2, Funny)
eln (21727) | more than 5 years ago | (#25292469)
Well there's at least two other people who don't use facebook, the parent post and the moderator who gave it an insightful.
You'd think so, but actually the moderator is a regular Facebook user who just didn't know what the word "Insightful" meant.
Re:If ignoring facebook disconnects you from frien (3, Insightful)
Anonymous Coward | more than 5 years ago | (#25292427)
And my experience is the opposite. I guess our anecdotes cancel.
The OP should get over it - Facebook became popular partly because it provides very fine grained privacy controls. I blocked photos of me being visible from my profile some time ago - friends can still tag me but there's no way to find those photos except through brute force search, and you have to be friends with my friends to see those photos.
Also, classifying GMail with Facebook is sort of a red herring, I think. Facebook exists to let you publish personal information. GMail does not. If you keep your email in GMail then chances are excellent you'll be the only one to ever read it. There are a handful of engineers at Google who can read people's mail and they are busy guys. Having your data read by machines really isn't the same.
Re:If ignoring facebook disconnects you from frien (0)
Anonymous Coward | more than 5 years ago | (#25292489)
Back to grandma's basement you!
Ideals (2, Insightful)
Applekid (993327) | more than 5 years ago | (#25292257)
Sticking to your ideals isn't always easy. Sticking to them in hard times demonstrates how important it is.
The compromise is to not give in to everyone, just to be selective. I'd much rather trust Google, given how useful their stuff becomes when you do trust them, than trust, say, Microsoft, who would request your information (that old registration bit) and use it exclusively for marketing and later BSA audits.
Re:Ideals (1)
Locklin (1074657) | more than 5 years ago | (#25292465)
Being selective with the content is probably more important than being selective with the company.
Considering Google seems to be going in the direction of data mining virtually everything, I don't know if I would trust them with data more than Microsoft.
Re:Ideals (0)
Anonymous Coward | more than 5 years ago | (#25292507)
Sticking to your ideals isn't always easy. Sticking to them in hard times demonstrates how important it is.
I'm certain Jack Thompson would agree with that wholeheartedly. And we all see where it's gotten him.
So the question is, do you really think it's important enough to be branded an outcast loony and be alone no matter how "right" you "know" you are, or is there compromise to be made?
do it (0)
Anonymous Coward | more than 5 years ago | (#25292261)
Web 2.0 yes, but pseudonymized (1)
bratgitarre (862529) | more than 5 years ago | (#25292263)
Re:Web 2.0 yes, but pseudonymized (2, Insightful)
stranger_to_himself (1132241) | more than 5 years ago | (#25292461)
I'm still using a credit card and say yes to pretty much every cell phone or application EULA, but I think these are less likely to hit me in the long run than publicly available and mineable personal information over which I essentially have no control.
In what way are they likely to 'hit' you?
Anonymous Coward | more than 5 years ago | (#25292279)
Join their groups, you'll make new friends that have a similar mindset to yourself.
Re:Learn about TOR (1)
larry bagina (561269) | more than 5 years ago | (#25292605)
How is this any different from the real world? (3, Insightful)
i_ate_god (899684) | more than 5 years ago | (#25292293)
So, instead of going to a bar to discuss things where I can overhear them, you lay it all out on your facebook profile instead, where I can overread them.
So what? Who cares if your likes or dislikes are posted for all to see?
I LIKE JUNO REACTOR AND SEX
See? Was that so hard? Has my life become worse now that you know this? Facebook isn't going to make your life any less private than when your girlfriend talks to her girlfriends about your impotence. Stop being so paranoid. This isn't a new world of TOTAL INFORMATION AWARENESS.
Re:How is this any different from the real world? (0)
Anonymous Coward | more than 5 years ago | (#25292485)
I agree. I just don't see the problem with this loss of "privacy". Why is it so important to keep this information out of the public eye? Who cares if some stranger knows you were at a wedding on a certain date? What are you doing that's so embarrassing?
BTW my name is Dave Owen, I always use my real name online and I don't care who sees it. Never had a problem in 15 years online.
Re:How is this any different from the real world? (3, Insightful)
Anonymous Coward | more than 5 years ago | (#25292495)
Ten minutes later, you won't remember the bar discussion anymore. Ten years later, the database storing the facebook profile information is still around, and all manner of government agencies and/or advertising companies will be happily querying through it.
Re:How is this any different from the real world? (0)
Anonymous Coward | more than 5 years ago | (#25292623)
HAY I LIKE JUNO REACTOR AND SEX TOO! WE CAN BEEN FRIENDS? LOL
hay look me in pink hair gay sex show. But LOL I tagged you in it! LOLOL.
I don't get it... (5, Insightful)
Otter (3800) | more than 5 years ago | (#25292303)
If you're asking whether I personally am impressed by someone bragging about how he refuses to use Facebook or GMail: it impresses me about as much as someone who brags about not having heard of some television show.
Re:I don't get it... (4, Insightful)
Bogtha (906264) | more than 5 years ago | (#25292593)
In fact, the entire submission reads like a pastiche of Area Man Constantly Mentioning He Doesn't Own A Television [theonion.com]. I understand wanting to protect your privacy, but this guy really does seem to treasure the fact that he is clueless about Facebook etc. Whenever I've heard anybody say anything like "their Facebook 'wall' (whatever that is)", it's always been with a condescending "I'm too good for crap like that" tone. This guy doesn't want privacy, he wants to feel better than everybody else.
Re:I don't get it... (0)
Anonymous Coward | more than 5 years ago | (#25292663)
Well, get ready to be blown away: I do not even have a TV!
It's the people, stupid! (0)
Anonymous Coward | more than 5 years ago | (#25292307)
I don't have a telly. My dad will NOT use email.
I post stuff to him. I may have done more and more frequently if he'd had email, but this is in some ways better, because I'm not mailing any old shite to him.
I still listen to friends who talk about the telly: it's partly so they can relive the experience and partly, if there's a bit of a story to it, I get it anyway.
It would be the same with Facebook / AIM / GoogleWhanger. You don't turn off when they talk about it, just listen to them tell the story. It's not as if you're required to have been there. If you'd been elsewhere, you'd still have missed it and they would still have told you.
So just let them talk about what amusing thing was on YouTube or whatever. Listen and imagine what it COULD have looked like and see if you enjoy the thought. Or just enjoy them remembering what it was like.
"Being included" doesn't mean you have to join them. Just that you'll enjoy listening.
Run your own forums... (1, Insightful)
Anonymous Coward | more than 5 years ago | (#25292311)
Set up your own private web forums that you have privacy control over and get your friends/family to use it. This works like a charm and is basically how I stay in touch with all my friends dispersed all over the place.
Anonymous Coward | more than 5 years ago | (#25292313)
You will always be "out" of the loop as you will be signed up to the "wrong" website. Yesterday MySpace (RupertSpace), today Facebook, tomorrow ??.
You will be signing up to a life of chasing the next "IN" thing and worrying about your online profile. Forget about it all. Keep your relationships face-to-face with the people in your life and enjoy life.
Stick to your guns (2, Insightful)
hojo52 (1380525) | more than 5 years ago | (#25292315)
Re:Stick to your guns (2, Informative)
Ethanol-fueled (1125189) | more than 5 years ago | (#25292391)
I don't know about facebook but MySpace has decent privacy options and controls on who sees what of yours. I don't have a facebook page but I do have a MySpace page and everybody has one or the other if not both. My MySpace is set up thusly:
- My profile and my pictures are set so that only my friends may view them
- I don't have any incriminating pictures or words on my page anyway
- I use some of these [slashdot.org] codes to hide my friends list from everybody(including my friends) to prevent gossip. Comments may also be hidden. If you can't figure out how to do that then you shouldn't be here!
Use a browser with privacy options and plugins and set it to not remember anything except cookies and to delete everything every time it closes. Don't click on the ducks or the monkeys. Don't run e-mail attachments. Use a firewall: iptables works very well. Never use your real information when filling out ANYTHING except for financial or employment purposes.
Reverse (4, Insightful)
Rinisari (521266) | more than 5 years ago | (#25292317).
"I'm not doing anything illegal" (5, Insightful)
maillemaker (924053) | more than 5 years ago | (#25292563)
Err.. (4, Insightful)
TheSpoom (715771) | more than 5 years ago | (#25292327).
secret identity (3, Funny)
OglinTatas (710589) | more than 5 years ago | (#25292333)
appropriate to this topic:
cat and girl [catandgirl.com]
Resistance is futile (4, Insightful)
fiannaFailMan (702447) | more than 5 years ago | (#25292337).
Maintain privacy, except on Slashdot (4, Insightful)
totallygeek (263191) | more than 5 years ago | (#25292341)
So, you don't want anything posted on places like Facebook, showing a list of your friends along with articles you have written, journal entries, ties to items you have posted about, etc. But, you have no problem with the same on Slashdot?
Four friends listed
A page filled with your posts to submitted articles
Three journal entries
Three fans
I know some people on Facebook that maintain some privacy: one never fills in all the fields or puts in erroneous information, one puts her middle name as her last name and posts an avatar instead of a photo.
Choose wisely (0)
Anonymous Coward | more than 5 years ago | (#25292343)
"And the smoke of their torment rises for ever and ever. There is no rest day or night for those who worship the beast and his image, or for anyone who receives the mark of his name." (Rev 14:11)
Seriously? Get over yourself. (0)
onion2k (203094) | more than 5 years ago | (#25292357)
You aren't important enough for Facebook/Google/the government/anyone to bother invading your privacy in any meaningful way. Very few people are. The companies that gather huge amounts of data about us want exactly that - huge amounts of data. That's when it becomes useful (and more importantly, valuable). Stuff about any one individual is next to useless. You can splatter your entire life history all over the internet and on the whole no one will care, or even notice (with the obvious exception of your bank details - they are useful to the more nefarious members of society).
So yeah, carry on being a "private citizen" and withhold all your data. The 'man' will have as much on you as they do on me; and I have a Facebook page, MySpace page, and accounts on dozens of forums. Because we are completely unremarkable. The only difference is that I have accepted it. Nay, embraced it!
Welcome, Slashbot (0)
Anonymous Coward | more than 5 years ago | (#25292361)
You represent the .00001% of society that holds the views of this beloved blog.
Amateur (1, Funny)
Anonymous Coward | more than 5 years ago | (#25292363)
I have removed my fingerprints with acid and have had facial reconstruction surgery. I dye my hair. I uninstalled windows and then burned my computer. I cancelled my phone then dug up the phone line on my property. I cancelled all other utilities and dug up the mains on my property. I moved my mailbox and house number to the neighbor's property. I pay the neighbor to act as my mail/home address firewall. I regularly kill my neighbor and take back the money. Inside my exterior house is another tinier house in which I live. Inside that house is another even smaller house in which I actually live. I also never agree to EULAs.
use gmail for select few (1)
hansoloaf (668609) | more than 5 years ago | (#25292365)
Give a hoot - do pollute! (1)
DavidHumus (725117) | more than 5 years ago | (#25292367)
For instance, something I've done for years is to subscribe to magazines, etc. with slightly different versions of my own name. As others have also done, I started by using a different middle initial for different subscriptions. As the namespace became more crowded, I branched out to using dual middle initials and variant spellings of my name and address.
Similarly, sign up for different online services with variants of your name, birthdays off by days or years, etc.
If enough of us do this for long enough, the waters will be hopelessly muddied.
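The name-variant trick described above is easy to systematize: derive a deterministic variant per service, so that when junk mail later arrives addressed to that variant you know exactly which service leaked or sold your data. A small sketch of one way to do this (the function name and hashing scheme are my own illustration, not from the post):

```python
import hashlib

def tagged_name(base_name, service):
    """Derive a service-specific middle initial from a hash of the
    service name, so leaked data can be traced back to its source."""
    digest = hashlib.sha256(service.lower().encode()).hexdigest()
    initial = chr(ord('A') + int(digest, 16) % 26)  # stable A-Z initial
    first, last = base_name.split()
    return "%s %s. %s" % (first, initial, last)

if __name__ == "__main__":
    # The same service always yields the same variant; different
    # services yield (usually) different ones.
    print(tagged_name("David Humus", "example-magazine"))
    print(tagged_name("David Humus", "example-catalog"))
```

Keeping a small log of which variant went to which service turns any future misaddressed mail into evidence of who resold the data.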
Still anonymous online here... (1)
g0bshiTe (596213) | more than 5 years ago | (#25292371)
Websites I sign up for require usernames, real names etc. I usually hit these with a moniker of some sort that has nothing to do with my true identity.
What's that you say? They require an email address that can be traced to me? Well, no they don't. I have a hotmail address that I registered back in 1995 that I still use. Ah, the days of Snap being my search engine of choice; I digress. That hotmail account was registered with a valid email account at the time linking me to my true identity, but I have since stopped that account and I'm sure that no ISP retains user data 10+ years. Can I still be traced? I'm sure some hardcore digging could turn up my identity, but as for my MySpace and Facebook, if I don't already know you then you aren't on my friends list. You don't post me messages, and I don't circulate those cutesy little "When was your first kiss" questionnaires; they get deleted. I don't even read them. I don't browse many profiles of people I don't know. So I think, in all, I'm relatively unknown as far as online identity presence.
Fake info (1)
Xaemyl (88001) | more than 5 years ago | (#25292373)
So, just make up fake information to feed into these sites. That way, you stay connected to those you want to stay connected to, and whatever private information they have is fake. Same concept with spam-dump emails
...
Some of us are too old/uncool... (0)
Zordak (123132) | more than 5 years ago | (#25292379)
I've been trying to take the privacy back... (1)
Yaddoshi (997885) | more than 5 years ago | (#25292381)
There came a point where I was sick of depending on "free" services such as Yahoo! and Google, and as a result I established my own Drupal web and Squirrel e-mail server (and I'm getting ready to embed chat into my main website to take care of that little nuisance). So... I've somewhat weaned myself off the system.
That being said, I still have accounts on LiveJournal and MySpace (though I am resisting Facebook), I have a Gmail, a Yahoo! Mail and a Hotmail account - and all of this was set up so that I could keep in touch with friends via IM and also as a way to divert spam from my real e-mail account. I've been trying to wean myself off of these "free" services while simultaneously inviting my friends and family to come on board my own equivalent packages, with some success, but not as much as I had hoped.
Wish I had a better answer for you, but I'm still trying to figure it out for myself. One of these days I'll probably snap and delete every account on services that I don't own.
Make sure you use an alias tho, that does make a big difference.
Participate! (2, Interesting)
sneakyimp (1161443) | more than 5 years ago | (#25292395)
I share a lot of your concerns but I think you might be going so far as to be antisocial. If you have nothing to hide, there's no reason to be hidden. Don't be afraid to participate in society.
On the other hand, I do worry about Orwellian tendencies among government and business. E.g., If I buy cigarettes for my friend using my bank card, will my health care be canceled?
I have found a hosts file () to be very useful in protecting myself from malware and nosey ad tracking stuff.
I have signed up on facebook.com. It's nice to hear from old friends. I don't spend any time there though. I have never once been to twitter.
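The hosts-file approach mentioned above works by resolving known ad/tracking hostnames to a non-routable address, so requests to them fail locally before ever leaving the machine. A minimal illustrative fragment (the blocked domains here are placeholders, not a real blocklist):

```
# /etc/hosts -- entries after localhost send tracker lookups nowhere
127.0.0.1  localhost
0.0.0.0    ads.example.com
0.0.0.0    tracker.example.net
```

Community-maintained hosts files extend this to many thousands of entries; the mechanism is the same.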
Re:Participate! (1, Interesting)
Anonymous Coward | more than 5 years ago | (#25292673)
If you have nothing to hide, there's no reason to be hidden. Don't be afraid to participate in society.
I'm a homosexual atheist living here in Atlanta (The deep God fear'n Republican stronghold evangelical south - for those of you reading from other countries.). I vote Democratic.
I would like to have a Slashdot account but I really like Microsoft's products and I really dislike Linux, GNU, and anything F/OSS for various reasons.
So, you're saying that if I identify myself, I would be alright and nothing bad would happen to me? I won't be flamed and modded down into negative territory if I opened an account here on Slashdot? Meaning, after expressing my views I wouldn't be an outcast forever posting at "-1" ?
Just making a point here.
It's about control (1)
JustNiz (692889) | more than 5 years ago | (#25292399)
It's OK to be a member of such things as Facebook, as you can directly control/limit what you make available just by not putting it up there in the first place.
But I would never have my personal documents stored on some remote server, for example. This is one reason why I, for one, will never be a customer for this whole "software as a service" model Microsoft et al are chasing.
You have no privacy. Get used to it. (1)
jjohnson (62583) | more than 5 years ago | (#25292405)
You say you preserved your privacy by eschewing the variety of hosted/social networking sites that your friends and family use. Have you also eschewed credit cards? Driver's license? All airline travel? Property ownership?
What you identify as the frontier of privacy is just the most visible loss of your privacy, the publicizing of yourself. You already exist in hundreds of government and corporate databases, both your vitals and your histories, in ways that are badly protected. Your only safety is that you're part of a gigantic herd that exists there with you, making your odds of being singled out lower.
This is what Larry Ellison meant when he said "You have no privacy. Get over it." Staying off gmail and facebook and LinkedIn is a hair shirt exercise in futility. That doesn't mean you have no privacy, but that what privacy you have is the privacy of politeness--what you and others choose to discuss (or avoid discussing) in public.
Not Black or White. (2, Insightful)
JustinOpinion (1246824) | more than 5 years ago | (#25292423)
The question is phrased in a sort of black/white manner: either you fight tooth-and-nail to maintain maximum privacy, or you give up and sign up for every crazy privacy-eroding service.
The obvious answer is "all things in moderation." I consider myself privacy-conscious. I don't run Windows. I do use Facebook and Gmail. However I use them with privacy in mind. So my Facebook profile has very little information, has privacy options set quite high, and I only accept friend invites from people that I reasonably trust. (So many people seem to get sucked into the "I need my friend count to be higher" game--which invariably means accepting invites from strangers.)
My strategy works, more or less. There are times when friends reveal information about me online I would rather they didn't (e.g. tagging me in photos on Facebook). But you can't completely prevent these kinds of things. In the same way that friends can give out your phone number or gossip about you in real-life, there will be some privacy loss online. The goal should be to keep things private without it becoming a burden to do so.
It sounds like you're taking the privacy thing too far--to the point that it's harder for you to socialize and enjoy life. So loosen your rules a little bit. Remember that every company (the power company, the cable company, your bank, etc.) has tons of privacy-eroding data on you. Online companies will also get some privacy-eroding data. But as long as you keep it within reasonable bounds, then it won't cause a problem.
Remember, privacy isn't really something that has to be maintained for its own sake. Privacy is a means for you to enjoy your life free from bother, and to prevent people harming/taking advantage of you. Calibrate accordingly.
A small loss of privacy is okay if it achieves the greater objective of making you happy.
Stop being contrary. (1)
Natty (51284) | more than 5 years ago | (#25292425)
While it can be cute to "eschew" everything mainstream, really, you're just being contrary. Facebook is cool. It keeps your friends in the know about you and you in the know about your friends. Yeah, you give up "privacy", but any sort of sincere interaction with another person is going to entail that. Whether you're on the cell phone, typing out an email, or shooting the shit in person, you've got to reveal some of yourself. The perfectly private human being is an opaque and lonely one.
What's Windows got to do with it? (0)
Anonymous Coward | more than 5 years ago | (#25292431)
I don't really see what Windows has to do with losing privacy; it's perfectly possible to maintain privacy on a Windows-based system. For all the stories about how insecure Windows is, it's not some privacy risk unless you allow it to be one. No amount of Windows flaws will force you to enter your most personal details, and there's no reason your system need be rootable if you use common sense and don't run things you shouldn't; this may or may not include things such as Javascripts, depending on your browser and the sites you visit.
When bringing anti-Windows propaganda into it, it sounds more like what this person actually cares about is fighting the system rather than simply privacy, and so I think the answer to the question needs to be a question in itself - what are you really trying to achieve? If you want to be different and fight the norm then carry on, you're doing fine as is. If however it's simply privacy you care about, then some of the issues you raise aren't related to privacy, so folding on them won't cause you any loss of privacy but will allow you to join in those more mainstream activities you talk about.
I work on the simple principle that if I've entered personal information onto the internet, I can't realistically trust that it's limited to that particular site and must assume that it's in the hands of anybody. I have made the conscious decision that I am happy for my name and address to be on the internet, and whilst I don't want it posted left, right and centre, I do at least accept that if someone wanted it they could likely now get it easily enough. I even accept that having purchased things online my credit card could be available left, right and centre too; however, I ensure that I am covered should this ever be an issue. Similarly, I accept my e-mail address is probably fairly widely distributed - well, one of them, because I have a public and a private one. The public one I will use with forums and will never make the assumption it's unknown to anyone; I assume anyone might have it. My private one, however, I keep limited to a small trusted set of people. I accept that this could leak, but I feel it is less likely to.
The real question is how much privacy you're trying to maintain, it's possible to enjoy many features of the modern technological world without instantly becoming a victim of identity theft or by giving away your lifes intimate details to any number of secret services around the world. Privacy doesn't have to be a none or all game.
What about Windows? (4, Insightful)
Wee (17189) | more than 5 years ago | (#25292439)
Anonymous Coward (1, Insightful)
Anonymous Coward | more than 5 years ago | (#25292477)
I personally will never give up the fight. When people begin conversations about something on their 'wall' or 'myspace', I question them on their need to be connected to everyone and everything at all times. Usually I get a 'Dude, everyone's doing it'.
Everyone lies too, don't make it right. My parents raised me to not be part of the crowd. To think above the common accepted norm.
All my friends that have face books or their spaces know I don't want pictures of me posted. They respect my wishes, that's just common courtesy. --Well for people with IQ's over 40 anyway.
Lastly, HANG UP THE FUCKIN PHONE AND DRIVE!!..
Let me get this straight (5, Funny)
blitzkrieg3 (995849) | more than 5 years ago | (#25292487)
It can go two ways (2, Interesting)
MLCT (1148749) | more than 5 years ago | (#25292493)
In ten years time either all of the "facebook" stuff will be seen as a fad, and joked about as a fad - forgotten and irrelevant. Or it will still be "big" and they will know and capitalise on every single aspect of every single person's private data.
Personally I suspect it will be the former scenario - the "2.0", "social-networking" stuff is just a buzz - a hyper money fuelled fad. The whole thing is an attempt to generate a self-fulfilling prophecy. Facebook worth fifteen billion dollars? Give me a break. The entire bubble has been fuelled on speculative hot air - "if I say it is valuable and the next big thing, then it is". As the stock market has so ably proven over the last few weeks - fads and self-fulfilling prophecies never last.
There was an analogy that was doing the rounds on the "privacy-less age" that we are supposed to be living in. It drew comparisons between the nineteenth century reluctance people had to put money into banks and today's reluctance to protect your private details. We now deposit most of our assets with banks and think nothing of it, the analogy being that in the future the same will be with our private information. Of course like most analogies it is fundamentally flawed to compare the two things - but I couldn't help but smile when, over the last month, I see people questioning to withdraw their money from banks that are on their knees.
Hiding isn't such a good idea (2, Insightful)
PPH (736903) | more than 5 years ago | (#25292497)
If someone wants to find you, or find out about you, they'll keep looking until they've found you. Or until they think they have.
Get a GMail account, a Facebook page and otherwise conduct yourself as the typical clueless user with a wife, 2.1 kids, a dog and a house with a white picket fence. When 'they' go looking for you, that's what they'll find. Then , they'll go away.
Conduct your clandestine activity anonymously, or using some manufactured identities. Leave your cell phone at home and don't drive your own car (or at least switch plates). Bury bodies in someone else's back yard.
Privacy is Lost, Focus on Responsibility (1)
steve_thatguy (690298) | more than 5 years ago | (#25292515)
I came to the conclusion within the last year that privacy is a product of a bygone era. The fact is technology has made it too easy to erode privacy, corporations have made it too profitable, and governments have made it too desired. I read an article that within five years we'll be able to carry enough storage on our person to record every waking second of our lives for a year. It's only a matter of time before people no longer have to blog, they'll just have to live and technology will allow us to record it all as it happens.
Privacy as a concept will not exist for our children's children, if not for our children directly. They'll know the word and the meaning but they won't understand at a deeper level what it is like to go outside and not have to wonder whether or not they're being watched.
The most concerning part of this for myself is and has always been the potential for abuse this has by governments and law enforcement. However now that I've accepted there's no avoiding our future as a surveillance society I've realized the solution. We must make sure that the surviellance and lack of privacy is extended and in fact *led* by government and law enforcement. If governments and law enforcement would be willing to sacrifice their own privacy first it would help (albeit mildly) make the sacrifice of privacy by citizens a little easier to swallow. Also it introduces accountability to the people which is essential. Hopefully the experience of their own loss of privacy will temper their judgment with how to use their ability to invade the privacy of others.
I only hope that within the next ten years we see a strong movement toward transparency and accountability in public officials and public servants. It's the only way to avoid 1984.
Linux is not secure either. (0)
Anonymous Coward | more than 5 years ago | (#25292527)
Remember the Debian SSH key scandal.
Re:Linux is not secure either. (1)
gujo-odori (473191) | more than 5 years ago | (#25292617)
I realize this is feeding a troll, but first of all, it was not a scandal; it was security vulnerability. A scandal is when a vendor is aware of a vulnerability and, while failing to fix it and release a patch, seeks instead to just conceal the information from the public and customers alike for as long as possible.
Debian, like most FOSS vendors, fixed the problem, with full disclosure, very quickly. The scandals typically affect proprietary vendors, and you'd have to look long and hard to find one without some of those skeletons in its closet.
Is Linux secure? Sure. Very secure, generally. Absolutely secure? No. No computer is, not even in its original shipping material and powered off with no OS installed. Even then, the machine could be stolen. But on the security continuum, Linux is more secure than Windows or OS X.
Don't add any personal info (1)
ensignyu (417022) | more than 5 years ago | (#25292529)
Just give them as little information as possible. I think only your name and email address are required, and any personal info fields you fill out (which are pretty much all optional) can be restricted so they're only visible to your friends.
The wall is a little annoying privacy-wise because anyone you give access to your wall can see what everyone else has posted on your wall. You could still disable your wall and rely on private messaging though.
Essentially, if you keep an empty or locked-down profile, it's like having an entry in the phone book, except you don't have to give out your phone number. Of course, Facebook encourages totally random people you haven't seen in decades to try to "friend" you, so I guess if you'd rather not have any contact at all you might want to stay off Facebook. But otherwise it's not too bad.
I have several identities. (1)
Colin Smith (2679) | more than 5 years ago | (#25292541)
I can manage my privacy at the press of a button. Wipe my cookies and become another identity. I can define my privacy as I like.
Re:I have several identities. (1)
argent (18001) | more than 5 years ago | (#25292597)
Mister... Smith. It seems you are living
... two lives...
Only one of these... has a future...
what a drama queen (5, Insightful)
circletimessquare (444983) | more than 5 years ago | (#25292545)?
Re:what a drama queen (0)
Anonymous Coward | more than 5 years ago | (#25292655)
"If I don't care about it, it can't possibly be important!"
Yes, that IS what you are saying. And you ARE a moron for saying it.
Better one than none (1)
meist3r (1061628) | more than 5 years ago | (#25292553)
Privacy is a sliding scale. Make a choice. (1)
SecurityGuy (217807) | more than 5 years ago | (#25292567)
It's not all or nothing. Live in a cave or put your entire life on facebook. You can keep some of your life private, and still connect with friends and coworkers in social media.
It's all about a little discretion. Just step back 100 years and this conversation is about talking. Do you blurt out everything about yourself to everyone, or take a vow of silence?
They're both a bad answer. Talk some. Set reasonable limits, but don't be a digital hermit. There are going to be bumps, like some of my friends who post their religious and political affiliations may learn, but opening up a little and admitting we disagree about things, but can rationally disagree and still find value in each other is a really good thing. Today, your future boss may look you up and not hire you because you're a rabid McCain or Obama supporter. Hopefully someday soon, they'll just see that your friends think you're a good programmer and not really care about your politics, because hey, everybody has some opinion about politics and it's a plus that you care at all.
Can they track us all... (1)
colinbg (757240) | more than 5 years ago | (#25292573)
Is it worth it? (4, Interesting)
jibjibjib (889679) | more than 5 years ago | (#25292575).
Lost in the crowd (4, Interesting)
harl (84412) | more than 5 years ago | (#25292611).
Decide what's private (1)
Todd Knarr (15451) | more than 5 years ago | (#25292613)
Decide what you consider "private". I'm not worried about things like my name, address and phone number appearing on FaceBook. I'm in the phone book, anybody who can read and has any interest can find them trivially. Given that, merely having a FaceBook account isn't a privacy problem. What's problematic is the tracking the various FaceBook gadgets can do even when you're not on FaceBook. Some configuration of my browser eliminates that problem (as long as I remember to keep FaceBook in it's own browser session so it can't see anything from my non-FaceBook browsing). Detailed information on my social life? I simply don't post that on FaceBook. I've other places to put that kind of stuff, places that give me more control over who sees it. Photos? That's a decades-old problem, and I deal with it on FaceBook with the same rule I've used since college: if it's something I wouldn't want widely published, I make sure either I get control of all prints and the negatives or I don't allow the photo to include me.
And finally, I keep track of what my friends are doing. If they're in the habit of making things about me public that I've asked them not to, I reconsider just how good of a friend they are. I'm a grown adult, I'm fully capable of making friends with people with a modicum of discretion.
Orkut (1)
Andr T. (1006215) | more than 5 years ago | (#25292619)
I'm just not that interesting (0)
Anonymous Coward | more than 5 years ago | (#25292631)
There isn't anything about me I care to broadcast to the world so I don't need a profile on anything or a blog.
The important people in my life I have in my cell phone directory. Anyone who I would want to contact me has my number, anyone who doesn't already isn't that important.
I have gmail and I use it for everything as well as my friends and it is even under my real name. But I use SneakEmail and mailinator for and the sadly crippled bugmenot for websites that require an email confirmation to get in the door.
I finally got my dad to stop forwarding the latest funny thing he found. If only I could keep my idiot friends from using my email on evite or web cards or whatever spam harvester is the flavor of the month....
Decide what is important to you, and if you are "left out" because you don't join the latest fad then chances are you were never "in" to begin with.
Have you considered that you are drawing attention (1)
Maxo-Texas (864189) | more than 5 years ago | (#25292643)
By being so different from the masses?
Use Gmail. (1)
Ortega-Starfire (930563) | more than 5 years ago | (#25292645)
Options (1)
SlashDev (627697) | more than 5 years ago | (#25292647)
Garbage in, garbage out. (2, Insightful)
CrAlt (3208) | more than 5 years ago | (#25292651)
I have a facebook. Its just a nickname with a false real name. Very generic and no photo's. It keeps me in the loop with people who insist on using if for everything. It always blows my mind some stuff that gets posted. Both Images and information. People who post real names with real photo's are just ASKING to be burnt. Does your boss really need to know you went out and got drunk and stoned last weekend? Does everyone in your office need to know who you are screwing this week?
My email is with my ISP. You can still email(for now) gmail users.
Any other type of online service i need to use I just put bullshit information in.
Who cares who sees that garbage.
Basically, We're Doomed (5, Funny)
mkcmkc (197982) | more than 5 years ago | (#25292667). | http://beta.slashdot.org/story/108241 | CC-MAIN-2014-15 | refinedweb | 8,292 | 71.14 |
> On Dec 1, 2016, at 12:13 PM, Jean-Paul Calderone <exar...@twistedmatrix.com> > wrote: > > On Thu, Dec 1, 2016 at 2:14 PM, Glyph Lefkowitz <gl...@twistedmatrix.com > <mailto:gl...@twistedmatrix.com>> wrote: > >> On Dec 1, 2016, at 10:51 AM, Jean-Paul Calderone <exar...@twistedmatrix.com >> <mailto:exar...@twistedmatrix.com>> wrote: >> >> Hi, >> >> In the last couple days I've noticed that there are a bunch of spurious >> changes being made to tickets in the issue tracker. These come from commit >> messages that reference a GitHub PR that happens to match a ticket number in >> trac. >> >> For example, >> <> >> >> I guess this doesn't really hurt anything ... except it's dumping a constant >> low level of garbage into the issue tracker and generating some annoying >> emails (that end up having nothing to do with what the subject suggests). > > This is, unfortunately, going to keep happening more frequently as the PR > numbers get higher and the corresponding Trac tickets get less sparse. > > The way I'd like to address it is to change the format of our commit message > to namespace Trac tickets differently; instead of just "#", using a URL, like > "Fixes <>". I wouldn't even mind if we > just had to use the Trac wiki syntax for this, i.e. "Fixes [ticket:1234]" as > long as we could turn off the "#" syntax which Github also uses. > > However, this involves surgery within Trac's code, and for me personally, the > work required to find the relevant regexes and modify them is worse than > continuing to deal with the annoyance. However, I would very much appreciate > it if someone else would take this on :-). > > Where's the source for Twisted's trac deployment?
Advertising > Is it actually possible to deploy modifications? There's probably still an undocumented setup step or two that we've missed - but after following <> 'fab config.production trac.upgrade' ought to do the trick. Allegedly it's even possible to set up a test development environment as per <> :-). I haven't made any major changes since all these docs were added so I'm just following them from the beginning for the first time myself now. but certainly the prod-deploy process has worked fine for me many times on various services. > I'll take a look, if so. Please be vocal about any roadblocks you hit. The ops situation has improved a ton since the last time you looked, but (accordingly) it's also changed almost completely. Good luck - and hopefully you'll need a lot less of it than previously ;-). -glyph
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com | https://www.mail-archive.com/twisted-python@twistedmatrix.com/msg11984.html | CC-MAIN-2017-04 | refinedweb | 434 | 64 |
We’ll start with a basic overview to see what working with GHCi is like: how to run it, how to use commands, and how to read its output.
Starting GHCi
Assuming you already have a working installation of GHC on your machine, you should be able to open a GHCi session by typing
ghci on the command line from any directory that can access GHC.Both of the major build tools,
cabal and
stack, have their own commands to open project-aware REPLs, but using those will not be our focus here. Further documentation is available here for
cabal users and here for
stack users. One benefit of using these is that opening a GHCi session from within a project directory using, e.g.,
stack repl, may load your
Main module automatically and also make the GHCi session aware of the project’s dependencies.
$ ghci
Haskell expressions can then be typed directly at the prompt and immediately evaluated.
λ> 5 + 5 10 λ> "hello" ++ " world" "hello world" λ> let x = 5+5 ; y = 7 in (x * y)70
GHCi will interpret the line as a complete expression. If you wish to enter multi-line expressions, you can do so with some special syntax.
λ> :{ > let x = 5+5; y = 7 > in > (x * y) > :} 70
Invoking GHCi with options
You can open GHCi with a file loaded from the command line by passing the filename as an argument. For example, this command loads the file
fractal.hs into a new GHCi session:
$ ghci fractal.hs
If that module has dependencies other than the
base library, though, they won’t be loaded automatically. We’ll cover bringing those into scope in a separate section, below.
You can also open GHCi with a language extension, for example, already turned on. For example,
$ ghci <elided opening text> λ> :type "julie""julie" :: [Char]
But if you pass it the
-XOverloadedStrings flag, then that language extension will be enabled for that session.
$ ghci -XOverloadedStrings <elided opening text> λ> :type "julie""julie" :: Data.String.IsString p => p
A great number of other GHC flags can be passed as arguments to
ghci in this fashion. For the most part, we will cover those as they come up in other contexts, rather than attempting to list them all here.
Your GHCi configuration, if you have one, will be loaded by default when you invoke
ghci. You can disable that with a flag:
$ ghci -ignore-dot-ghci
Packages
There are a few ways to bring modules and packages into scope in GHCi. One is using
stack or
cabal to open a project-aware REPL, which usually works well to bring the appropriate dependencies into scope. However, there are a few other options.
You can import modules directly in GHCi using
import, just as you do at the top of a file if your GHCi is already aware of the package the module comes from.
λ> :type bool <interactive>:1:1: error: Variable not in scope: bool λ> import Data.Bool λ> :type boolbool :: a -> a -> Bool -> a
All modules in
base are fair game for importing, as are modules in a project-aware GHCi session or a GHCi session that has been invoked with the
-package flag, thus loading the package into the session.
All the same import syntax is available for this, such as
hiding and
qualified.
The
base package is always loaded by default into a GHCi session (as is the
Prelude module, unless you have disabled that), but that’s not, of course, the case for many packages. However, your GHC installation came with a few packages that are available but not automatically loaded into new GHCi sessions. You can find out what you have available by running
$ ghc-pkg list
on the command line. You should see a list of all the packages that are installed and available. Modules from any of those listed packages can be directly imported, just as if they were in
base. So, assuming your list includes
containers, you can type
import Data.Map or the like directly into your GHCi session, regardless of whether it’s one of your project’s dependencies – or if you even have a project going.
If you are using a
stack or
cabal REPL, then there may be many more packages in their local package lists that are available to new GHCi sessions. If you have previously opened a GHCi session with something like
$ stack repl --package QuickCheck
then
stack will have installed
QuickCheck and, if in the future you open a
stack repl session but forget to pass the
--package flag and then suddenly you realize you want to make
QuickCheck available in this GHCi session, you can
:set -package within GHCi to bring it into scope:
λ> :type property <interactive>:1:1: error: Variable not in scope: property λ> :set -package QuickCheck package flags have changed, resetting and loading new packages... λ> import Test.QuickCheck λ> :type propertyproperty :: Testable prop => prop -> Property
It’s really handy not to have to restart a GHCi session just to load a package you use frequently!
Commands
First let’s start with a couple of basics: you can use the up-arrow, down-arrow, and tab-complete in GHCi, so if you are already comfortable with these from your bash shell or what have you, you’ll enjoy this.
Furthermore, if you are in a GHCi session, shell commands can be made available using the
:! GHCi command. For example, let’s say we’ve forgotten what directory we’re in and what files are in this directory, but we don’t want to quit GHCi to find out. No problem!
λ> :! pwd /home/jmo λ> :cd /home/jmo/haskell-projects λ> :! lsemily fractal-sets haskell-fractal life shu-thing web-lesson4
Notice the
:cd command doesn’t need the
:!.
To quit a GHCi session, use
:quit or
:q.
GHCi commandsMuch of this course will be about GHCi commands. List of commands gives an overview of all of them, and several other lessons elaborate on specific commands of particular importance. all start with a colon (except
import). They may all be abbreviated to just their first letter; however, if there is more than one command that starts with the same letter, such as
:main and
:module, it will default to reading that as whichever is more commonly used. When in doubt, type it out.
You can type
:? for a complete listing of the GHCi commands.
What is
it
GHCi assigns the name
it to the last-evaluated expression. If you aren’t using
:set +tWe will see
:set +t and other uses of
:set later in the page on the GHCi :set command. to automatically display types for expressions entered into or evaluated in GHCi, then you might not notice
it until you see an error message that mentions
it such as this one:
λ> max 5 _ <interactive>:76:7: error: • Found hole: _ :: a Where: ‘a’ is a rigid type variable bound by the inferred type of it :: (Ord a, Num a) => a ----------------------------------- ^^ at <interactive>:76:1-7 • In the second argument of ‘max’, namely ‘_’ In the expression: max 5 _ In an equation for ‘it’: it = max 5 _------------------------- ^^
It isn’t always important or useful to recognize that GHCi has named the expression, but there’s at least one interesting thing about
it that you may find useful. Here’s a clue:
λ> max 5 <interactive>:75:1: error: • No instance for (Show (Integer -> Integer)) arising from a use of ‘print’ (maybe you haven't applied a function to enough arguments?) • In a stmt of an interactive GHCi command: print it---------------------------------------------- ^^^^^ ^^
GHCi is always implicitly running the
it. But it’s not only GHCi that can pass
it as an argument to functions – you can, too!
λ> sum [1..500] 125250 λ> it / 15 8350.0 λ> it * 216700.0 | https://typeclasses.com/ghci/intro | CC-MAIN-2021-43 | refinedweb | 1,317 | 67.28 |
RadialBlur QML Type
Applies directional blur in a circular direction around the items center point. More...
Properties
- angle : real
- cached : bool
- horizontalOffset : real
- samples : int
- source : variant
- transparentBorder : bool
- verticalOffset : real
Detailed Description
Effect creates perceived impression that the source item appears to be rotating to the direction of the blur.
Other available motionblur effects are ZoomBlur and DirectionalBlur.
Example Usage
The following example shows how to apply the effect.
import QtQuick 2.0 import QtGraphicalEffects 1.0 Item { width: 300 height: 300 Image { id: bug source: "images/bug.jpg" sourceSize: Qt.size(parent.width, parent.height) smooth: true visible: false } RadialBlur { anchors.fill: bug source: bug samples: 24 angle: 30 } }
Property Documentation
This property defines the direction for the blur and at the same time the level of blurring. The larger the angle, the more the result becomes blurred. The quality of the blur depends on samples property. If angle value is large, more samples are needed to keep the visual quality at high level.
Allowed values are between 0.0 and 360.
These properties define the offset in pixels for the perceived center point of the rotation.
Allowed values are between -inf and inf. By default these properties are set to
0..
These properties define the offset in pixels for the perceived center point of the rotation.
Allowed values are between -inf and inf. By default these properties are set. | http://doc.qt.io/qt-5/qml-qtgraphicaleffects-radialblur.html | CC-MAIN-2017-04 | refinedweb | 232 | 52.26 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Hi, This is the failure of gcc.c-torture/compile/930621-1.c on i386 at -O3, a regression present on 3.4 branch and mainline. The compiler aborts on the sanity check in get_loop_body that tests whether the number of BBs in the the loop counted through backward depth-first search from the latch is equal to the number of BBs previously recorded. The difference is 1, the former method giving the lower (correct) result. This means that the BB count was not correctly updated at some point. The big loop in the testcase gets unswitched 3 times. Then, in the process of unswitching it a 4th time, the compiler remarks that one of the edges of the branch it is going to split is always executed. So it simply calls remove_path on the other edge. Remove_path does it job and eventually calls fix_bb_placements on the BB source of the removed edge to fix up its placement in the loop tree. But fix_bb_placements works only locally, by recursively propagating the changes to the predecessors of the BB if necessary. In the present case, there is nothing to fix for the BB so fix_bb_placements does nothing. Then fix_loop_placements is called to fix up the placement of the loop to which BB belongs (the base loop) and its parents. Removing the edge has introduced a global modification in the CFG: the base loop is not the child of its parent loop anymore. So fix_loop_placement detects it and reparents the base loop, executing for (act = loop->outer; act != father; act = act->outer) act->num_nodes -= loop->num_nodes; to update the BB count of its former parents. The problem is that the formula doesn't take into account the (empty) preheader introduced by unswitch_loop: it is not counted in loop->num_nodes so is not removed from the parent's BB count. But it is of course not backward reachable anymore from the parent loop's latch, so it is not counted in get_loop_body. 
Patching fix_loop_placement to include the preheader (if any) in the count cures the ICE in get_loop_body, but only to stumble upon another ICE in verify_loop_structure: the preheader is not counted as belonging to the right loop. Therefore I think the problem is that fix_bb_placements can't fix up the global changes introduced by remove_path in the CFG. Since visiting every BB would be wasteful, I think a good fix is to use the information returned by fix_loop_placement to call fix_bb_placement on the preheader of loops that have been reparented because of the changes. Bootstrapped/regtested on i586-redhat-linux (3.4 branch). OK for mainline and 3.4 branch? 2004-03-07 Eric Botcazou <ebotcazou@libertysurf.fr> PR optimization/13985 * cfgloopmanip.c (fix_loop_placements): New prototype. Call fix_bb_placements on the preheader of loops that have been reparented. (remove_path): Adjust call to fix_loop_placements. 2004-03-07 Eric Botcazou <ebotcazou@libertysurf.fr> * gcc.dg/loop-3.c: New test. -- Eric Botcazou
/* PR optimization/13985 */ /* Copied from gcc.c-torture/compile/930621-1.c */ /* { dg-do compile } */ /* { dg-options "-O3" } */ /* { dg-options "-O3 -mtune=i386" { target i?86-*-* x86_64-*-* } } */ #if defined(STACK_SIZE) && (STACK_SIZE < 65536) # define BYTEMEM_SIZE 10000L #endif #ifndef BYTEMEM_SIZE # define BYTEMEM_SIZE 45000L #endif int bytestart[5000 + 1]; unsigned char modtext[400 + 1]; unsigned char bytemem[2][BYTEMEM_SIZE + 1]; long modlookup (int l) { signed char c; long j; long k; signed char w; long p; while (p != 0) { while ((k < bytestart[p + 2]) && (j <= l) && (modtext[j] == bytemem[w][k])) { k = k + 1; j = j + 1; } if (k == bytestart[p + 2]) if (j > l) c = 1; else c = 4; else if (j > l) c = 3; else if (modtext[j] < bytemem[w][k]) c = 0; else c = 2; } }
Index: cfgloopmanip.c =================================================================== RCS file: /cvs/gcc/gcc/gcc/cfgloopmanip.c,v retrieving revision 1.19 diff -u -p -r1.19 cfgloopmanip.c --- cfgloopmanip.c 30 Dec 2003 10:40:51 -0000 1.19 +++ cfgloopmanip.c 7 Mar 2004 09:48:24 -0000 @@ -41,7 +41,7 @@ static bool rpe_enum_p (basic_block, voi static int find_path (edge, basic_block **); static bool alp_enum_p (basic_block, void *); static void add_loop (struct loops *, struct loop *); -static void fix_loop_placements (struct loop *); +static void fix_loop_placements (struct loops *, struct loop *); static bool fix_bb_placement (struct loops *, basic_block); static void fix_bb_placements (struct loops *, basic_block); static void place_new_loop (struct loops *, struct loop *); @@ -417,7 +417,7 @@ remove_path (struct loops *loops, edge e /* Fix placements of basic blocks inside loops and the placement of loops in the loop tree. */ fix_bb_placements (loops, from); - fix_loop_placements (from->loop_father); + fix_loop_placements (loops, from->loop_father); return true; } @@ -668,7 +668,7 @@ fix_loop_placement (struct loop *loop) It is used in case when we removed some edges coming out of LOOP, which may cause the right placement of LOOP inside loop tree to change. */ static void -fix_loop_placements (struct loop *loop) +fix_loop_placements (struct loops *loops, struct loop *loop) { struct loop *outer; @@ -677,6 +677,10 @@ fix_loop_placements (struct loop *loop) outer = loop->outer; if (!fix_loop_placement (loop)) break; + /* Changing the placement of a loop in the loop tree may have an + effect on its preheader with regard to the condition stated in + the description of fix_bb_placement. */ + fix_bb_placements (loops, loop_preheader_edge (loop)->src); loop = outer; } } | https://gcc.gnu.org/legacy-ml/gcc-patches/2004-03/msg00672.html | CC-MAIN-2022-33 | refinedweb | 869 | 62.48 |
C# has a language feature called 'delegates' which makes it easy to detach the originator of an event from the ultimate handler. They perform essentially the same role as function pointers in C and pointers to member functions in C++, but they are much more flexible. In particular, they can be used to point to any function on any object, as long as the function has the appropriate signature.
This article explains my approach for providing the same functionality using only standard C++. There are many worthy alternatives which you can find easily by Googling with "C++ delegates". The focus of my effort was to obtain a syntax very similar to that in Managed C++ and C#.
If you already know everything you want to know about delegates, please skip this section.
Delegates are not a new idea. Borland's Delphi and C++ Builder products have used them from the outset to support the Visual Component Library, though they are called 'method pointers' in Delphi, and 'closures' in Builder (they are the same thing as far as I know). Basically a closure is an OO function pointer. Internally it simply holds the address of the function to be called plus the address of the object on which it is being called (i.e. the hidden 'this' parameter which is passed to the function).
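That "address of the function plus address of the object" pair can be expressed in standard C++ using a well-known technique: store a type-erased object pointer together with a static 'stub' function, generated by a template, that casts the pointer back and makes the member call. The sketch below is illustrative only — the class and member names are invented here, and this is not necessarily the approach used later in the article:

```cpp
#include <cassert>

// A minimal single-cast delegate for handlers of the form void T::f(int).
class Delegate {
public:
    Delegate() : object_(0), stub_(0) {}

    // Bind to a member function of any class T with a matching signature.
    template <class T, void (T::*Method)(int)>
    static Delegate bind(T* object) {
        Delegate d;
        d.object_ = object;
        d.stub_ = &method_stub<T, Method>;
        return d;
    }

    // Invoking the delegate: pass the stored 'this' and call the function.
    void operator()(int arg) const {
        if (stub_) stub_(object_, arg);
    }

private:
    typedef void (*Stub)(void* object, int arg);

    // One stub is instantiated per (class, member) pair; it restores the
    // real type and performs the member call.
    template <class T, void (T::*Method)(int)>
    static void method_stub(void* object, int arg) {
        T* p = static_cast<T*>(object);
        (p->*Method)(arg);
    }

    void* object_;  // the hidden 'this' parameter
    Stub stub_;     // the function to call
};

struct Counter {
    int total;
    Counter() : total(0) {}
    void add(int n) { total += n; }
};
```

The point of the stub is that `Delegate` itself mentions no class names at all, so any object can be the target — exactly the decoupling the closure provides.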
The important point is this: being agnostic about which other class is going to handle the events a class generates is a key factor in making Delphi's wonderful component-based visual development framework possible. Reduced coupling between classes is a Good Thing.
Now C#-style delegates provide the same service to .NET languages but, sadly, Standard C++ does not have them. Pointers to member functions are very restricted in comparison, though they have been used in conjunction with macros in libraries such as Borland's OWL in the past, and (I think) with Trolltech's Qt today. And that's why I'm writing this article.
With the .NET delegates, you can even attach several handlers to a single event. They will all be called (sequentially) when the delegate is invoked. This means you can safely attach your handler to an event without breaking someone else's connection. I haven't used this feature yet, but I recognize the potential. However, I do question whether such a feature can be implemented efficiently. Invoking Borland's single-cast closures boils down to a couple of opcodes which push 'this' and call the function, so they are powerful but still cheap to use. [It's a pity they're not in the Standard...] But once you start maintaining a dynamic collection of targets, life gets more complicated. Ideally, .NET should have really efficient single-cast delegates, and implement multicast delegates in terms of those - best of both worlds. For all I know, that's what it does.
.NET distinguishes between 'delegates' and 'events'. A delegate is a glorified function pointer, as I have said; an event is an application of a delegate - a member of a class to which you can assign the addresses of handlers which will be called when the class invokes the delegate. I admit to finding the distinction unhelpful - how else would delegates be used, anyway? I sometimes use the terms interchangeably.
C# and Managed C++ have a rather tidy syntax for assigning handlers to a delegate/event:
mnuPopup->Popup += new System::EventHandler(this, mnuPopup_Popup);
When the context menu, mnuPopup, is displayed by the user, it will invoke its Popup event/delegate. This will then call the handler I have implemented in my form, mnuPopup_Popup. I have tried to preserve something like this syntax in my code.
mnuPopup
mnuPopup_Popup
Enough waffling!! Let's talk about the code. If you couldn't care less about how it works, just skip this section and head straight for 'Using the code'
The design is intended to satisfy the following constraints:
There are four parts to my solution:
operator()
The base class looks like this (Return and Arg1 are types from the outer class):
class Base
{
public:
virtual ~Base() { }
virtual Return operator()(Arg1) = 0;
};
The derived class for non-static functions looks like this (Return and Arg1 are types from the outer class):
template <typename Class>
class T : public Base
{
// Signature applied to a pointer-to-member for target class.
typedef Return (Class::*Func)(Arg1);
private:
Class* mThis; // Pointer to the object we are delegating to.
Func mFunc; // Address of the function on the delegate object.
public:
T(Class* aThis, Func aFunc) : mThis(aThis), mFunc(aFunc) { }
virtual Return operator()(Arg1 arg1)
{
return (mThis->*mFunc)(arg1);
}
};
The derived class for static and free functions looks like this (Return and Arg1 are types from the outer class):
class S : public Base
{
typedef Return (*Func)(Arg1);
private:
Func mFunc;
public:
S(Func aFunc) : mFunc(aFunc) { }
virtual Return operator()(Arg1 arg1)
{
return mFunc(arg1);
}
};
The outer class looks like this (with many details missing):
template <typename Return, typename Arg1>
class Event
{
private:
std::vector<Base*> mPtrs;
class Base { ... };
public:
template <typename Class>
class T : public Base { ... }; // Non-static
class S : public Base { ... }; // Static
// Add a new target (callee) to our list.
Event& operator+=(Base* aPtr)
{
mPtrs.push_back(aPtr);
return *this;
}
// Call all the targets - there will be horrible undefined behaviour
// if the callee object no longer exists. You have been warned!
Return operator()(Arg1 arg1)
{
// There are problems here:
// 1. Which result should the multicast return?
// For now I say the last called item.
// 2. We need to can't store a temporary when Return is void.
typename std::vector<Base*>::iterator end = mPtrs.end();
for (typename std::vector<Base*>::iterator i = mPtrs.begin();
i != end; ++i)
{
// Probably a specialisation for Return == void would be better.
if ((i + 1) == end)
return (*(*i))(arg1);
else
(*(*i))(arg1);
}
}
};
There is still some work to do. I want to make it safe to copy these objects, and I need to come up with something for multicast delegates with a signature that returns are value. Perhaps I could return a vector of results. I also need to duplicate the template to cope with two or more arguments to the signature.
No doubt this implementation is very slow compared to function pointers but the usual application of events is GUI work, so speed is not so critical. I'd be very interested to see what the underlying implementation of .NET delegates and events looks like...
It seems that my Event class would be most useful as a public member of any class wishing to expose an event. This breaks encapsulation, but I think the syntax for adding targets would get pretty hairy otherwise. We want to prevent clients of a class using Event from invoking operator() on it. Perhaps a simple adapter would do the job - it would be a public member passing calls on to a private Event member, but not exposing operator().
To try this out, download the demo code. Everything is contained in a single file to make life easy. Just compile it and run it from the command prompt. This code was developed on g++ 3.2. I'm keen to know which other compilers like it, and which ones don't, so please let me know.
struct TShapes
{
virtual void Square(int i)
{ std::cout << "TShapes::Square: " << i << std::endl; }
void Triangle(int i)
{ std::cout << "TShapes::Triangle: " << i << std::endl; }
};
struct TDerivedShapes : TShapes
{
virtual void Square(int i)
{ std::cout << "TDerivedShapes::Square: " << i << std::endl; }
};
struct TThings
{
void Thing1(int i)
{ std::cout << "TThings::Thing1: " << i << std::endl; }
static void Thing2(int i)
{ std::cout << "TThings::Thing2: " << i << std::endl; }
};
void Free(int i)
{ std::cout << "Free: " << i << std::endl; }
int main()
{
// As usual, a typedef makes life easier when using templates.
typedef Event<void, int> MyEvent;
MyEvent event;
TShapes shapes;
TDerivedShapes shapes2;
TThings things;
// These items get the ball roling.
event += new MyEvent::T<TShapes>(&shapes, &TShapes::Square);
event += new MyEvent::T<TShapes>(&shapes, &TShapes::Triangle);
// This item shows that virtual functions are handled correctly.
event += new MyEvent::T<TShapes>((TShapes*)&shapes2, &TShapes::Square);
// This item shows that inherited functions are handled correctly.
event += new MyEvent::T<TDerivedShapes>(
&shapes2, &TDerivedShapes::Triangle);
// This item shows that the Event object
// can hold a truly heterogeneous
// collection of targets.
event += new MyEvent::T<TThings>(&things, &TThings::Thing1);
// This item shows that static functions are handled correctly.
event += new MyEvent::S(&TThings::Thing2);
// This item shows that free functions are handled correctly.
event += new MyEvent::S(&Free);
// Invoke the multicast event
std::cout << "<multicast>" << std::endl;
event(100);
std::cout << "</multicast>" << std::endl;
return 0;
}
This is what you should see when you run the program:
<multicast>
TShapes::Square: 100
TShapes::Triangle: 100
TDerivedShapes::Square: 100
TShapes::Triangle: 100
TThings::Thing1: 100
TThings::Thing2: 100
Free: 100
</multic | http://www.codeproject.com/Articles/6197/Emulating-C-delegates-in-Standard-C?msg=1509958 | CC-MAIN-2013-20 | refinedweb | 1,463 | 62.48 |
This java program finds the smallest of three numbers using ternary operator. Lets see what is a ternary operator:
This operator evaluates a boolean expression and assign the value based on the result.
variable num1 = (expression) ? value if true : value if false
If the expression results true then the first value before the colon (:) is assigned to the variable num1 else the second value is assigned to the num1.
Example: Program to find the smallest of three numbers using ternary operator
We have used ternary operator twice to get the final output because we have done the comparison in two steps:
First Step: Compared the num1 and num2 and stored the smallest of these two into a temporary variable temp.
Second Step: Compared the num3 and temp to get the smallest of three.
If you want, you can do that in a single statement like this:
result = num3 < (num1 < num2 ? num1:num2) ? num3:(num1 < num2 ? num1:num2);
Here is the complete program:
import java.util.Scanner; public class JavaExample { public static void main(String[] args) { int num1, num2, num3, result, temp; /* Scanner is used for getting user input. * The nextInt() method of scanner reads the * integer entered by user. */ Scanner scanner = new Scanner(System.in); System.out.println("Enter First Number:"); num1 = scanner.nextInt(); System.out.println("Enter Second Number:"); num2 = scanner.nextInt(); System.out.println("Enter Third Number:"); num3 = scanner.nextInt(); scanner.close(); /* In first step we are comparing only num1 and * num2 and storing the smallest number into the * temp variable and then comparing the temp and * num3 to get final result. */ temp = num1 < num2 ? num1:num2; result = num3 < temp ? num3:temp; System.out.println("Smallest Number is:"+result); } }
Output:
Enter First Number: 67 Enter Second Number: 7 Enter Third Number: 9 Smallest Number is:7 | https://beginnersbook.com/2017/09/java-program-to-find-the-smallest-of-three-numbers-using-ternary-operator/ | CC-MAIN-2018-05 | refinedweb | 298 | 56.25 |
So I have a pprogram, and everything is ok except that the lists should be printed without the square brackets included.
so my program asks people what shape they are interested in, then asks them what lengths they want each side of the shapes, then it calculates the volume of each of the three shapes. I have a def function for lengths of different shapes, which inputs them into a list, so for example:
This is an example of what it looks like:
I have them in a list like
cubeVolumes = []
def calcCubeVolumes(length):
volume1 = int(length) ** 3
cubeVolumes.append(volume1)
return volume1
print(*cubeVolumes, sep=', ')
This is the same as passing the elements of the list into print as arguments.
sep is the seperator character
print(54, 32, 12, sep=', ')
To do it with some other text:
print("Cube Volumes: ", end='') #No line break after this print print(*cubeVolumes, sep=', ')
Ouput:
Cube Volumes: 54, 32, 12 | https://codedump.io/share/x2tsZkmRJCTI/1/i-was-just-wondering-how-can-i-print-a-list-in-python-using-def-more-info-on-description | CC-MAIN-2017-51 | refinedweb | 157 | 58.15 |
Steffen, Please find answers in-line. On 6/27/07, Steffen Grunewald <address@hidden > wrote:
I'm looking for a Howto document (or two): How to convert a simple pool of storage servers into a glusterfs pool?
(Suppose you've got a collection of data, which someone has stored on a bunch of storage servers, each with its own RAID. Now we want to unify the namespace, and access all of the files through glusterfs.) How to do that with "poor man's RAID-1" (suppose the storage servers come in pairs, storing the same set of data on each side of the mirror, while the access is read-only[!])? (I *know* that the ideal way would be to start with an empty filesystem, but with more than a dozen TB of data I cannot afford that right now.)
Glusterfs now has self-heal. It does not force users to start with empty filesystems. BTW, what's the state of 1.3.0? I know that there's a glusterfs-1.3.0-pre4.2
in qa-releases/, how does this relate to the current SVN? Would I be better off building from SVN? Is the debian/ tree up-to-date enough to dpkg-buildpackage?
-- Gowda (benki) | http://lists.gnu.org/archive/html/gluster-devel/2007-06/msg00171.html | CC-MAIN-2016-50 | refinedweb | 207 | 82.44 |
In a previous tip, we used a Microsoft utility to enable access to Java objects from a COM-aware development tool. Sun provides a similar tool but you must package everything in a jar file and use the Beans technology. The tool is called packager, written in Java, you execute it from the sun.beans.ole package. The Java Plug-in 1.2 and the JDK1.2 must be installed on the system (for download, see Java Sun Web site).
Let's try it with this simple class :
package JavaCom;
public class JavaBeanSays {
private String _hello = "Hello World!";
public String getHello() {
return _hello ;
}
public void setHello(String s) {
_hello = s;
}
}
NOTE: This is not really a Bean but let's keep it simple!
The next step is to build a manifest file to identify the bean in the jar. Here it is (manifest.txt):
Name: JavaCom/JavaBeanSays
Java-Bean: True
NOTE: If no manifest is present all classes in the jar are treated as beans.
The JavaBeanSays class is in the directory JavaCom, the manifest.txt is the directory under it. From the directory under (the one containing manifest.txt), we built the jar with :
jar cfm JavaCom.jar manifest.txt JavaCom\JavaBeanSays.class
NOTE: You can download my JavaCom.jar if you to proceed more rapidly.
The next step is to run the packager. You run it from the JDK installation directory. If the JDK is installed in c:\dev\java\jdk1.2.1\ for example , you go there. And you start the packager with
bin\java.exe -cp jre\lib\rt.jar;jre\lib\jaws.jar sun.beans.ole.Packager
A wizard is started, you follow the 5 steps to create the "JavaBeans bridge for ActiveX" for the JavabeanSays component.
The first step is to specify where is located the JavaCom.jar file. When selected, the wizard should list the JavaCom.JavaBeanSays bean, press Next. The "ActiveX" name under which the beans will be seen is shown, press Next (in VbScript, the beans suffix must be added to this name).
An output directory is needed, be careful because this directory name will be hard-coded in the generated files (REG and TLB), you need to specify a valid directory name. The packager assume that a subdirectory bin is present with the file beans.ocx in it. You can create it and then copy beans.ocx from the JRE\bin into it or edit the REG file to specify the original JRE\bin and update the registry with the good location.
The Bean is now registered and ready to be used as a COM object.
NOTE: There is a command-line interface available in the packager if you want to bypass the wizard.
To test it, try this VbScript (TestJavaBeansSays.vbs)
' VBSCRIPT connect to a Java Bean
Dim objJava
Set objJava = WScript.CreateObject("JavaBeanSays.Bean")
strFromJava = objJava.getHello
MsgBox strFromJava, _
0, _
"JAVA BEAN OUTPUT"
objJava.setHello("Bonjour le monde!")
strFromJava = objJava.getHello
MsgBox strFromJava, _
0, _
"JAVA BEAN OUTPUT"
You can share your information about this topic using the form below!
Please do not post your questions with this form! Thanks. | http://www.java-tips.org/other-api-tips/jni/wrap-a-java-beans-in-a-com-object-3.html | CC-MAIN-2014-42 | refinedweb | 525 | 60.01 |
This chapter provides a reference to Oracle external datatypes used by OCI applications. It also provides a general discussion of Oracle datatypes, including special datatypes new in the latest Oracle release. The information in this chapter is useful for understanding the conversions between internal and external representations that occur when you transfer data between your program and Oracle, whether you are retrieving query results or supplying input data for INSERT, UPDATE, or DELETE statements.
Inside a database, values are stored in columns in tables. Internally, Oracle represents data in particular formats known as internal datatypes. Examples of internal datatypes include NUMBER, CHAR, and DATE.
In general, OCI applications do not work with internal datatype representations of data. OCI applications work with host language datatypes which are predefined by the language in which they are written. When data is transferred between an OCI client application and a database table, the OCI libraries convert the data between internal datatypes and external datatypes.
External datatypes are host language types that have been defined in the OCI header files. When an OCI application binds input variables, one of the bind parameters is an indication of the external datatype code (or SQLT code) of the variable. Similarly, when output variables are specified in a define call, the external representation of the retrieved data must be specified.
In some cases, external datatypes are similar to internal types. External types provide a convenience for the programmer by making it possible to work with host language types instead of proprietary data formats.
The OCI is capable of performing a wide range of datatype conversions when transferring data between Oracle and an OCI application. There are more OCI external datatypes than Oracle internal datatypes. In some cases a single external type maps to an internal type; in other cases multiple external types map to an single internal type.
The many-to-one mappings for some datatypes provide flexibility for the OCI programmer. For example, if you are processing the SQL statement
SELECT sal FROM emp WHERE empno = :employee_number
and you want the salary to come back as character data, rather than in a binary floating-point format, specify an Oracle external string datatype, such as VARCHAR2 (code = 1) or CHAR (code = 96), for the dty parameter in the OCIDefineByPos() call for the sal column. You also need to declare a string variable in your program and specify its address in the valuep parameter.
If you want the salary information to be returned as a binary floating-point value, however, specify the FLOAT (code = 4) external datatype. You also need to define a variable of the appropriate type for the valuep parameter.
Oracle performs most data conversions transparently. The ability to specify almost any external datatype provides a lot of power for performing specialized tasks. For example, you can input and output DATE values in pure binary format, with no character conversion involved, by using the DATE external datatype (code = 12).

OCI applications can also use Oracle's type management system to represent datatypes of object type attributes. There is a set of predefined constants which can be used to represent these typecodes. The constants each contain the prefix OCI_TYPECODE.
In summary, the OCI programmer must be aware of the following different datatypes or data representations:
Information about a column's internal datatype is conveyed to your application in the form of an internal datatype code. Once your application knows what type of data will be returned, it can make appropriate decisions about how to convert and format the output data. The Oracle internal datatype codes are listed in the section "Internal Datatypes".
An external datatype code indicates to Oracle how a host variable represents data in your program. This determines how the data is converted when returned to output variables in your program, or how it is converted from input (bind) variables to Oracle column values. For example, if you want to convert a NUMBER in an Oracle column to a variable-length character array, you specify the VARCHAR2 external datatype code in the OCIDefineByPos() call that defines the output variable.
To convert a bind variable to a value in an Oracle column, specify the external datatype code that corresponds to the type of the bind variable. For example, if you want to input a character string such as 02-FEB-65 to a DATE column, specify the datatype as a character string and set the length parameter to nine.

It is always the programmer's responsibility to make sure that values are convertible. If you try to insert the string MY BIRTHDAY into a DATE column, you will get an error when you execute the statement.
The following table lists the Oracle internal (also known as built-in) datatypes, along with each type's maximum internal length and datatype code.
You can use five Oracle internal datatypes to specify columns that contain characters or arrays of bytes: CHAR, VARCHAR2, RAW, LONG, and LONG RAW.

CHAR, VARCHAR2, and LONG columns normally hold character data. RAW and LONG RAW hold bytes that are not interpreted as characters, for example, pixel values in a bit-mapped graphics image. Character data can be transformed when passed through a gateway between networks. For example, character data passed between machines using different languages (where single characters may be represented by differing numbers of bytes) can be significantly changed in length. Raw data is never converted in this way.
It is the responsibility of the database designer to choose the appropriate Oracle internal datatype for each column in the table. The OCI programmer must be aware of the many possible ways that character and byte-array data can be represented and converted between variables in the OCI program and Oracle tables.
When an array holds characters, the length parameter for the array in an OCI call is always passed in and returned in bytes, not characters.
The Universal ROWID (UROWID) is a datatype that can store both logical and physical rowids of Oracle tables, and rowids of foreign tables, such as DB2 tables accessed by a gateway.
Table 3-2 lists datatype codes for external datatypes. For each datatype, the table lists the program variable types for C from or to which Oracle internal data is normally converted.
Each of the external datatypes is described below. Datatypes that are new as of release 8.0 or later are described in the section "New External Datatypes".
The following three types are internal to PL/SQL and cannot be returned as values by OCI:
The VARCHAR2 datatype is a variable-length string of characters with a maximum length of 4000 bytes.

The value_sz parameter determines the length in the OCIBindByName() or OCIBindByPos() call.
If the value_sz parameter is greater than zero, Oracle obtains the bind variable value by reading exactly that many bytes, starting at the buffer address in your program. Trailing blanks are stripped, and the resulting value is used in the SQL statement or PL/SQL block. If, in the case of an INSERT statement, the resulting value is longer than the defined length of the database column, the INSERT fails, an error is returned, and the row is not inserted.
When the Oracle internal (column) datatype is NUMBER, input from a character string that contains the character representation of a number is legal. Input character strings are converted to internal numeric format. If the VARCHAR2 string contains an illegal conversion character, Oracle returns an error and the value is not inserted.
You can also request output to a character string from an internal NUMBER datatype. Number conversion follows the conventions established by National Language Support for your system. For example, your system might be configured to recognize a comma rather than a period as the decimal point.

If you specify the NUMBER external datatype, Oracle returns numeric values in its internal format. If you need to know the number of bytes returned, use the VARNUM external datatype instead of NUMBER. See the description of VARNUM for examples of the Oracle internal number format.
The INTEGER datatype converts numbers. An external integer is a signed binary number; the size in bytes is system dependent. The host system architecture determines the order of the bytes in the variable. A length specification is required for input and output. If the number being returned from Oracle is not an integer, the fractional part is discarded, and no error or other indication is returned. If the number to be returned exceeds the capacity of a signed integer for the system, Oracle returns an "overflow on conversion" error.
The FLOAT datatype processes numbers that have fractional parts or that exceed the capacity of an integer. The number is represented in the host system's floating-point format. Normally the length is either four or eight bytes. The length specification is required for both input and output.
The internal format of an Oracle number is decimal, and most floating-point implementations are binary; therefore Oracle can represent numbers with greater precision than floating-point representations.
The null-terminated STRING format behaves like the VARCHAR2 format (datatype code 1), with a few differences. If the length is not specified in the bind call, the OCI uses an implied maximum string length of 4000. The minimum string length is two bytes. If the first character is a null terminator and the length is specified as two, a null is inserted in the column, if permitted. Unlike types 1 and 96, a string containing all blanks is not treated as a null on input; it is inserted as is.
The VARNUM datatype is like the external NUMBER datatype, except that the first byte contains the length of the number representation. This length does not include the length byte itself. Reserve 22 bytes to receive the longest possible VARNUM. Set the length byte when you send a VARNUM value to Oracle.

The following table shows several examples of the VARNUM values returned for numbers in an Oracle table.
The LONG datatype stores character strings longer than 4000 bytes. You can store up to two gigabytes (2^31-1 bytes) in a LONG column. Columns of this type are used only for storage and retrieval of long strings. They cannot be used in functions, expressions, or WHERE clauses. LONG column values are generally converted to and from character strings.
The VARCHAR datatype stores character strings of varying length. The first two bytes contain the length of the character string, and the remaining bytes contain the string. The specified length of the string in a bind or a define call must include the two length bytes, so the largest VARCHAR string that can be received or sent is 65533 bytes long, not 65535. For converting longer strings, use the LONG VARCHAR external datatype.
The DATE datatype can update, insert, or retrieve a date value using the Oracle internal date binary format. A date in binary format contains seven bytes. When you use this external datatype, the database does not do consistency or range checking. All data in this format must be carefully validated before input.
When RAW data in an Oracle table is converted to a character string in a program, the data is represented in hexadecimal character code. Each byte of the RAW data is returned as two characters that indicate the value of the byte, from '00' to 'FF'. If you want to input a character string in your program to a RAW column in an Oracle table, you must code the data in the character string using hexadecimal character codes.
The VARRAW datatype is similar to the RAW datatype. However, the first two bytes contain the length of the data. The specified length of the string in a bind or a define call must include the two length bytes. So the largest VARRAW string that can be received or sent is 65533 bytes long, not 65535. For converting longer strings, use the LONG VARRAW external datatype.
The LONG RAW datatype is similar to the RAW datatype, except that it stores raw data with a length up to two gigabytes (2^31-1 bytes).
The UNSIGNED datatype is used for unsigned binary integers. The size in bytes is system dependent. The host system architecture determines the order of the bytes in a word. A length specification is required for input and output. If the number being output from Oracle is not an integer, the fractional part is discarded, and no error or other indication is returned. If the number to be returned exceeds the capacity of an unsigned integer for the system, Oracle returns an "overflow on conversion" error.
The LONG VARCHAR datatype stores data from and into an Oracle LONG column. The first four bytes of a LONG VARCHAR contain the length of the item. So, the maximum length of a stored item is 2^31-5 bytes.
The LONG VARRAW datatype is used to store data from and into an Oracle LONG RAW column. The length is contained in the first four bytes. The maximum length is 2^31-5 bytes.
The CHAR datatype is a string of characters, with a maximum length of 2000. CHAR strings are compared using blank-padded comparison semantics.

The length is determined by the value_sz parameter in the OCIBindByName() or OCIBindByPos() call. If, in the case of an INSERT statement, the resulting value is longer than the defined length of the database column, the INSERT fails and Oracle does not insert the row.
Negative values for the value_sz parameter are not allowed for CHARs.
When the Oracle internal (column) datatype is NUMBER, input from a character string that contains the character representation of a number is legal. Input character strings are converted to internal numeric format. If the CHAR string contains an illegal conversion character, Oracle returns an error and does not insert the value. Number conversion follows the conventions established by the National Language Support settings for your system.
You can also request output to a character string from an internal NUMBER datatype. Number conversion follows the conventions established by the National Language Support settings for your system. For example, your system might use a comma (,) rather than a period (.) as the decimal point.
The CHARZ external datatype is similar to CHAR, except that the string must be null-terminated on input, and Oracle places a null terminator character at the end of the string on output. The length given in a bind call must include the null terminator. For example, if an array is declared as

char my_num[] = "123.45";

then the length parameter when you bind my_num must be seven. Any other value would return an error for this example.
The following new external datatypes were introduced with or after release 8.0. These datatypes are not supported when you connect to an Oracle release 7 server..
This is a reference to a named data type. The C language representation of a REF is a variable declared to be of type OCIRef *. The SQLT_REF datatype code is used when binding or defining REF variables.

A LOB (large object) can store up to 4 gigabytes of data in the database server. A database table stores a LOB locator which points to the LOB value, which may be in a different storage space.
When an OCI application issues a SQL query which includes a LOB column or attribute in its select-list, fetching the result(s) of the query returns the locator, rather than the actual LOB value. The locator is then used in subsequent OCI LOB operations.
The BFILE datatype provides access to file LOBs that are stored in file systems outside an Oracle database. Oracle currently only supports access to binary files, or BFILEs.

A BFILE column or attribute stores a file LOB locator, which serves as a pointer to a binary file on the server's file system. The locator maintains the directory alias and the filename. BFILE data is read-only and cannot be modified through Oracle. Oracle provides APIs to access file data.
The datatype code available for binding or defining FILEs is SQLT_FILE.

A CLOB (character LOB) stores fixed-width or varying-width character data. CLOBs can store up to 4 gigabytes of character data.
CLOBs have full transactional support; changes made through the OCI participate fully in the transaction. CLOB value manipulations can be committed or rolled back. You cannot save a CLOB locator in a variable in one transaction and then use it in another transaction or session.
An NCLOB is a national character version of a CLOB. It stores fixed-width, single-byte or multibyte national character set (NCHAR) data, or varying-width character set data.
NCLOBs can store up to 4 gigabytes of character text data.

NCLOBs have full transactional support; changes made through the OCI participate fully in the transaction. NCLOB value manipulations can be committed or rolled back. You cannot save an NCLOB locator in a variable in one transaction and then use it in another transaction or session.
You cannot create an object with NCLOB attributes, but you can specify NCLOB parameters in methods.
The datetime and interval datatype descriptors are briefly summarized here.
The ANSI DATE is based on the DATE, but contains no time portion. (Therefore, it also has no time zone.)

The TIMESTAMP WITH TIME ZONE datatype, in which fractional_seconds_precision is optional, has the form:

TIMESTAMP(fractional_seconds_precision) WITH TIME ZONE
When users retrieve TIMESTAMP WITH LOCAL TIME ZONE data, Oracle returns it in the users' local session time zone.

day_precision is the number of digits in the DAY datetime field. It is optional. Accepted values are 0 to 9. The default is 2.
fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. It is optional. Accepted values are 0 to 9. The default is 6.
The OCI supports Oracle-defined C datatypes used to map user-defined datatypes to C representations (e.g. OCINumber, OCIArray). The OCI provides a set of calls to operate on these datatypes, and to use these datatypes in bind and define operations, in conjunction with OCI external datatype codes.
Table 3-5 and Table 3-6 show the supported conversions from internal datatypes to external datatypes, and from external datatypes into internal column representations, for all datatypes available through release 7.3. Information about data conversions for data types newer than release 7.3 is listed here:
LOBs are shown in a separate table that follows, because of the width limitation.
You can also use one of the character data types for the host variable used in a fetch or insert operation from or to a datetime or interval column. Oracle will do the conversion between the character data type and datetime/interval data type for you.
There is a unique typecode associated with each Oracle type, whether scalar, collection, reference, or object type. This typecode identifies the type, and is used by Oracle to manage information about object type attributes. This typecode system is designed to be generic and extensible, and is not tied to a direct one-to-one mapping to Oracle datatypes. Consider the following SQL statements:
CREATE TYPE my_type AS OBJECT ( attr1 NUMBER, attr2 INTEGER, attr3 SMALLINT); CREATE TABLE my_table AS TABLE OF my_type;
These statements create an object type and an object table. When it is created,
my_table will have three columns, all of which are of Oracle
NUMBER type, because
SMALLINT and
INTEGER map internally to
NUMBER. The internal representation of the attributes of
my_type, however, maintains the distinction between the datatypes by means of the typecode associated with each attribute. The typecode is used by some OCI functions, like
OCIObjectNew() (where it helps determine what type of object is created). It is also returned as the value of some attributes when an object is described; e.g., querying the OCI_ATTR_TYPECODE attribute of a type returns an OCITypeCode value.
Table 3-8 lists the possible values for an OCITypeCode. There is a value corresponding to each Oracle datatype.
Oracle recognizes two different sets of datatype code values. One set is distinguished by the
SQLT_ prefix, the other by the
OCI_TYPECODE_ prefix.
The SQLT typecodes are used by the OCI to specify a datatype in a bind or define operation. In this way, the SQLT typecodes help to control data conversions between Oracle and the client application. The OCI_TYPECODE values identify Oracle types in the type system, and several OCI_TYPECODE values can map to a single SQLT code. For example, OCI_TYPECODE_SIGNED16, OCI_TYPECODE_SIGNED32, OCI_TYPECODE_INTEGER, OCI_TYPECODE_OCTET, and OCI_TYPECODE_SMALLINT are all mapped to the SQLT_INT type.
The oratypes.h header defines the portable integer types used by the OCI. A representative excerpt:

    #ifndef lint
    typedef unsigned char  ub1;
    typedef   signed char  sb1;
    typedef unsigned short ub2;
    typedef   signed short sb2;
    typedef unsigned int   ub4;
    typedef   signed int   sb4;
    #else
    #define ub1 unsigned char
    #define sb1 signed char
    #define ub2 unsigned short
    #define sb2 signed short
    #define ub4 unsigned int
    #define sb4 signed int
    #endif

    #define UB1MAXVAL ((ub1)UCHAR_MAX)
    #define SB1MAXVAL ((sb1)SCHAR_MAX)
    #define UB2MAXVAL ((ub2)USHRT_MAX)
    #define SB2MAXVAL ((sb2) SHRT_MAX)
    #define UB4MAXVAL ((ub4)UINT_MAX)
    #define SB4MAXVAL ((sb4) INT_MAX)

    typedef oratext text;
    typedef oratext OraText;
Opened 2 years ago
Last modified 1 day ago
In the current Django setup, it is possible for a signal listener to register itself with the dispatcher (PyDispatcher) more than once.
The result is that the same function can end-up being called twice when a single signal is sent.
class A(models.Model):
a = models.IntegerField(default=1)
def test_function_A(): print "Called listener"
dispatcher.connect(test_function_A, signal=signals.pre_init, sender=A)
>>> from django.dispatch import dispatcher
>>> from django.db.models import signals
>>> from testproj.testapp.models import A
>>> dispatcher.getReceivers(sender=A, signal=signals.pre_init)
[<function test_function_A at 0x1203fb0>]
>>> from testapp.models import A
>>> dispatcher.getReceivers(sender=A, signal=signals.pre_init)
[<function test_function_A at 0x1203fb0>, <function test_function_A at 0x1206a30>]
>>> A()
Called listener
Called listener
<A: A object>
The problem seems to stem from the fact that Django puts itself in Python's path more than once. For example, for the project 'testproj', Python will look in /path/to/project/ and /path/to/project/testproj/. Thus, both testapp and testproj.testapp are valid.
Though these locations are both valid and reference the same content, they are actually treated as being different. Models seem to be treated as singletons, but other objects do not. I assume that this is because of the way in which models are loaded when Django fires-up.
>>> from testproj.testapp.models import A as full_path_A_class
>>> from testapp.models import A as short_path_A_class
>>> id(full_path_A_class), id(short_path_A_class)
(6797536, 6797536)
>>> from testproj.testapp.models import test_function_A as full_path_A_function
>>> from testapp.models import test_function_A as short_path_A_function
>>> id(full_path_A_function), id(short_path_A_function)
(18898800, 18960624)
Note in the last line of output that two different values are returned (18898800 != 18960624). Thus, the dispatcher.connect function has no way to determine that the listeners are the same.
The code responsible for the registration of listeners in PyDispatch can be found at django/dispatch/dispatcher.py:153-170; however, I'm not sure that we should go hacking on the PyDispatcher code. I'm honestly not sure what the solution to this one should be.
The following things should be noted:
The Python path stuff done by manage.py is a bit of a red herring in this case; it's just as possible to trigger this with a relative import. The problem stems from the fact that in the second import statement in your example, Python looks for the key testapp.models in sys.modules and doesn't find it (because the first import added an entry for testproj.testapp.models), so the module gets initialized again.
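The double-initialization described above is easy to reproduce outside Django. Here is a self-contained sketch (the proj/app package names are hypothetical; the package is built in a temporary directory rather than a real project):

```python
# Demonstrate that the same file imported under two different module names is
# initialized twice, because sys.modules is keyed by the import path string.
import importlib
import os
import sys
import tempfile

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "proj", "app"))
open(os.path.join(base, "proj", "__init__.py"), "w").close()
open(os.path.join(base, "proj", "app", "__init__.py"), "w").close()
with open(os.path.join(base, "proj", "app", "models.py"), "w") as f:
    f.write("def listener():\n    pass\n")

sys.path.insert(0, base)                        # makes "proj.app.models" importable
sys.path.insert(0, os.path.join(base, "proj"))  # makes "app.models" importable

m1 = importlib.import_module("proj.app.models")
m2 = importlib.import_module("app.models")

print(m1 is m2)                    # False: two distinct module objects
print(m1.listener is m2.listener)  # False: the "same" function exists twice
```

Since the two function objects have different identities, a dispatcher that deduplicates receivers by identity has no way to tell they are the same listener.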
We hack around this in the model loading system because it's necessary to insure only one copy of each model class, and do it by checking the source filename of the module, which is a bit more reliable (though not completely so -- you can still trip it up with symlinks if you really want to).
I'm also unsure of the best way to get around this -- hacking in PyDispatcher probably won't be that fun.
On 04/06/07, ubernostrum wrote:
it's just as possible to trigger this with a relative import
I thought this at first, as well... and maybe you're right, but I attempted to test this case as well before 'declaring' the Python path to be the cause.
Here's a simple test case using a relative import where it doesn't happen when using the full path and where it does when using the short path.
from models import *
>>> dispatcher.getReceivers(signal=signals.pre_init, sender=A)
[<function test_function_A at 0x12061b0>]
>>> from testproj.testapp.views import *
>>> dispatcher.getReceivers(signal=signals.pre_init, sender=A)
[<function test_function_A at 0x12061b0>]
>>> from testapp.views import *
>>> dispatcher.getReceivers(signal=signals.pre_init, sender=A)
[<function test_function_A at 0x12061b0>, <function test_function_A at 0x1206af0>]
To me, it looks like it's the root of the import tree that matters. If something is referenced as project_name.app_name in an import, any relative imports look like they'll work properly. However, if they're referenced as app_name alone in imports, relative imports seem to cause duplication.
This is at the boundaries of my level of Python sophistication, though... so there might be something I'm overlooking in my analysis.
The simplest way to understand this problem is to realise that it is the import path (the bit after the "from" and before the "import" keywords) that is stored as a key in sys.modules. Different keys in that dictionary imply the classes are considered different by Python.
So we have to re-identify them inside the signal handler code by using external information that might be constant. That is why James is suggesting the source path identification trick we use in loading.py. You could also use things like a hash of the bytecode, but that's a little less stable.
(Last comment was by me.)
We can certainly try to patch it to use a work-around similar to that of loading.py. It seems to me that this problem is bigger than PyDispatcher, though.
My concern is that Django is patching around this in multiple locations and it's indicative of a large problem. If it was confined to Django's internals, that would be OK in my eyes... but the double-loading potentially means twice the memory, violation of singleton patterns, double-registration, etc. That's a serious problem for complex projects/applications.
What do you guys think? Is this something that's OK to keep working around, or should changing behavior be considered?
Well, as far as "double loading", this is technically upstream -- everything we're seeing is a logical consequence of Python's (documented) import behavior, so "changing behavior" would really mean "changing Python", which probably won't get very far ;)
We work around this in specific cases, and we may end up needing to do so for the dispatcher, but in the general case I don't think there's anything we can do, because this is how Python's import behavior is "supposed" to work.
I understand that this is standard Python behavior... it's just that Django encourages a setup where this duplication effect is more likely to happen. That is, putting the same content in Python's path twice.
My conjecture regarding changing behavior was more along the lines of changing user behavior, not that of Python (encouraging a design pattern where only project-level references would be used, for example... though this goes against application portability goals). It's a tricky issue, and it's certainly not the fault of Django or its developers... its just something that we should probably try to make sure its a problem that users know to avoid.
A work-around for the dispatcher does seem to be in order in the short term. It looks like upstream for PyDispatcher has been abandoned, so I guess we're on our own.
If there's a particular approach you're interested in pursuing for the dispatcher, let me know and I can take a crack at coming up with a patch.
Ben, it's standard Python behaviour, so people rapidly become used to it: if you are referencing things by id() or name, use the same import path or find some other way to identify them. This has to be taken into account when pickling as well.
It's not worth making a big deal out of for Django users. We work around it (by using another identification method) in the rare cases we need it to ease the user's burden. You didn't even know it existed in loading.py, showing how little of a burden it puts on your use. We'll do the same thing for signal matching, most likely. It's not something that's going to change very much in upstream Python -- certainly not in Python 2.x, so there's no benefit in looking for any bigger picture. This is just the way life is.
I have just been burned by this, and as a novice django user but a fairly experienced python programmer, I take serious issue with the "people rapidly become used to it" attitude. In fact, the comment by mtredinnick on 4/7 above is more infuriating every time I read it.
In my case, I had the signal registration code in a module along with the model to which the signal was attached. That module was being loaded (in development/"runserver" mode) twice: once apparently for validation and once when the module was loaded by the http subsystem. Finding this bug report at least pointed me in the right direction, and I've "fixed" the problem by moving the registration code elsewhere.
This is clearly not a problem with a trivial fix, but I guarantee that people will get hung up on it. The signals architecture, something which could otherwise be a significant win, is far less useful subject to this sort of hidden breakage.
The dispatch_uid argument in the refactored signals (see #6814) will fix this.
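This is not Django's actual implementation — just a pure-Python sketch of the idea behind dispatch_uid: key receivers by an explicit, caller-chosen identifier instead of by function identity, so re-registration is idempotent even when a module is imported twice.

```python
class Signal:
    def __init__(self):
        self.receivers = {}  # dispatch_uid -> callable

    def connect(self, receiver, dispatch_uid):
        # Connecting twice with the same uid overwrites instead of duplicating.
        self.receivers[dispatch_uid] = receiver

    def send(self, **kwargs):
        return [receiver(**kwargs) for receiver in self.receivers.values()]

sig = Signal()
calls = []

def listener(**kwargs):
    calls.append(1)

# Simulate the double-import scenario: the same logical listener connected twice.
sig.connect(listener, dispatch_uid="testapp.models.listener")
sig.connect(listener, dispatch_uid="testapp.models.listener")
sig.send()
print(len(calls))  # 1
```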
(In [8223]) Major refactoring of django.dispatch with an eye towards speed. The net result is that signals are up to 90% faster.
Though some attempts and backwards-compatibility were made, speed trumped compatibility. Thus, as usual, check BackwardsIncompatibleChanges for the complete list of backwards-incompatible changes.
Thanks to Jeremy Dunck and Keith Busell for the bulk of the work; some ideas from Brian Herring's previous work (refs #4561) were incorporated.
Documentation is, sigh, still forthcoming.
Fixes #6814 and #3951 (with the new dispatch_uid argument to connect).
Reverting spam.
If I have a simple class as below containing a collection where I know that the collection can never have null added to it, is there any way to tell ReSharper this so that I would not get a warning on the indicated line (item may be null)?
At the moment I tend to put a Debug.Assert(item != null), which prevents the warning on debug builds with no release overhead. Other than that I could wrap all the standard collection types with wrapper classes, or use an IEnumerable<T> extension (e.g. foreach (var item in items.WithNoNulls())), but this all seems a bit messy just to remove the ReSharper warnings.
public class MyClass
{
    [NotNull]
    private readonly List<IMapItem> items = new List<IMapItem>();

    public void Add([NotNull] IMapItem item)
    {
        items.Add(item);
    }

    private void DoWord()
    {
        foreach (var item in items)
        {
            string s = item.Name;
        }
    }
}
Hi Martyn,
Actually, if JetBrains.Annotations are referenced to this code (to use ReSharper's [NotNull] attribute), I do not get any warnings on the code you provided.
I'm sorry if I didn't understand you correct. If so, could you please provide some additional details on this situation.
Thanks.
Attachment(s):
screen431.png
Forget to mention that this is with the following resharper option set:
Assume Entity value can be null = When entity does not have an explicit NotNull Attribute.
-
Essentially everything that does not have a [NotNull] attribute is assumed to be [CanBeNull]. I have found this to be beneficial if it is used from the start of a project, as it requires all code to be explicit about exactly where nulls are allowed (requiring the issue to be thought about when writing the code), as well as making the actual warnings more thorough.
Looking for the same thing. Does anyone know if it's possible?
Hello,
I have done some reading around this topic, and what is suggested is simple, but when I apply it it doesn't work. The Bluetooth module flashes a red LED, and does not turn green.
My aim is to send and receive information with the Arduino using the Processing 2.2.1 IDE. I am using an Arduino Uno with a BlueSMiRF Gold attached as follows:
RX -> TX-0
TX -> RX-1
5V -> VCC
GND -> GND
I am unplugging the TX and RX pins during Arduino code upload.
The code below should print "Hello World" to the Processing IDE console after connecting to the bluetooth module on the arduino uno. My Arduino code is as follows:
void setup() {
  // initialize serial communication at 9600 baud rate
  Serial.begin(9600); // opens serial port, sets data rate to 9600 bps
}

void loop() {
  // send "Hello World!" over the serial port
  Serial.println("Hello World");
  // wait 100 milliseconds so we don't drive ourselves crazy
  delay(100);
}
My Processing code is:
import processing.serial.*;

Serial myPort;  // Create object from Serial class
String val;     // Data received from the serial port

void setup() {
  println(Serial.list());
  String portName = Serial.list()[1]; // change the 0 to a 1 or 2 etc. to match your port
  myPort = new Serial(this, portName, 9600);
}

void draw() {
  if (myPort.available() > 0) {         // If data is available,
    val = myPort.readStringUntil('\n'); // read it and store it in val
  }
  println(val); // print it out in the console
}
Troubleshooting done:
1. When connected by USB, the Arduino serial console shows that "Hello World" is printed.
2. I am able to connect and send/receive information using MATLAB, so the BlueSMiRF is not damaged, and the computer is connecting to it fine (no drivers required).
3. The designated COM port "Serial.list()[1]" is correct.
My belief is it is an issue with the Processing IDE code. Please let me know! | https://forum.arduino.cc/t/arduino-bluetooth-processing/300050 | CC-MAIN-2021-31 | refinedweb | 314 | 65.83 |
In this post we’ll implement Reed-Solomon error-correcting codes and use them to play with codes. In our last post we defined Reed-Solomon codes rigorously, but in this post we’ll focus on intuition and code. As usual the code and data used in this post is available on this blog’s Github page.
The main intuition behind Reed-Solomon codes (and basically all the historically major codes) is
Error correction is about adding redundancy, and polynomials are a really efficient way to do that.
Here’s an example of what we’ll do in the post. Say you have a space probe flying past Mars taking photographs like this one
Unfortunately you know that if you send the images back to Earth via radio waves, the signal will get corrupted by cosmic something-or-other and you’ll end up with an image like this.
How can you recover from errors like this? You could do something like repeat each pixel twice in the message so that if one is corrupted the other will get through. But still, every now and then both pixels in a row will be corrupted and it’s twice as inefficient.
The idea of error-correcting codes is to find a way to encode a message so that it adds a lot of redundancy without adding too much extra information to the message. The name of the game is to optimize the tradeoff between how much redundancy you get and how much longer the message needs to be, while still being able to efficiently decode the encoded message.
A solid technique turns out to be: use polynomials. Even though you’d think polynomials are too simple (we teach them starting in the 7th grade these days!) they turn out to have remarkable properties. The most important of which is:
if you give me a bunch of points in the plane with different $x$-coordinates, they uniquely define a polynomial of a certain degree.
This fact is called polynomial interpolation. We used it in a previous post to share secrets, if you’re interested.
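To make the interpolation fact concrete, here's a minimal sketch (not the code used later in the post) that evaluates the unique interpolating polynomial through a set of points, using Lagrange's formula over the rationals:

```python
from fractions import Fraction

def interpolate(points):
    """Return a function evaluating the unique polynomial of degree
    < len(points) passing through the given (x, y) points."""
    def evaluate(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            # Lagrange basis term: yi * prod_{j != i} (x - xj) / (xi - xj)
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return evaluate

# Three points determine a unique quadratic; here it is 1 + 2x^2.
f = interpolate([(0, 1), (1, 3), (2, 9)])
print(f(3))  # 19
```

Any three points of the quadratic's graph would recover the same polynomial, which is exactly the redundancy we'll exploit.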
What makes polynomials great for error correction is that you can take a fixed polynomial (think, the message) and “encode” it as a list of points on that polynomial. If you include enough, then you can get back the original polynomial from the points alone. And the best part, for each two additional points you include above the minimum, you get resilience to one additional error no matter where it happens in the message. Another way to say this is, even if some of the points in your encoded message are wrong (the numbers are modified by an adversary or random noise), as long as there aren’t too many errors there is an algorithm that can recover the errors.
That’s what makes polynomials so much better than the naive idea of repeating every pixel twice: once you allow for three errors you run the risk of losing a pixel, but you had to double your communication costs. With a polynomial-based approach you’d only need to store around six extra pixels worth of data to get resilience to three errors that can happen anywhere. What a bargain!
Here’s the official theorem about Reed-Solomon codes:
Theorem: There is an efficient algorithm which, when given points $(a_1, b_1), \dots, (a_n, b_n)$ with distinct $a_i$, has the following property. If there is a polynomial of degree $d$ that passes through at least $n/2 + d/2$ of the given points, then the algorithm will output the polynomial.
So let’s implement the encoder, decoder, and turn the theorem into code!
Implementing the encoder
The way you write a message of length $k$ as a polynomial is easy. Pick a large prime integer $p$ and from now on we'll do all our arithmetic modulo $p$. Then encode each character $c_i$ in the message as an integer between 0 and $p - 1$ (this is why $p$ needs to be large enough), and the polynomial representing the message is

$m(x) = c_0 + c_1 x + \dots + c_{k-1} x^{k-1}$

If the message has length $k$ then the polynomial will have degree $k - 1$.

Now to encode the message we just pick a bunch of $x$ values and plug them into the polynomial, and record the (input, output) pairs as the encoded message. If we want to make things simple we can just require that you always pick the $x$ values $0, 1, \dots, n$ for some choice of $n$.
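As a concrete sketch of this evaluation step in plain Python (before we bring in the finite field library below), using the same small example that gets worked out at the end of the post:

```python
# A minimal sketch of the encoding step: evaluate the message polynomial
# m(x) = c_0 + c_1 x + ... + c_{k-1} x^{k-1} at x = 0, 1, ..., n-1, all mod p.
def encode(message, n, p):
    def m(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(message)) % p
    return [(x, m(x)) for x in range(n)]

print(encode([2, 3, 2], n=5, p=7))
# [(0, 2), (1, 0), (2, 2), (3, 1), (4, 4)]
```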
A quick skippable side-note: we need $p$ to be prime so that our arithmetic happens in a field. Otherwise, we won't necessarily get unique decoded messages.
Back when we discussed elliptic curve cryptography (ironically sharing an acronym with error correcting codes), we actually wrote a little library that lets us seamlessly represent polynomials with “modular arithmetic coefficients” in Python, which in math jargon is a “finite field.” Rather than reinvent the wheel we’ll just use that code as a black box (full source in the Github repo). Here are some examples of using it.
>>> from finitefield.finitefield import FiniteField
>>> F13 = FiniteField(p=13)
>>> a = F13(7)
>>> a+9
3 (mod 13)
>>> a*a
10 (mod 13)
>>> 1/a
2 (mod 13)
A programming aside: once you construct an instance of your finite field, all arithmetic operations involving instances of that type will automatically lift integers to the appropriate type. Now to make some polynomials:
>>> from finitefield.polynomial import polynomialsOver
>>> F = FiniteField(p=13)
>>> P = polynomialsOver(F)
>>> g = P([1,3,5])
>>> g
1 + 3 t^1 + 5 t^2
>>> g*g
1 + 6 t^1 + 6 t^2 + 4 t^3 + 12 t^4
>>> g(100)
4 (mod 13)
Now to fix an encoding/decoding scheme we'll call $k$ the size of the unencoded message, $n$ the size of the encoded message, and $p$ the modulus, and we'll fix these programmatically when the encoder and decoder are defined so we don't have to keep carrying these data around.
def makeEncoderDecoder(n, k, p):
    Fp = FiniteField(p)
    Poly = polynomialsOver(Fp)

    def encode(message):
        ...

    def decode(encodedMessage):
        ...

    return encode, decode
Encode is the easier of the two.
def encode(message):
    thePoly = Poly(message)
    return [(Fp(i), thePoly(Fp(i))) for i in range(n)]
Technically we could remove the leading Fp(i) from each tuple, since the decoder algorithm can assume we're using the first $n$ integers in order. But we'll leave it in and define the decode function more generically.
After we define how the decoder should work in theory we’ll run through a simple example step by step. Now on to the decoder.
The decoding algorithm, Berlekamp-Welch
There are a lot of different decoding algorithms for various error correcting codes. The one we’ll implement is called the Berlekamp-Welch algorithm, but before we get to it we should mention a much simpler algorithm that will work when there are only a few errors.
To remind us of notation, call $k$ the length of the message, so that $k - 1$ is the degree of the polynomial we used to encode it. And $n$ is the number of points we used in the encoding. Call the encoded message $M$ as it's received (as a list of points, possibly with errors).
In the simple method what you do is just randomly pick $k$ points from $M$, do polynomial interpolation on the chosen points to get some polynomial $g$, and see if $g$ agrees with most of the points in $M$. If there really are few errors, then there's a good chance the randomly chosen points won't have any errors in them and you'll win. If you get unlucky and pick some points with errors, then the $g$ you get won't agree with most of $M$ and you can throw it out and try again. If you get really unlucky and a bad $g$ does agree with most of $M$, then you just run this procedure a few hundred times and take the $g$ you get most often. But again, this only works with a small number of errors and while it could be good enough for many applications, don't bet your first-born child's life on it working. Or even your favorite pencil, for that matter. We're going to implement Berlekamp-Welch so you can win someone else's favorite pencil. You're welcome.
Exercise: Implement the simple decoding algorithm and test it on some data.
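If you want a spoiler, here is one possible sketch of the simple decoder. Instead of random sampling it exhaustively tries every $k$-subset of the received points — fine for tiny examples, exponentially bad in general:

```python
from itertools import combinations

def interp_mod(points, p):
    # Lagrange interpolation mod a prime p; returns coefficients, low degree first.
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]   # numerator polynomial prod_{j != i} (x - xj)
        denom = 1     # prod_{j != i} (xi - xj)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for t, c in enumerate(basis):  # multiply basis by (x - xj)
                new[t] = (new[t] - xj * c) % p
                new[t + 1] = (new[t + 1] + c) % p
            basis = new
            denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, p - 2, p)) % p  # divide by denom via Fermat
        for t, c in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * c) % p
    return coeffs

def simple_decode(received, k, p):
    # Interpolate every k-subset and keep the polynomial agreeing with the
    # received word on the most points.
    def agreement(coeffs):
        return sum(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == y
                   for (x, y) in received)
    return max((interp_mod(list(s), p) for s in combinations(received, k)),
               key=agreement)

received = [(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)]  # one corrupted point
print(simple_decode(received, k=3, p=7))  # [2, 3, 2]
```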
Suppose we are guaranteed that there are exactly $e < \frac{1}{2}(n - k + 1)$ errors in our received message $M = ((a_1, b_1), \dots, (a_n, b_n))$. Call the polynomial that represents the original message $P(x)$. In other words, we have that $P(a_i) = b_i$ for all but $e$ of the points in $M$.
There are two key ingredients in the algorithm. The first is called the error locator polynomial. We'll call this polynomial $E(x)$, and it's just defined by being zero wherever the errors occurred. In symbols, $E(a_i) = 0$ whenever $P(a_i) \neq b_i$. If we knew where the errors occurred, we could write out $E(x)$ explicitly as a product of terms like $(x - a_i)$. And if we knew $E(x)$ we'd also be done, because it would tell us where the errors were and we could do interpolation on all the non-error points in $M$.

So we're going to have to study $E(x)$ indirectly and use it to get $P(x)$. One nice property of $E(x)$ is the following

$b_i E(a_i) = P(a_i) E(a_i),$

which is true for every pair $(a_i, b_i) \in M$. Indeed, when $P(a_i) = b_i$ the two sides are trivially equal, and by definition when $P(a_i) \neq b_i$ then $E(a_i) = 0$ so both sides are zero. Now we can use a technique called linearization. It goes like this. The product $P(x) E(x)$, i.e. the right-hand-side of the above equation, is a polynomial, say $Q(x)$, of larger degree ($e + k - 1$). We get the equation for all $i$:

$b_i E(a_i) = Q(a_i)$

Now $E(x)$, $Q(x)$, and $P(x)$ are all unknown, but it turns out that we can actually find $E$ and $Q$ efficiently. Or rather, we can't guarantee we'll find $E$ and $Q$ exactly, instead we'll find two polynomials that have the same quotient as $Q(x) / E(x) = P(x)$. Here's how that works.
Say we wrote out $E(x)$ as a generic polynomial of degree $e$ and $Q(x)$ as a generic polynomial of degree $e + k - 1$. So their coefficients are unspecified variables. Now we can plug in all the points $(a_i, b_i)$ to the equations $b_i E(a_i) = Q(a_i)$, and this will form a linear system of $n$ equations with $2e + k + 1$ unknowns ($e + 1$ unknowns come from $E(x)$ and $e + k$ come from $Q(x)$).

Now we know that this system has a good solution, because if we take the true error locator polynomial and $Q(x) = P(x) E(x)$ with the true $P(x)$ we win. The worry is that we'll solve this system and get two different polynomials $Q, E$ whose quotient will be something crazy and unrelated to $P(x)$. But as it turns out this will never happen, and any solution will give the quotient $P(x)$. Here's a proof you can skip if you hate proofs.
Proof. Say you have two pairs of solutions to the system, $(Q_1, E_1)$ and $(Q_2, E_2)$, and you want to show that $Q_1 / E_1 = Q_2 / E_2$. Well, they might not be divisible, but we can multiply the previous equation through to get $Q_1 E_2 = Q_2 E_1$. Now we show two polynomials are equal in the same way as always: subtract and show there are too many roots. Define $R(x) = Q_1 E_2 - Q_2 E_1$. The claim is that $R(x)$ has $n$ roots, one for every point $(a_i, b_i)$. Indeed,

$R(a_i) = Q_1(a_i) E_2(a_i) - Q_2(a_i) E_1(a_i) = b_i E_1(a_i) E_2(a_i) - b_i E_2(a_i) E_1(a_i) = 0$

But the degree of $R(x)$ is $2e + k - 1$, which is less than $n$ by the assumption that $e < \frac{1}{2}(n - k + 1)$. So $R(x)$ has too many roots and must be the zero polynomial, and the two quotients are equal.
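As a quick numeric sanity check of the identity $b_i E(a_i) = Q(a_i)$, here is the mod-7 example that gets worked out in detail below, where the single error sits at $a_i = 2$:

```python
p = 7
received = [(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)]  # error at a = 2

E = lambda x: (5 + x) % p                       # true error locator, (x - 2) mod 7
Q = lambda x: (3 + 3*x + 6*x**2 + 2*x**3) % p   # Q = P * E with P(x) = 2 + 3x + 2x^2

for a, b in received:
    print(a, (b * E(a)) % p == Q(a) % p)  # True for every point, error included
```

At the error point both sides are zero (since $E$ vanishes there), and at every clean point both sides equal $b_i E(a_i)$ — which is the whole trick.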
So the core python routine is just two steps: solve the linear equation, and then divide two polynomials. However, it turns out that no python module has any decent support for solving linear systems of equations over finite fields. Luckily, I wrote a linear solver way back when and so we’ll adapt it to our purposes. I’ll leave out the gory details of the solver itself, but you can see them in the source for this post. Here is the code that sets up the system
def solveSystem(encodedMessage):
    for e in range(maxE, 0, -1):
        ENumVars = e + 1
        QNumVars = e + k

        def row(i, a, b):
            return ([b * a**j for j in range(ENumVars)] +
                    [-1 * a**j for j in range(QNumVars)] +
                    [0])  # the "extended" part of the linear system

        # the last row ensures the coefficient of x^e in E(x) is 1
        system = ([row(i, a, b) for (i, (a, b)) in enumerate(encodedMessage)] +
                  [[0] * (ENumVars - 1) + [1] + [0] * QNumVars + [1]])

        solution = someSolution(system, freeVariableValue=1)
        E = Poly([solution[j] for j in range(e + 1)])
        Q = Poly([solution[j] for j in range(e + 1, len(solution))])
        P, remainder = Q.__divmod__(E)
        if remainder == 0:
            return Q, E

    raise Exception("found no divisors!")

def decode(encodedMessage):
    Q, E = solveSystem(encodedMessage)

    P, remainder = Q.__divmod__(E)
    if remainder != 0:
        raise Exception("Q is not divisible by E!")

    return P.coefficients
A simple example
Now let's go through an extended example with small numbers. Let's work modulo 7 and say that our message is

2, 3, 2 (mod 7)

In particular, k = 3 is the length of the message. We'll encode it as a polynomial in the way we described:

P(x) = 2 + 3x + 2x^2

If we pick n = 5, then we will encode the message as a sequence of five points on P(x), namely P(0) through P(4).
[[0, 2], [1, 0], [2, 2], [3, 1], [4, 4]] (mod 7)
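A few lines of Python confirm these evaluations (a quick check, not the post's actual encoder):

```python
p = 7
coeffs = [2, 3, 2]  # message symbols used as coefficients: P(x) = 2 + 3x + 2x^2

def P(a):
    return sum(c * a**i for i, c in enumerate(coeffs)) % p

points = [[a, P(a)] for a in range(5)]
print(points)  # [[0, 2], [1, 0], [2, 2], [3, 1], [4, 4]]
```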
Now let's add a single error. First remember that our theoretical guarantee says that we can correct any number of errors up to (n - k)/2, which in this case is (5 - 3)/2 = 1, so we can definitely correct one error. We'll add 1 to the third point, giving the received corrupted message as
[[0, 2], [1, 0], [2, 3], [3, 1], [4, 4]] (mod 7)
Now we set up the system of equations Q(a_i) = b_i E(a_i) for all points (a_i, b_i) above. We rewrite the equations as b_i E(a_i) - Q(a_i) = 0, and add as the last equation the constraint that the coefficient of x^e in E(x) is 1, so that we get a "generic" error locator polynomial of the right degree. The columns represent the variables, with the last column being the right-hand side of the equality, as is standard for Gaussian elimination.
#  e0 e1 q0 q1 q2 q3
[
  [2, 0, 6, 0, 0, 0, 0],
  [0, 0, 6, 6, 6, 6, 0],
  [3, 6, 6, 5, 3, 6, 0],
  [1, 3, 6, 4, 5, 1, 0],
  [4, 2, 6, 3, 5, 6, 0],
  [0, 1, 0, 0, 0, 0, 1],
]
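This matrix can be generated mechanically from the received points; here is a sketch mirroring the row function from earlier, specialized to e = 1, k = 3, and arithmetic mod 7:

```python
p = 7
received = [(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)]  # the corrupted codeword
e, k = 1, 3
ENumVars, QNumVars = e + 1, e + k

def row(a, b):
    # the equation b*E(a) - Q(a) = 0, written out coefficient by coefficient
    return ([b * a**j % p for j in range(ENumVars)] +
            [-(a**j) % p for j in range(QNumVars)] +
            [0])

system = [row(a, b) for a, b in received]
system.append([0] * (ENumVars - 1) + [1] + [0] * QNumVars + [1])  # coefficient of x^e in E is 1
for r in system:
    print(r)
```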
Then we do row-reduction to get
[
  [1, 0, 0, 0, 0, 0, 5],
  [0, 1, 0, 0, 0, 0, 1],
  [0, 0, 1, 0, 0, 0, 3],
  [0, 0, 0, 1, 0, 0, 3],
  [0, 0, 0, 0, 1, 0, 6],
  [0, 0, 0, 0, 0, 1, 2],
]
And reading off the solution gives E(x) = 5 + x and Q(x) = 3 + 3x + 6x^2 + 2x^3. Note in particular that the E(x) given in this solution is the true error locator polynomial (5 + x = x - 2 mod 7, and the error was at a = 2), but it is not guaranteed to be so! Either way, the quotient of the two polynomials is exactly P(x) = 2 + 3x + 2x^2, which gives back the original message.
There is one catch here: how does one determine the value of e to use in setting up the system of linear equations? It turns out that an upper bound on e will work just fine, so long as the upper bound you use agrees with the theoretical maximum number of errors allowed (see the Singleton bound from last time). The effect of doing this is that the linear system ends up with some number of free variables that you can set to arbitrary values, and these will correspond to additional shared roots of E(x) and Q(x) that cancel out upon dividing.
A larger example
Now it’s time for a sad fact. I tried running Welch-Berlekamp on an encoded version of the following tiny image:
And it didn’t finish after running all night.
Welch-Berlekamp is a slow algorithm for decoding Reed-Solomon codes because it requires one to solve a large system of equations. There's at least one equation for each pixel in a black and white image! To get around this, one typically encodes blocks of pixels together into one message character (since p is much larger than 2, there is lots of space), and apparently one can balance the block size to minimize the number of equations. Finally, a nontrivial inefficiency comes from our implementing everything in Python without optimizations. If we rewrote everything in C++ or Go and fixed the prime modulus, we would likely see reasonable running times. There are also asymptotically much faster methods based on the fast Fourier transform, and in the future we'll try implementing some of these. For the dedicated reader, these are all good follow-up projects.
For now we’ll just demonstrate that it works by running it on a larger sample of text, the introductory paragraphs of To Kill a Mockingbird:
def tkamTest():
   message = '''...'''  # the introductory paragraphs of To Kill a Mockingbird (text elided)
   k = len(message)
   n = len(message) * 2
   p = 2087
   integerMessage = [ord(x) for x in message]

   enc, dec, solveSystem = makeEncoderDecoder(n, k, p)
   print("encoding...")
   encoded = enc(integerMessage)

   e = int(k/2)
   print("corrupting...")
   corrupted = corrupt(encoded[:], e, 0, p)

   print("decoding...")
   Q, E = solveSystem(corrupted)
   P, remainder = Q.__divmod__(E)

   recovered = ''.join([chr(x) for x in P.coefficients])
   print(recovered)
Running this with the unix time utility produces the following:
encoding...
corrupting...
decoding...

real    82m9.813s
user    81m18.891s
sys     0m27.404s
So it finishes in “only” an hour or so.
In any case, the decoding algorithm is an interesting one. In future posts we’ll explore more efficient algorithms and faster implementations.
Until then!
Tim Chase wrote:
> Using the windows versions of the tool osis2mod I have run tests
> making four types of modules (raw, raw + cipher, zipped, zipped + cipher).
> In both cases that the -c switch was used (-c abcd1234efgh5678),
> while the osis2mod program output indicated that the cipher key phase
> was being used, identical modules were produced without any encryption.

Thanks for doing the extra tests! This suggests that either there is a Windows / Linux difference in this area somewhere, or else the SWORD and osis2mod you are using were not compiled with USBINARY defined and so (by design??) do not encrypt at all under any circumstances.

Since the export restrictions are history, we should probably check that the Windows binaries Crosswire publishes do have the encryption code enabled. I know the default in usrinst.sh is to compile with -DUSBINARY on Unix-like platforms, so all my tests based on svn have it defined. I'm not sure what the Windows defaults are... as far as I can tell, lib/vcppmake/vc8/libsword.vcproj does not define USBINARY anywhere.

Hmmm... and I think our debian/rules packaging file also omits it, which would explain why my r2400 osis2mod (installed from a .deb) didn't encrypt, but both the crosswire.org r2337 one (of unknown origin, but presumably compiled by hand) and my r2435 one (compiled by hand from svn using usrinst.sh) did. Of course, I *would* discover this packaging issue literally a few hours after the Ubuntu Karmic Feature Freeze happens (bad timing!).

SUMMARY: Looks like we could use a bit more consistency in our default compiler/linker options :)

Tim, if you are compiling SWORD yourself under Windows, please try defining USBINARY in the appropriate PreprocessorDefinitions line(s) in lib/vcppmake/vc8/libsword.vcproj, then recompile, reinstall, and retest yet again. I think and hope that is the right place to define this for a Windows Visual C++ build.
Chris and the SWORD devs: it might be helpful if an osis2mod (and all the family of *2mod utilities, for that matter) that is compiled without USBINARY defined would output an error message, rather than saying it is encrypting but not actually doing so... how hard would this be to add? Maybe even not offer the -c option if USBINARY is omitted at compile time, and change the help output accordingly? Or, if there is no need at all for USBINARY now anyway, as I rather suspect, then it might be better to just remove the #ifdef USBINARY and the related "#else return b; #endif" from src/modules/common/sapphire.cpp, and so guarantee that every copy of SWORD always has the encryption code compiled into it in future :)

Jonathan
removing files
Discussion in 'C Programming' started by Anthony, Jul 11, 2003.
Face Capture and Face Detection in c# using webcam PART 1
Posted by vivekcek on July 14, 2011
Hi friends, your friend Vivek is here. The past month was very tough, with lots of emotional issues. Each day with .NET gives me new knowledge.
This time I was planning to develop an image-based authentication system. My plan was to implement this system within one day.
1. Capture your face from your webcam.
2. Store that image and your information in a database.
3. Next time you come to the system, it will check your face against the image stored in the database.
In this article I just want to implement the first step, that is, how to capture an image using a webcam. A small image of the application is given below. Hi hi, I haven't put my face on the application, because some of my friends always complain that I am writing this blog to impress girls. I don't want that impression to go to waste, so I put a matchbox in my hand.
OpenCV
OpenCV is a library developed by Intel in C++. OpenCV can do almost all image-processing operations. Initially I planned to use OpenCV, then I changed my mind because OpenCV is C++-oriented. After some Google searching I found Emgu CV, a wrapper for .NET and C#.
1. Capture your face from your webcam.
a. Download Emgu CV and install it.
b. The installation's bin folder contains all the DLLs for our development (C:\Emgu\emgucv-windows-x86 2.2.1.1150\bin).
c. Create a Windows Forms project and add references to Emgu.CV.dll, Emgu.CV.UI.dll and Emgu.Util.dll.
d. Copy all the OpenCV DLLs from Emgu's bin folder (C:\Emgu\emgucv-windows-x86 2.2.1.1150\bin) to your application's bin folder.
e. Add the Emgu controls to your Visual Studio toolbox from Emgu.CV.dll.
f. Put an Emgu ImageBox on your form, in which we will show the captured image.
g. Put two buttons on the form: one to start capturing the image, and a second one to store that image in a database.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.Util;

namespace WebCamCapture
{
    public partial class Form1 : Form
    {
        private Capture _VivekCapTure;
        private bool _captureInProgress;

        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            #region if capture is not created, create it now
            if (_VivekCapTure == null)
            {
                try
                {
                    _VivekCapTure = new Capture();
                }
                catch (NullReferenceException excpt)
                {
                    MessageBox.Show(excpt.Message);
                }
            }
            #endregion

            if (_VivekCapTure != null)
            {
                if (_captureInProgress)
                {
                    // stop the capture
                    // captureButton.Text = "Start Capture";
                    Application.Idle -= ProcessFrame;
                }
                else
                {
                    // start the capture
                    // captureButton.Text = "Stop";
                    Application.Idle += ProcessFrame;
                }
                _captureInProgress = !_captureInProgress;
            }
        }

        private void ProcessFrame(object sender, EventArgs arg)
        {
            Image<Bgr, Byte> frame = _VivekCapTure.QueryFrame();
            captureImageBox.Image = frame;
        }
    }
}
Maulik Dusara said
nice article here.
when will you post part-2
Akash Tripathi said
dude i wanna know the code to store image in database
and which database to use???
sing said
i have complete using ur code but my imagebox show nothing is it need another connection code to my web camera? | https://vivekcek.wordpress.com/2011/07/14/face-capture-and-face-detection-in-c-using-webcam-part-1/ | CC-MAIN-2018-26 | refinedweb | 536 | 62.34 |
Hi guys, I was wondering how in Python you invert this line, as I have indicated below. I have tried all sorts of mathematical functions. I am using the Zelle graphics module. Currently I have produced lines going in one direction; I am trying to produce the same lines again but in the other direction. I am sure it's just a logical solution. Any help would be most appreciated. Thanks.
from graphics import *

def main():
    colour = raw_input("Enter the patch colour: ")
    win = GraphWin("Patch", 200, 200)
    drawPatch(win, 50, 50, colour)

def drawPatch(win, x, y, colour):
    for i in range(5):
        for j in range(5):
            if (i + j) % 2 == 0:
                topLeftX = x + i * 20
                topLeftY = y + j * 20
                topRightX = x + i * 20  # how to invert lines so they cross
                topRightY = y + j * 20
                line1 = Line(Point(topLeftX, topLeftY), Point(topLeftX + 20, topLeftY + 20))
                line1.setFill(colour)
                line1.draw(win)
                # Next lines across
                line2 = Line(Point(topRightX, topRightY), Point(topRightX + 20, topRightY + 20))
                line2.setFill(colour)
                line2.draw(win)

main()
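One way to get the crossing lines (a sketch of the geometry only): the existing line runs from the cell's top-left corner to its bottom-right corner, so the mirrored line must run from the top-right corner to the bottom-left corner. Computing both pairs of endpoints makes the flip obvious; in the Zelle library you would then pass each pair to Line(Point(...), Point(...)):

```python
def diagonals(cx, cy, size=20):
    """Endpoints of both diagonals of a size-by-size cell whose top-left is (cx, cy)."""
    down = ((cx, cy), (cx + size, cy + size))  # "\" diagonal: top-left -> bottom-right
    up = ((cx + size, cy), (cx, cy + size))    # "/" diagonal: top-right -> bottom-left
    return down, up

down, up = diagonals(50, 50)
print(down)  # ((50, 50), (70, 70))
print(up)    # ((70, 50), (50, 70))
```

So in drawPatch, the second line would start at (topRightX + 20, topRightY) and end at (topRightX, topRightY + 20).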
import "github.com/mitchellh/go-homedir"
DisableCache will disable caching of the home directory. Caching is enabled by default.
Dir returns the home directory for the executing user.
This uses an OS-specific method for discovering the home directory. An error is returned if a home directory cannot be detected.
Expand expands the path to include the home directory if the path is prefixed with `~`. If it isn't prefixed with `~`, the path is returned as-is.
Reset clears the cache, forcing the next call to Dir to re-detect the home directory. This generally never has to be called, but can be useful in tests if you're modifying the home directory via the HOME env var or something.
Package homedir imports 9 packages and is imported by 2362 packages. Updated 2019-06-11.
#include "Box.H"
#include "ProblemDomain.H"
#include "NamespaceHeader.H"
#include "NamespaceFooter.H"
Go to the source code of this file.
Divide a box into its centered-difference box and its one-sided boxes, and tell whether the one-sided boxes are there. The one-sided boxes are one wide at most. All boxes are intersected with the domain. a_inBox should be the size of the domain of computation (the size of the gradient box).
Divide a box, a_inBox, into a box where centered differences can be used, a_centerBox, and boxes where one-sided difference can be used, a_loBox and a_hiBox based on the current problem domain, a_domain, and the difference direction, a_dir. The union of these computation boxes are returned as a_entireBox. The one-sided difference boxes are one wide at most and if they have been defined then the corresponding flag, a_hasLo or a_hasHi, is set to one, otherwise it is zero. All boxes lie within the domain, a_domain.
This function is used when, in direction a_dir, a 2-point stencil of cell-centered data is being used to compute something at the cell face between the cell centers of the stencil. The data for the stencil is valid in a_inBox. It uses a_inBox to compute a box (face-centered in a_dir) where the full stencil can be used, a_centerBox, and boxes (face-centered in a_dir) where a 1-point stencil can be used, a_loBox and a_hiBox, based on the current problem domain, a_domain, and the stencil direction, a_dir. The union of these 1- and 2-point stencil boxes is returned as a_entireBox (face-centered in a_dir). The 1-point stencil boxes are one wide, at most, and if they have been defined then the corresponding flag, a_hasLo or a_hasHi, is set to one; otherwise these flags are zero. All output boxes lie within the domain.
A.
If you're feeling lucky, enter the Activetuts+ competition to win one of 3 signed copies! (Of course, you can always purchase a copy..)
Introduction
With the new TextLayoutFramework (TLF), text is found in these things called containers. They either can be physically drawn on the stage using the Text tool and given an instance name or, as is more common, can be created at runtime. You also know that the text can be formatted and manipulated using the Properties panel. The neat thing here is the word properties. If there is a property in the panel, its counterpart is found in ActionScript. The bad news is, ActionScript is stone, cold stupid. It doesn't have a clue, for example, what a container is until you tell it to create one. It won't format text until you tell it what to do. It won't even put the text on the stage until it is told to do so.
Most projects will start with you telling Flash to create a
Configuration() object, which is used to tell Flash there is a container on the stage and how to manage the Text Layout Framework for the stuff in the container. The actual appearance is handled by the
TextFlow() class, which takes its orders, so to speak, from the
Configuration() object.
Naturally, being stupid, the
Configuration() object needs to be told exactly how to manage the text in the container. The default format is set through a property of the Configuration class called
textFlowInitialFormat. To change it, you simply use the
TextlayoutFormat () class to set the fonts, colors, alignment, and so on, and then tell the boss—
Configuration ()—that its
textFlowInitialFormathas changed to the ones you set using
TextLayoutFormat().The boss will get that, but he isn't terribly bright, so you next need to tell him to hand the actual work to another member of the management team, the
TextFlow() class. This class has overall responsibility for any words in a container. Being just as dim as the boss,
TextFlow() needs to be told what a paragraph is (ParagraphElement), how wide the paragraph is (SpanElement), whether any graphics are embedded in the paragraph (InLineGraphicElement), whether any of the text contains links (Link Element), and so on. Not only that, but it needs to be told what text is being added to the container so it can handle the line length and to add any children (addChild) that contain that formatting so the user can actually see it.
The
TextFlow() class, again not being too terribly bright, will then hand the job over to another member of the management team, the
IFlowComposer() class, whose only job is to manage the layout and display of the text flow within or among the containers. The flow composer finishes the process by deciding how much text goes into a container and then adds the lines of text to the sprite. This is accomplished through the use of the
addController() method, which creates a
ContainerController() object whose parameters identify the container and its properties.
The usual last step is to tell the FlowComposer to update the controllers and put the text on the stage according to how the other members of the team have told the Configuration() object how their piece of the project is to be managed.
With this information in hand, let's move on to working with TLF in ActionScript. We're going to create a column of text with ActionScript.
Step 1: New Document
Open a new Flash ActionScript 3.0 document, rename Layer 1 to actions, select the first frame of the actions layer, and open the Actions panel.
Step 2: ActionScript
Click once in the Script pane, and enter the following:
var myDummyText:String = "The introduction of the Adobe CS5 product line puts some powerful typographic tools in your hands—notably, a new API (Application Programming Interface) called Type Layout Framework (TLF)—and with as more tools in the Adobe line up nudge closer to a confluence point with Flash, the field of typographic motion graphics on the Web is about to move into territory that has yet to be explored. To start that exploration, you need understand what type is in Flash and, just as importantly, what you can do with it to honor the communication messengers of your content.";
You need some text to add to the stage. This string is the third paragraph of this chapter. Now that you have the text to go into the container, you need to load the class that will manage it.
Step 3: Configuration()
Press the Enter (Windows) or Return (Mac) key, and add the following line of code:
var config:Configuration = new Configuration();
As you may have noticed, as soon as you created the Configuration() object, Flash imported the class—
flashx.textLayout.elements.Configuration —whose primary task is to control how TLF behaves. The next code block tells TLF how the text will appear on the stage.
Step 4: TextLayoutFormat Class
Press the Enter (Windows) or Return (Mac) key twice, and enter the following:
var charFormat:TextLayoutFormat = new TextLayoutFormat();
charFormat.fontFamily = "Arial, Helvetica, _sans";
charFormat.fontSize = 14;
charFormat.color = 0x000000;
charFormat.textAlign = TextAlign.LEFT;
charFormat.paddingLeft = 100;
charFormat.paddingTop = 100;
The TextLayoutFormat class, as we said earlier, is how the text in a container is formatted. The properties in this class affect the format and style of the text in a container, a paragraph, or even a single line of text. In this case, we are telling Flash which fonts to use, the size, the color, how it is to be aligned (note the uppercase used for the alignment), and the padding that moves it off the edges of the container.
Before you move on, we need you to do something. There is a coding issue. Scroll up to the import statements. If you see this line—import flashx.textLayout.elements.TextAlign;—proceed to the next code block. If you don't, delete this line in the code block just entered: charFormat.textAlign = TextAlign.LEFT;. Reenter charFormat.textAlign =. Type in the first two letters of the class (Te), press Ctrl+spacebar, and the code hint should appear. Find TextAlign, and double-click it. This should add the missing import statement. To preserve your sanity, we will be providing a list of the import statements that should appear at the end of each exercise. We strongly suggest that you compare your list of import statements against the list presented and, if you are missing any, add them into your code.
Now that you know how the text will be formatted, you need to tell the Configuration() object to use the formatting. If you don't, it will apply whatever default setting it chooses.
Step 5: textFlowInitialFormat
Press the Enter (Windows) or Return (Mac) key twice, and enter the following:
config.textFlowInitialFormat = charFormat;
Step 6: TextFlow ()
Press the Enter (Windows) or Return (Mac) key, and enter the following code block:
var textFlow:TextFlow = new TextFlow( config );
var p:ParagraphElement = new ParagraphElement();
var span:SpanElement = new SpanElement();
span.text = myDummyText;
p.addChild( span );
textFlow.addChild( p );
The TextFlow() object needs to be here because its job is to manage all the text in the container. The constructor—TextFlow(config)—lets TLF know that it is to use the config object created earlier, so it now knows how to format the contents of the container and even the container itself.

The next constructor—ParagraphElement()—essentially tells Flash how a paragraph is to be handled. There is only one here, so it really doesn't need a parameter.
The final step is to get all the formatting and layout into the container on the stage.
Step 7: ContainerController
Press the Enter (Windows) or Return (Mac) key, and add these final two lines:
textFlow.flowComposer.addController( new ContainerController( this, 500, 350 ) );
textFlow.flowComposer.updateAllControllers();
The first line adds the ContainerController and tells Flash the container being managed is the current DisplayObject (this), which currently is the stage, and to set its dimensions to 500 pixels wide by 350 pixels high.
Step 8: Test
Save the project, and test the movie. The text, as shown below, appears using the formatting instructions you set.
Import Statements for this Exercise
These are the import statements for this exercise:
Using ActionScript to create and format the container and its text
Though this coding task may, at first, appear to be a rather convoluted process, we can assure it isn't; it will become almost second nature as you start using ActionScript to play with text in the containers.
With the introduction of the Text Layout Framework, your ability to create text, format text, put it in columns, and generally manipulate it using ActionScript has greatly expanded your creative possibilities. Before you get all excited about this, you need to know that the word Framework is there for a reason.
Any TLF text objects you create will rely on a specific TLF ActionScript library, also called a runtime shared library (RSL). When you work on the stage in the Flash interface, Flash provides the library. This is not the case when you publish the SWF and place it in a web page. It needs to be available, much like Flash Player, on the user's machine. When the SWF loads, it is going to hunt for the Library in three places:
- The local computer: Flash Player looks for a copy of the library on the local machine it is playing on. If it is not there, it heads for Adobe.com.
- Adobe.com: If no local copy is available, Flash Player will query Adobe's servers for a copy of the library. The library, like the Flash Player plug-in, has to download only once per computer. After that, all subsequent SWF files that play on the same computer will use the previously downloaded copy of the library. If, for some reason, it can't grab it there, it will look in the folder containing the SWF.
- In the folder containing the SWF: If Adobe's servers are not available for some reason, Flash Player looks for the library in the web server directory where the SWF file resides. To provide this extra level of backup, manually upload the library file to the web server along with your SWF file. We provide more information around how to do this in Chapter 15.
When you publish a SWF file that uses TLF text, Flash creates an additional file named textLayout_X.X.X.XXX.swz (where the Xs are replaced by the version number) next to your SWF file. You can optionally choose to upload this file to your web server along with your SWF file. This allows for the rare case where Adobe's servers are not available for some reason. If you open the folder where you saved this exercise, you will see both the SWF and, as shown in Figure 6-25, the SWZ file.
The .swz file contains the Text Layout Framework.
The Giveaway!
We're running this giveaway a little differently since Adam from Aetuts+ pushed Wildfire my way.. Wildfire is a brilliant promotion builder and makes entering competitions a piece of cake! If you'd like to be in with a chance of winning one of three signed copies of "Foundation Flash CS5 for Designers", just enter!
How do I Enter?
- Send a tweet from the entry page. For every Twitter follower that enters through your link you get an extra entry.
- Fill in your details once you've done so. That's it!
The three winners will be announced on Monday 6th September. Good luck!
| https://code.tutsplus.com/articles/flash-cs5-for-designers-tlf-and-actionscript-win-1-of-3-signed-copies--active-5231 | CC-MAIN-2019-43 | refinedweb | 1,961 | 61.26 |