| text | url | dump | source | word_count | flesch_reading_ease |
|---|---|---|---|---|---|
tag 415573 + moreinfo
thanks

On Tue, Mar 20, 2007 at 01:33:52PM +0100, Jeroen Massar wrote:
> Package: libc6
> Version: 2.3.6.ds1-13
> Severity: important
>
> Valgrind has been reporting the following already for a long time:
>
> ==16241== Thread 2:
> ==16241== Conditional jump or move depends on uninitialised value(s)
> ==16241==    at 0x40270CC: __pthread_manager (manager.c:128)
> ==16241==    by 0x4151389: clone (clone.S:119)
>
> This might pose an attack vector, as memory on the stack is not cleared
> out by default, depending on the compiler that is used; in general that
> is gcc, which does not do so, as is evident because otherwise valgrind
> would not complain about it.
>
> The problem seems to be somewhere inside:
> 8<---------------------------------------------
> /* If we have special thread_self processing, initialize it.  */
> #ifdef INIT_THREAD_SELF
>   INIT_THREAD_SELF(self, 1);
> #endif
> --------------------------------------------->8
> Which, when trying to follow it, is a huge messy code block.
> Trying to determine exactly where this problem occurs is difficult
> because of this; it would have been very handy if, instead of #defining
> functions, that code was actually in functions, letting the compiler
> choose whether or not to optimize it out. But that is my opinion.
>
> Can somebody, more fluent in glibc, take a look at this?

Could you please provide a small program (and the flags used to build it)
to reproduce the problem? Does it still apply to glibc 2.5, currently in
unstable?
--
·O·  Pierre Habouzit
··O  madcoder@debian.org
OOO
|
https://lists.debian.org/debian-glibc/2007/04/msg00236.html
|
CC-MAIN-2015-40
|
refinedweb
| 244
| 64.51
|
An update — September 2017
A week or so ago, some students applied this concept to the idea of typosquatting (registering malicious packages with names similar to popular libraries). By getting a university to issue a security notice, they generated some interest, which finally resulted in some changes to pypi/warehouse to address these issues.
I decided to take another look at the download figures for my packages, and see what damage my malicious alter-ego could have wreaked.
Across the 12 system module packages I’m hosting, I’m getting on average 1.5 thousand downloads per day, via pip. This adds up to 491,292 downloads so far this year. I’m hoping to hit 500k downloads before my packages are deleted!
By package, the download ratios pretty much match the numbers from May:
There’s a plan to delete my fake packages now that restrictions have been added to prevent this sort of attack, but it was fun while it lasted!
Intro
At a London python dojo in October last year, we discovered that PyPi allows packages to be registered with builtin module names.
So what? you might ask. Who would pip install a system package? Well the story goes something like this:
- An inexperienced Python developer/deployer realises they need X functionality
- Googling/asking around, they find out that to install packages, people use pip
- Developer happily types in e.g. pip install sys
- Baddie has registered the sys pip package, and included a malicious payload
- Developer is now pwned by the malicious package, but import sys in Python works, and imports the functional built-in sys module, so nobody notices.
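The reason the last step is silent: built-in and stdlib modules are resolved before anything pip installs, so the genuine module always wins at import time. A quick check (my own illustration, not from the post):

```python
import sys

# "sys" is compiled into the interpreter itself, so no site-packages
# package of the same name can shadow it at import time.
sys_is_builtin = "sys" in sys.builtin_module_names
print(sys_is_builtin)  # True

# Pure-stdlib modules like json aren't builtins, but the stdlib directory
# sits ahead of site-packages on sys.path, so the genuine module wins too.
import json
json_from_site_packages = "site-packages" in (json.__file__ or "")
print(json_from_site_packages)  # False
```

The payload still runs, of course: pip executes the package's setup code at install time, before any import ever happens.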
When we discovered this, I was pretty interested in how plausible this was as an attack vector, so did a few things:
- Proactively registered all the common system module names that I could think of, as packages
- Uploaded an empty package to each of them that does nothing other than immediately traceback:
raise RuntimeError("Package 'json' must not be downloaded from pypi")
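Module-level code runs the moment a package is imported, so a single raise at the top of __init__.py is all these dummy packages need. A self-contained simulation (the package name notjson and the file layout here are illustrative, not the actual uploaded packages):

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__.py is a single raise,
# mimicking the dummy squatting packages described above.
with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, "notjson")
    os.mkdir(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("raise RuntimeError(\"Package 'notjson' must not be "
                "downloaded from pypi\")\n")

    sys.path.insert(0, d)
    try:
        import notjson  # executes __init__.py, which raises immediately
    except RuntimeError as exc:
        caught = str(exc)
    finally:
        sys.path.remove(d)

print(caught)
```

Anyone who accidentally installs and imports such a package gets an unmistakable traceback instead of a silent no-op.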
Why upload anything?
It’s perfectly possible to squat on a pypi package and not upload any files. But by adding an empty package, I could track the downloads from the pypi download stats.
PyPI uploads its access logs (sans identifying information) to Google BigQuery, which is pretty awesome, and allows us to get a good idea of how many systems each package ends up on.
How effective is this attack vector?
BigQuery says that so far this year (19th May 2017), my dummy packages have been downloaded ~244k times. Lucky they're benign, huh! Otherwise that's a quarter of a million infected machines!
Some of the downloads will be people using custom scrapers, others may be automated build jobs, running over and over, but I used some tactics to gauge the quality of this data:
- pypi download logs include a column installer.name, which seems equivalent to an HTTP user agent string. By only selecting rows where installer.name is pip, we're more likely to be counting actual installs, rather than scrapers or other bots
- Another column, system.release, tracks very high-level system version information (for example 4.1.13-18.26.amzn1.x86_64). By including this in the counts, we can see that lots of different types of setups are downloading these packages, suggesting it's not just a few bots scraping the site. 3.1k different system versions have downloaded my packages this year, compared with 33k total unique versions across the whole of pypi
The query I used is here:
What now?
I never actually received a reply to my email, so a while later, I raised an issue on the official pypi github issue tracker in January. This also got no reply.
I’m currently squatting all the system package names that seem most at risk, and doing so with benign packages, so I don’t see much of a risk of disclosing this now.
|
https://hackernoon.com/building-a-botnet-on-pypi-be1ad280b8d6
|
CC-MAIN-2019-47
|
refinedweb
| 657
| 57.2
|
Python plugin - problem with setFirstVisibleLine
Hi, I have a script that finds the function declaration of the word under the cursor. E.g., in the following example, I select A() within function B, run the script, and it takes me to the line of def A():

def A(): pass
def B(): A()
The problem I’m having is as follows:
I use editor.lineFromPosition(pos) to get the line number of the found function, and I use editor.setFirstVisibleLine to go to that line.
However, I realized tonight that editor.lineFromPosition() gives you the absolute line number of that position, but editor.setFirstVisibleLine doesn't set the view to the absolute line of that number, because it doesn't take hidden lines into consideration!
To explain a little more:
- if editor.lineFromPosition() returned 100
- between lines 1 and 100, there are 10 hidden lines (by folding)
- I want line 100 to be the first visible line
- editor.setFirstVisibleLine(100) will show line 110!
I hope this is clear…
How am I supposed to overcome this???
Thanks,
Davey
P.S. gotoLine(100) does work properly, but this doesn’t set it at the top of the view
- Claudia Frank last edited by
I guess you are looking for editor.gotoPos(pos)
Cheers
Claudia
Hi Claudia,
Thanks, but this also doesn’t bring it to the top of the screen
Maybe I have to do just create my own loop to keep testing whether the line I want is at the top…??
Sounds weird… is it true?
Thanks
Davey
- Claudia Frank last edited by
Hi Davey,
now I get it; I never used editor.setFirstVisibleLine before,
and my test was done with your ~20 lines of description, so nothing really changed by using this call with parameter 100. Shame.
You are right; from the documentation it looks like you have to do it yourself:
SCI_GETALLLINESVISIBLE to see if anything is hidden at all,
and loop SCI_GETLINEVISIBLE(int line) to get the hidden lines, count them and … hm …
I guess it makes a difference if the hidden lines are under the line you want to be on the top or before.
So, count the lines which are before your line … or … !?
Let me know - I have to go to work soon ;-)
Cheers
Claudia
:) Thanks
It doesn’t sound right - correct?
Anyway, I’ll let you know what I come out with… if I do! :)
Thanks for your help!
Davey
- Claudia Frank last edited by Claudia Frank
no - I guess you need
editor.visibleFromDocLine
and
editor.docLineFromVisible
together with
setFirstVisibleLine
Cheers
Claudia
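The arithmetic behind visibleFromDocLine is essentially "subtract the hidden lines above". A plain-Python sketch of that mapping (the function name here is illustrative, not the Scintilla API):

```python
def visible_from_doc_line(doc_line, hidden):
    """Map an absolute document line to its visible index by
    subtracting the folded-away lines above it."""
    return doc_line - sum(1 for h in hidden if h < doc_line)

# Davey's scenario: 10 hidden lines somewhere between 1 and 100.
hidden = set(range(40, 50))                # lines 40..49 folded away
print(visible_from_doc_line(100, hidden))  # 90
```

So the fix is editor.setFirstVisibleLine(editor.visibleFromDocLine(target_line)) rather than passing the document line number directly.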
Awesome Claudia!
Thanks a ton - I knew there had to be some better method!
You saved me a lot of time!
Thank you!
Davey
(P.S. I searched all the docs for Line and Visible - but I didn't see this! Weird!)
- Scott Sumner last edited by
Both visibleFromDocLine() and docLineFromVisible()…and setFirstVisibleLine() are in the pythonscript documentation.
I’d also like to point out the pythonscript has its own discussion area separate from the Notepad++ Community one. It is found here:
I’m not saying not to post pythonscript related stuff here, just pointing that other forum out as an option. :)
Hi Scott, thanks
Yes, I do recognize that forum, I actually have some scripts there I think
I was actually debating for a few minutes where to post it… :)
Thanks,
Davey
|
https://community.notepad-plus-plus.org/topic/11092/python-plugin-problem-with-setfirstvisibleline/2
|
CC-MAIN-2020-05
|
refinedweb
| 555
| 63.39
|
Delphi Development Pretty Good Practices #4 – Do Work in Classes
The next principle for the “Pretty Good Practices” we’ll discuss is this notion: Whenever possible and as much as possible, put functionality in a class – preferably a class that can be easily unit tested, reused, and separated from any user interface.
TextScrubber demonstrates this via the use of the TTextScrubber class in the uTextScrubber.pas unit. TTextScrubber is a simple TObject descendant that does all the work for the whole application, really. It is a standalone class – you could take the uTextScrubber.pas unit and use it in most any project you cared to. Because of this, it is also very easy to write unit tests for this class. (We covered unit testing in my previous series “Fun with Testing DateUtils.pas”, but I’ll discuss Unit Testing in a later post in this series as well.) The class attempts to follow the “Law of Demeter”, which says that classes should know as little as possible about outside entities. The three principles of the Law of Demeter are as follows:
- Each class should have only limited or hopefully no knowledge of other classes.
- If a class must have knowledge of other classes, it should only have connections to classes that know about it as well.
- Classes should never “reach through” one class to talk to a third class
In the case of TTextScrubber, it only knows about and utilizes the TClipboard class and nothing else. It doesn't try to grab things out of TClipboard or attach to or require any other class. It pretty much minds its own business, utilizes the services of the clipboard, and provides an easy way to get at its functionality. It endeavors to do one thing: scrub text, by both straightening and "un-formatting" it. It has short, sweet method bodies, and ensures that it doesn't try to do too much beyond exactly what it is supposed to do. Following the Law of Demeter tends to make your code more maintainable and reusable. By reducing dependencies, you ensure that a class is as flexible as possible and that changes to it don't tend to have far-reaching consequences.
So, to as large a degree as possible, you should endeavor to put the functionality of your program into classes. One way to tell you are not doing this is if you tend to do “OnClick” programming, or relying on event handlers to do the work of your application. The Pretty Good Practices way of programming would dictate that your event handlers would contain code that merely instantiated and used other classes instead of having the actual code in them to do the work of your application.
So for instance, most of the work in TextScrubber gets done in an OnClick event of the TTrayIcon component. That code looks like this:
procedure TStraightTextMainForm.MainTrayIconClick(Sender: TObject);
begin
  MainTrayIcon.Animate := True;
  case TextScrubberOptions.ClickChoice of
    ccStraightenText:
      begin
        DoStraightenText;
      end;
    ccScrubClipboard:
      begin
        DoPurifyText;
      end;
  end;
end;
It merely calls one of two functions, DoStraightenText or DoPurifyText, that scrub the text on the clipboard. Those two methods look pretty much the same: they merely create a TTextScrubber, use it, and then free it. DoStraightenText looks like this:
procedure TStraightTextMainForm.DoStraightenText;
var
  TS: TTextScrubber;
begin
  TS := TTextScrubber.Create(TextScrubberOptions.ShouldTrim);
  try
    TS.StraightenTextOnClipboard;
  finally
    TS.Free;
  end;
end;
This method is very simple and to the point — it utilizes the TTextScrubber class to do the work. It’s not always entirely possible, but I try to make as many of my event handlers and methods follow this pattern of merely utilizing the functionality of external classes. Doing so enables a few things:
- It means that functionality is much easier to unit test. Isolated classes with specific functionality make unit testing really easy.
- Functionality is easier to share and reuse. An isolated, decoupled class can easily be moved to new applications as it has few or no dependencies.
- Lean event handlers mean that your user interface isn’t tightly coupled to the work code. This means that adjusting or altering your UI is easier to do, and adjusting and altering the work code doesn’t mean a change in the way the UI works.
So, to sum up: always try to build standalone classes to do the work of your application.
Nick, what kind of functionality do you recommend to put in classes? Or do you try to put everything?
May 5th, 2010 at 12:08 pm
For instance, would you put things like IntToStr() into classes?
There are people who put literally everything into classes, so they end up with lots of classes that only have lots of utility functions wrapped into class methods.
I agree with Magno; could you please explain the best way to handle "utility" functions? Every programmer on the planet has a toolkit developed over many years which is shared between projects. Is it best to wrap these in classes or leave them as global functions in generic units? To me, some things do not "belong" to a class and it makes no sense to construct/destruct an object just to encapsulate the functionality.
May 5th, 2010 at 1:39 pm
Gotta agree with Rick. I’ve seen a lot of code, and even some in the Delphi RTL, that tries to put utility functions in classes simply for the sake of having it in a class. Delphi’s Generics implementation even encourages this, since you can’t have a standalone generic function.
But this is really ugly, and anyone who writes "classes" like that ought to be ashamed of themselves. An object is a collection of variables bound to a set of methods. A set of functions with no shared state is not an object; it's a collection of functions.
Just because Java and .NET got it wrong and created an ugly object model full of abstraction inversions doesn't mean Delphi (or Delphi developers) need to follow their bad example.
May 5th, 2010 at 2:08 pm
Java and .NET are/were simply too idealistic. They, as with a lot of "new/modern language features", are designed in a rarefied intellectual atmosphere, largely devoid of practical considerations. People sit around and talk in highly theoretical and idealistic terms about how code should be written in the "purest" form.
Many modern additions to those languages have been artificial attempts to address pragmatic concerns without compromising the "OO purity" of the languages… anonymous methods are very useful when your language doesn't support first-class functions, for example.
Pascal always struck the right balance between pragmatism and helpfulness, and it's depressing to see Embarcadero directing their efforts into polluting the language with concepts from other languages simply to "keep up" whilst true and more pressing needs are neglected.
May 5th, 2010 at 3:05 pm
@Joylon: Exactly. OOP is good because it helps you write code that’s easier to understand, and because it provides inheritance and polymorphism. If what you’re doing doesn’t require inheritance and polymorphism, and wouldn’t gain any advantages to code readability from putting it in a class, then OOP is not good for it.
It's a tool, not a Gospel principle.
May 5th, 2010 at 4:25 pm
I Agree With Nick… in principle. (wink to anyone following the UK elections)…but also Joylon/Mason.
One of the unique strengths of Delphi is that you can still have units with simple routines, and have them globally accessible. Languages that enforce the object model often lead to juggling static/creation-order problems, granularity issues, and You-Aint-Gonna-Need-It issues. Until you see a class suggesting itself in your routines, there is no need to overdo it. Classes can become Tamagotchis.
May 6th, 2010 at 2:32 am
I couldn't agree more with you guys! Not everything must be wrapped in a class. This tendency to imitate "mainstream" languages really worries me. (However, I do understand that mainstream = $.) Delphi is in the process of becoming less Delphi and more something like D# these days…
May 6th, 2010 at 3:53 am
I agree with the others that "not everything must be in a class."
I'm curious what many of you think about the records in IOUtils. They're basically collections of static functions/procedures put into a record basically to act as a namespace. Worthwhile, or should that unit have been split into IOUtils.Directory, IOUtils.Path, and IOUtils.File?
May 6th, 2010 at 5:26 am
I agree with a lot in this post except the use of the word "always".
I think the biggest takeaway should be the comment about OnClick programming. It doesn't matter so much if you put non-UI code in classes or methods as long as it's not in your form events.
May 6th, 2010 at 5:46 am
I agree with all of you (Nick included). In order to understand what Nick really thinks about utility methods you only have to look as far as his NixUtils.pas included in his project. Certainly utility methods are fine outside of classes, but when you have a class that is supposed to exhibit some behavior, it is good form to encapsulate that behavior within methods of the class. Some of the utility functions the class needs can be located outside the class; however, it is not a good idea to make the class too heavily dependent on other classes in order to complete its intended behavior.
Thanks for the articles Nick, I am reading them all intently.
May 6th, 2010 at 9:10 am
If you want to always use classes, you are much better off dropping Delphi and switching to Java or C#.
One of the great strengths of Delphi is that the programmer has a choice to use classes where they help and skip them where they don't.
May 7th, 2010 at 4:19 am
Hi, you have provided detailed and nice information on how to proceed with Delphi development.
May 11th, 2010 at 9:08 pm
Slightly OT, but Nick, can you reconsider the template of your site? Light-ish text on a dark background is hard to read.
May 12th, 2010 at 12:31 pm
Delphi allows nested functions/procedures. A nice and helpful feature.
I think everybody has seen functions with more than 3 nested functions/procedures like this:

function CalculateSomething(arg1: integer; arg2: string): string;

  function AddSomething;
  begin
    ….
  end;

  // more nested functions/procedures here
  …

var
  …
begin
  …
end;

In my opinion this is a **very strong** indicator that you should refactor this code to a class.
June 9th, 2010 at 2:14 pm
So look out for nested functions and change the litte monsters to a class.
sx2008 –
I agree. That is good advice.
Nick
June 9th, 2010 at 2:56 pm
I personally prefer putting utility methods into a static class, for Nick's reasons and also:
1) Decreases namespace pollution - the Code Insight list is shorter when you have fewer global methods
2) Compatibility with Delphi Prism, where everything is a class (excluding Prism's global namespace hack)
3) Elegance - I find TFile.Exists(FileName) nicer than FileExists(FileName).
But as others have said, the ability to have both global procedures and static classes is a strength of Delphi.
June 9th, 2010 at 8:31 pm
|
http://blogs.embarcadero.com/nickhodges/2010/05/05/39444
|
CC-MAIN-2015-48
|
refinedweb
| 1,922
| 62.68
|
28 August 2012 06:09 [Source: ICIS news]
NAGOYA, Japan (ICIS)--Trading Corp of Pakistan (TCP) closed a highly anticipated urea purchase tender on 27 August, sources said on Tuesday.
TCP is hoping to secure 300,000 tonnes of imported urea, with shipment requested 14 days from opening the letters of credit.
Market participants were keen on the results of the tender because they expected the offer price to establish a benchmark in a fairly inactive urea market.
The lowest offer was submitted by global fertilizer trading house Keytrade at $399.38/tonne (€319.50) CFR (cost & freight).
Keytrade offered a total of 100,000 tonnes, without specifying the origin of the cargoes.
The next lowest offer was submitted by Dreymoor at $407.15/tonne CFR Pakistan for a total of 200,000 tonnes, expected to be sourced in
Helm submitted the third lowest offer at $407.41/tonne CFR Pakistan for 200,000 tonnes of urea from
Other offers were at $407.69-419.63/tonne CFR
In a new move, TCP will be allowed to ask suppliers to match the lowest offer under this tender.
Previously, TCP had only been permitted to award the lowest offer under its tenders. Re-tenders were typically the lowest offer which often did not provide for the full quantity of urea required.
The new tender is expected to satisfy demand for the rabi crop season.
($1 = €0.80)
Additional reporting by Adele Z
|
http://www.icis.com/Articles/2012/08/28/9590191/trading-corp-of-pakistan-closes-300000-tonne-urea-tender.html
|
CC-MAIN-2014-52
|
refinedweb
| 243
| 63.8
|
From: Christopher Hunt (huntc_at_[hidden])
Date: 2005-08-02 02:21:14
OK, this is now completely solved. :-)
Interestingly, boost/detail/lightweight_mutex.hpp was being included in
one place and declaring lightweight_mutex from
boost/detail/lwm_nop.hpp. However the implementation from
boost/detail/lwm_pthreads.hpp was actually being used.
To cut a long story short, declaring a prefix file that forced boost to
configure itself prior to importing Cocoa and Carbon dependencies
completely solved the problem. Here's what my Cocoa pch file looks
like:
//
// Prefix header for all source files
//

#ifdef __cplusplus
// Gets brought in early so that any Carbon presence doesn't influence
// its configuration.
#include <boost/config.hpp>
#endif

#ifdef __OBJC__
#import <Cocoa/Cocoa.h>
#endif

If you're doing just Carbon, then do your #include <Carbon/Carbon.h>
after boost/config.hpp is included (in place of the #ifdef __OBJC__
block).
The real gremlin here is this bit of code in
boost/config/platform/macos.hpp:
# ifndef TARGET_CARBON
# include <boost/config/posix_features.hpp>
# endif
There is a posting that discusses the rationale of the above, and also makes the observation that there's probably a better way to check for MSL.
My recommendation is to check for the definition of __MSL__. Thus the
above code becomes:
# ifndef __MSL__
# include <boost/config/posix_features.hpp>
# endif
IMHO the existing MSL check is a bug (checking TARGET_CARBON is too
broad) and will be a trap for all Cocoa/Carbon programmers wanting to
use boost. I would also think that the reality now is that most people
use xcode/gcc and not CodeWarrior/MSL (this is from someone who used to
use CW and MSL all the time). Thus, the motivation for addressing the
issue should now be high.
What needs to be done to fix the problem? I'm willing to help to save
some other poor soul from spending a day or two stuffing around.
Cheers,
-C
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net
|
https://lists.boost.org/boost-users/2005/08/12938.php
|
CC-MAIN-2022-40
|
refinedweb
| 346
| 59.8
|
#include <compactor.h>
Definition at line 34 of file compactor.h.
Definition at line 39 of file compactor.h.
Definition at line 123 of file compactor.cc.
Definition at line 125 of file compactor.cc.
Add a source database.
Definition at line 158 of file compactor.cc.
Referenced by DEFINE_TESTCASE(), and main().
Perform the actual compaction/merging operation.
Definition at line 164 of file compactor.cc.
Referenced by DEFINE_TESTCASE(), and main().
Resolve multiple user metadata entries with the same key.
When merging, if the same user metadata key is set in more than one input, then this method is called to allow it to be resolved in an appropriate way.
The default implementation just returns tags[0].
For multipass this will currently get called multiple times for the same key if there are duplicates to resolve in each pass, but this may change in the future.
Reimplemented in MyCompactor.
Definition at line 177 of file compactor.cc.
Referenced by FlintCompact::merge_postlists(), ChertCompact::merge_postlists(), and BrassCompact::merge_postlists().
Set the block size to use for tables in the output database.
Definition at line 128 of file compactor.cc.
Set the compaction level.
Definition at line 146 of file compactor.cc.
Set where to write the output.
Definition at line 152 of file compactor.cc.
Referenced by DEFINE_TESTCASE(), and main().
Set whether to merge postlists in multiple passes.
Definition at line 140 of file compactor.cc.
Set whether to preserve existing document id values.
Definition at line 134 of file compactor.cc.
Referenced by DEFINE_TESTCASE(), and main().
Update progress.
Subclass this method if you want to get progress updates during compaction. This is called for each table, first with empty status, and then one or more times with non-empty status.
The default implementation does nothing.
Reimplemented in MyCompactor.
Definition at line 170 of file compactor.cc.
Referenced by compact_brass(), compact_chert(), and compact_flint().
For internal use only.
Reference counted internals.
Definition at line 43 of file compactor.h.
|
http://xapian.org/docs/sourcedoc/html/classXapian_1_1Compactor.html
|
crawl-003
|
refinedweb
| 325
| 63.36
|
import Data.List
import qualified Data.Map as M
type Grid = M.Map String (M.Map Int [String])
data Constraint = Link (String, String) (String, String)
| PosLink (String, String) Int
| NextTo (String, String) (String, String)
| RightOf (String, String) (String, String)
deriving Eq
type Solver = ([Constraint], Grid)
addConstraint :: Constraint -> Solver -> Solver
addConstraint c (cs, g) = (c : cs, g)
removeIf :: (String, String) -> (String, String) ->
  [String -> String -> Int -> Grid -> Bool] -> Grid -> Grid
removeIf (f1, v1) (f2, v2) cs g = M.adjust (M.mapWithKey (\k ->
  if and [c f1 v1 k g | c <- cs] then delete v2 else id)) f2 g

notAt :: (Int -> Int) -> String -> String -> Int -> Grid -> Bool
notAt f f1 v1 i g = M.notMember (f i) (g M.! f1) ||
  notElem v1 (g M.! f1 M.! (f i))
runConstraint :: Constraint -> Grid -> Grid
runConstraint (Link a b) = removeIf a b conds . removeIf b a conds
where conds = [(\f1 v1 k -> notElem v1 . (M.! k) . (M.! f1))]
runConstraint (PosLink (f1,v1) i) =
M.adjust (M.update (const $ Just [v1]) i) f1
runConstraint (NextTo a b) = removeIf a b [notAt pred, notAt succ]
runConstraint (RightOf a b) = removeIf a b [notAt pred] .
removeIf b a [notAt succ]
adjustOthers :: Eq k => (v -> v) -> k -> M.Map k v -> M.Map k v
adjustOthers f k = M.mapWithKey (\k' v -> if k' == k then v else f v)

simplify :: Grid -> Grid
simplify g = foldr ($) (M.mapWithKey (\_ v ->
    M.mapWithKey (\i x -> let d = x \\ concat (M.elems $ M.delete i v)
                          in if length d == 1 then d else x) v) g)
  [ M.adjust (adjustOthers (\\ take 1 x) i) f
  | (f, v) <- M.assocs g, (i, x) <- M.assocs v, length x == 1]

run :: Solver -> Solver
run (cs, g) = (cs, simplify $ foldr runConstraint g cs)

apply :: Solver -> Solver
apply = head . head . dropWhile (null . tail) . group . iterate run

solved :: M.Map k (M.Map k' [v]) -> Bool
solved g = and [False | (_, v) <- M.assocs g, (_, xs) <- M.assocs v,
                        length xs /= 1]

solve :: Solver -> [String]
solve s = map (unwords . map head) . transpose . map M.elems .
  M.elems $ head [ r | let (cs, g) = apply s,
    (f, v) <- M.assocs g, (i, xs) <- M.assocs v, x <- xs,
    let (_, r) = apply (cs, M.adjust (M.adjust (const [x]) i) f g),
    solved r ]

grid :: Grid
grid = M.fromList . zip (words "owner brand drink pet color") $
  map (M.fromList . zip [1..] . replicate 5)
    [ words "Englishman Ukranian Norwegian Japanese Spaniard"
    , words "Old_Gold Kools Chesterfields Lucky_Strike Parliaments"
    , words "Coffee Tea Milk Orange_Juice Water"
    , words "Dog Snails Horse Fox Zebra"
    , words "Red Green Ivory Yellow Blue" ]

problem :: Solver
problem = foldr addConstraint ([], grid)
  [ Link ("owner", "Englishman") ("color", "Red")
  , Link ("owner", "Spaniard") ("pet", "Dog")
  , Link ("drink", "Coffee") ("color", "Green")
  , Link ("owner", "Ukranian") ("drink", "Tea")
  , RightOf ("color", "Ivory") ("color", "Green")
  , Link ("brand", "Old_Gold") ("pet", "Snails")
  , Link ("brand", "Kools") ("color", "Yellow")
  , PosLink ("drink", "Milk") 3
  , PosLink ("owner", "Norwegian") 1
  , NextTo ("brand", "Chesterfields") ("pet", "Fox")
  , NextTo ("brand", "Kools") ("pet", "Horse")
  , Link ("brand", "Lucky_Strike") ("drink", "Orange_Juice")
  , Link ("owner", "Japanese") ("brand", "Parliaments")
  , NextTo ("owner", "Norwegian") ("color", "Blue") ]

main :: IO ()
main = mapM_ putStrLn $ solve problem
A more elegant solution (see for more detail):
import Data.List
indexOf :: (Eq a) => [a] -> a -> Int
indexOf xs x = head $ elemIndices x xs
nextTo :: Int -> Int -> Bool
nextTo a b = abs (a - b) == 1

rightOf :: Int -> Int -> Bool
rightOf a b = a == b + 1

options :: String -> [[String]]
options = permutations .

main :: IO ()
main = mapM_ print solution
One I did in Python 3.0.1, using the standard library's itertools module for the permutations:
Here’s the pastebin:
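The pastebin contents didn't survive extraction; a comparable itertools brute force (my own sketch, not the poster's script) that prunes each attribute's permutation early looks like:

```python
from itertools import permutations

def zebra():
    """Brute-force the classic puzzle; each tuple assigns house
    indices 0..4 to the five values of one attribute."""
    orderings = list(permutations(range(5)))
    for red, green, ivory, yellow, blue in orderings:
        if green != ivory + 1:                       # green right of ivory
            continue
        for english, spaniard, ukrainian, norwegian, japanese in orderings:
            if norwegian != 0 or english != red:
                continue
            if abs(norwegian - blue) != 1:           # Norwegian next to blue
                continue
            for coffee, tea, milk, oj, water in orderings:
                if coffee != green or ukrainian != tea or milk != 2:
                    continue
                for old_gold, kools, chesterfield, lucky, parliament in orderings:
                    if kools != yellow or lucky != oj or japanese != parliament:
                        continue
                    for dog, snails, fox, horse, zebra_ in orderings:
                        if (spaniard == dog and old_gold == snails and
                                abs(chesterfield - fox) == 1 and
                                abs(kools - horse) == 1):
                            return water, zebra_, norwegian, japanese

water, zebra_, norwegian, japanese = zebra()
print(water == norwegian)    # True: the Norwegian drinks water
print(zebra_ == japanese)    # True: the Japanese owns the zebra
```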
|
https://programmingpraxis.com/2009/06/16/who-owns-the-zebra/?like=1&source=post_flair&_wpnonce=ca20eaf6a9
|
CC-MAIN-2017-22
|
refinedweb
| 586
| 80.92
|
Goal
In this post, we will show the steps necessary to add a single column (Party) to the Presidents table. Because in Part 2 we enabled Code Migrations, this will be a lot simpler than in Part 2.
Let’s Do it!
Now, we have a stable project that is running correctly with the database table Presidents defined as follows:
We want to add a new column “Party” so let’s update the model file (it’s in our original programs.cs file). It will now be changed as follows:
public class Presidents
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public long Id { get; set; }

    public string LastName { get; set; }

    // New Columns for first migration
    public int YearElected { get; set; }
    public bool CurrentPresident { get; set; }

    // New Column for second migration
    public string Party { get; set; }
}
We now have to create the .cs files in the Migrations folder that will know about this change. So, we simply execute the command Add-Migration AddNewColumnParty from the package manager and we get the following results:
PM> Add-Migration AddNewColumnParty
Scaffolding migration ‘AddNewColumnParty’.1832107_AddNewColumnParty’ again.
Now, we want to add some customization because we know that Bush and Reagan are republicans and Obama is a democrat. So, we add to the “Up()” method the following:
public override void Up()
{
    AddColumn("Presidents", "Party", c => c.String());
    Sql("UPDATE Presidents SET Party = 'Republican' WHERE LastName='ReaganX'");
    Sql("UPDATE Presidents SET Party = 'Republican' WHERE LastName='BushX'");
    Sql("UPDATE Presidents SET Party = 'Democrat' WHERE LastName='ObamaX'");
}
Now, we run the package manager command: Update-Database –Verbose and the work is done for us.
PM> Update-Database -Verbose
Using NuGet project ‘ConApp’.
Using StartUp project ‘ConApp’.
Target database is: ‘agelessemail’ (DataSource: ., Provider: System.Data.SqlClient, Origin: Configuration).
Applying explicit migrations: [201202171832107_AddNewColumnParty].
Applying explicit migration: 201202171832107_AddNewColumnParty.
ALTER TABLE [Presidents] ADD [Party] [nvarchar](max)
UPDATE Presidents SET Party = 'Republican' WHERE LastName='ReaganX'
UPDATE Presidents SET Party = 'Republican' WHERE LastName='BushX'
UPDATE Presidents SET Party = 'Democrat' WHERE LastName='ObamaX'
[Inserting migration history record]
Now, looking at the generated data we have:
Keep in mind that "Update-Database" figured out which database we were on and just did the appropriate update. The beauty of this is that if a developer in the group is on a different version, all she has to do is run "Update-Database", and her database will be brought up to date with whatever version is current at the time.
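The bookkeeping behind this is worth spelling out: EF records each applied migration in a history table, and Update-Database applies, in order, any scaffolded migration that is not yet recorded there. The Python sketch below is a hypothetical illustration of that idea only; it is not EF's actual code, and the first two migration names are invented for the example.

```python
# Hypothetical illustration of migration bookkeeping; this is NOT EF's
# real implementation, and the first two migration names are made up.
available = [
    "201202171825400_InitialCreate",
    "201202171830000_AddNewColumns",
    "201202171832107_AddNewColumnParty",
]
applied = {
    "201202171825400_InitialCreate",
    "201202171830000_AddNewColumns",
}

# Apply, in order, every migration not yet recorded in the history table.
pending = [m for m in available if m not in applied]
for m in pending:
    print("Applying explicit migration:", m)
    applied.add(m)
```

Because the pending set is computed from the history table, a teammate who is several versions behind gets all the missing migrations applied in sequence by the same command.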
Conclusions
All I can say is "Congrats!" to the Microsoft engineers. I've been doing ORMs for a long time and have had my doubts along the way about some of the things Microsoft has rolled out, but this time I think they really have listened and done what we need.
I like the article. I am trying to do a migration where I change the DatabaseGeneratedOption.None to DatabaseGeneratedOption.Identity with a starting seed of 25000. The reason it is None to start with is that I need to retain the Id from the old data but once it is converted to the new database I need it to start incrementing at 25000. Is this possible and if so, how do I go about doing it?
Great article, short and precise. It helped me when I was learning Entity Framework.
http://peterkellner.net/2012/02/17/entityframework-code-first-4-3-adding-a-single-default-column-to-a-migration-enabled-project-part-3/
We Scriptures themselves. Thanks in advance.
Although far from definitive, The passage below refers to 'the beloved disciple' as (possible) author. John is traditionally believed to be this disciple, though not explicitly mentioned as such (Jn 13:23; 19:26; 20:2; 21:7,20). However argument has been made that it could have been Lazarus. (Jn 11:3,5,11,36)
John 21:20-24 (note especially the last verse).
Thanks Parrster. The problem I'm having is that tradition seems to play a big part. We can agree that the disciple whom Jesus loved wrote the book, but just who was that disciple? I think you have some good thoughts, and they mirror mine.
Glad I could contribute. Yes, tradition does come heavily into authorship. However, to quote Mark Roberts, author of Can We Trust The Gospels,
"...it doesn't really matter whether the Beloved Disciple was John, Lazarus, or some other disciple of Jesus..."
That will take a lot of research. You may not get a quick answer from people. Any particular reason you're looking into this? I think it's based on tradition and other 1st and 2nd century sources because there may not be a direct, clear answer proving it was john.
I tend to agree with you. Actually, there is a reason I'm asking. I've always been taught that the Apostle John wrote the book, but no one offers proof. If John didn't write the book, it changes much of the Gospels. Thanks for responding, Lori.
Not every Bible author is known. This one is named after John. I have to trust that the HS directed the men who decided the authorship of the books of the Bible. To get some wrong and some right would mean the Scriptures cannot be trusted.
You have asked a question for which there is no definitive answer. As I'm sure you are aware (but for those who are not) only one of the 4 gospels, an unanswered question, though noted theologians throughout the ages maintain that it was the disciple John and I find that hard to dispute without evidence to the contrary.
However, this question reminds me of a curiosity I have never understood. Like the recent movie "Heaven Is for Real", there are many examples of NDEs or similar episodes where people have visited heaven and actually met and spoke to Jesus, and he to them. Dr. Eby, a credible subject, tells the story in his book about his death, visit to heaven and miraculous return (I think he was dead for 18 hours). Often these subjects return and relate their conversations with Jesus, yet unanswered questions like who wrote the Gospel of John, whether there are remnants of Noah's Ark, what "fish" swallowed Jonah, how old the earth is, whether the 7 days of creation are literal days (I believe they are), and what happened to the dinosaurs never seem to be asked of Jesus. I suppose maybe in some instances they were, but for some reason the subjects don't or aren't allowed to remember it when they return. I assure you, if I had an NDE and could fellowship with Jesus in the flesh, I would be asking these questions and more, not because I question the Bible but because I just want to know the answers I can't find in the Bible.
Thanks for your input, tsadjatko .
Based on Scripture interpreting Scripture the fourth Gospel tells us who it is.
Peter turned and saw THE DISCIPLE WHOM JESUS LOVED following them, the one who also had leaned back against him during the supper and had said, “Lord, who is it that is going to betray you?” - John 21:20
THIS IS THE DISCIPLE who is bearing witness about these things, and who has written these things, and we know that his testimony is true. - John 21:24
Who was the disciple that Jesus loved bearing witness?
It was Mary who anointed the Lord with ointment and wiped his feet with her hair, whose brother LAZARUS was ill. So the sisters sent to him, saying, “LORD, HE WHOM YOU LOVE IS ILL."
There it is, Lazarus whom Jesus loved, proof just from the Gospel itself; Scripture interpreting Scripture.
Knowing this first of all, that no prophecy of Scripture comes from someone’s own interpretation. - 2 Peter 1:20
Proof? You are joking, right?
Thanks PandN. I think there's even more evidence than just that statement in the Gospel itself. it amazes me how so many people just believe what's been told them, myself included. This is a revelation I just saw for myself so I'm still exploring it.
"evidence" is not exactly proof. You can only have proof by a preponderance of the evidence and any words on a page are not proof of who wrote them.
https://hubpages.com/religion-philosophy/answer/237903/who-wrote-the-gospel-of-john
CPSC 124, Fall 2005
Answers to the First Test
Question 1: There are two types of errors that can occur in a program: syntax errors and semantics errors. Explain what is meant by each of these terms, and give a specific example of each type of error.
Answer: Syntax refers to the grammar or structure of a language. A program that has a syntax error cannot even be compiled, because it is not a legal program. Semantics refers to the meaning of a program. Because a computer can only follow instructions, and not understand what they are meant to do, the computer cannot find semantics errors. A program that has a semantics error can be compiled and run, but it will not do what the programmer intended.
An example of a syntax error would be a missing semicolon at the end of a statement, or using a variable that has not been declared. The computer will find these errors at compile time.
An example of a semantics error would be a for loop that says "for (i = 0; i <= 100; i++)" when the programmer wants to repeat a task 100 times. In fact, the for loop repeats the task 101 times, so the program does not do what the programmer intended.
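Because a semantics error produces a perfectly legal program, nothing flags it until the output is inspected. The snippet below (Python, purely for illustration) runs without any complaint from the interpreter, which is exactly the point:

```python
# Intended behavior: repeat a task 100 times.
count = 0
for i in range(0, 101):  # mirrors "for (i = 0; i <= 100; i++)"
    count += 1           # the "task"

# The program runs fine, but the inclusive upper bound makes the
# body execute 101 times instead of the intended 100.
print(count)  # 101
```

The computer cannot know the programmer meant 100 repetitions; only a human reading the output (or the code) can catch this.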
Question 2: Java has eight built-in primitive types. List any four of them, and state briefly what type of data is represented by each primitive type in your list.
Answer: The eight primitive types are byte, short, int, long, float, double, char, and boolean. (Note that String is not a primitive type.) The most common primitive types are int, double, char, and boolean, so a typical answer to this question would be:
int --- values of type int are whole numbers that can be represented using 32 bits.
double --- values of type double are real numbers with about 17 decimal places of accuracy.
char --- values of type char are individual characters from the Unicode character set.
boolean --- the only values of type boolean are true and false
Question 3: What is meant by a literal? Explain the difference between a literal and the value that is represented by a literal.
Answer: A literal is a sequence of characters that is typed in a program to represent a constant value. There are numeric literals such as 37 and 18.75, character literals such as 'A' and '\n', the boolean literals true and false, string literals such as "Hello" and "He said \"Hello.\"\n", and so on. A literal is the way a value is typed, but it is not itself the value. The actual value in the computer is a binary number. For example, the actual value represented by the character literal 'A' is a 16-bit binary number that stands for the character A in the Unicode character set.
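The distinction is easy to demonstrate; the following short snippet (Python, as an illustration) shows that the literal 'A' is just notation for a numeric code in the character set:

```python
# The literal 'A' is notation; the value it denotes is the number 65,
# the code for A in the Unicode character set.
code = ord('A')
print(code)      # 65
print(chr(65))   # A -- the same value written back as a character
```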
Question 4: Suppose that x and y are variables of type double that have already been declared and assigned values. Write a code segment that will compare the values of x and y, and will tell the user the result of the comparison by outputting one of the messages "x is bigger", "y is bigger", or "they are the same".
Answer: This can be done either with an if...else if statement or with three separate if statements. The first option is somewhat more efficient, since it doesn't make unnecessary tests after the answer is already known. Here are both solutions:

if (x > y)
    System.out.println("x is bigger");
else if (y > x)
    System.out.println("y is bigger");
else
    System.out.println("they are the same");

and

if (x > y)
    System.out.println("x is bigger");
if (y > x)
    System.out.println("y is bigger");
if (x == y)
    System.out.println("they are the same");
If you like to include braces in your if statements, even when they are unnecessary, then these become:

if (x > y) {
    System.out.println("x is bigger");
}
else if (y > x) {
    System.out.println("y is bigger");
}
else {
    System.out.println("they are the same");
}

and

if (x > y) {
    System.out.println("x is bigger");
}
if (y > x) {
    System.out.println("y is bigger");
}
if (x == y) {
    System.out.println("they are the same");
}
Question 5: An integer N is said to be prime if it is greater than 1, and for every integer i from 2 to N-1 (inclusive), N % i != 0 (that is, N is not evenly divisible by i). Write a complete program (starting with "public class...") that tests whether an integer that is input by the user is prime. The program should ask the user for an integer and read the response, and it should tell the user whether the number is or is not prime. If the number is less than 2, you can say immediately that it is not prime. Otherwise, you must use a loop to test for divisibility by any integer in the appropriate range.
Answer: Here is one possible answer:

public class PrimeTest {
    public static void main(String[] args) {
        int N;   // the number to be tested for primality
        System.out.print("Enter a number to be tested for primality: ");
        N = TextIO.getlnInt();
        if ( N < 2 )
            System.out.println( N + " is not prime." );
        else {
            boolean isPrime;  // Value will tell whether N is prime.
            int i;
            isPrime = true;   // Assume N is prime, unless forced to change our mind.
            for ( i = 2; i <= N - 1; i++ ) {
                if ( N % i == 0 ) {
                    isPrime = false;  // We have found evidence that N is not prime.
                    break;            // No need to go on testing, since we know the answer.
                }
            }
            if ( isPrime )
                System.out.println( N + " is prime." );
            else
                System.out.println( N + " is not prime." );
        }
    }
}
Question 6: Suppose that str is a variable of type String that has already been assigned some value. Write a for loop that will print all the characters in the string str, with one character on each line of output.
Answer: Recall that str.length() tells how many characters are in the string. The characters are numbered from 0 to str.length()-1, and character number i is referred to as str.charAt(i). Here is one possible answer:

int i;
for ( i = 0; i < str.length(); i++ ) {
    System.out.println( str.charAt(i) );
}
Question 7: Suppose that x and y are variables of type double. Suppose that the value of x is 10.0, and the value of y is 2.0. Find the value of each of the following expressions:

a) x + 3 * y
b) x / y / 2.0
c) Math.pow(x,y)
d) x > y && y < 0
e) (int)(x / 3.0)
Answer:

a) x + 3 * y is 16.0     (x + 3 * y = 10.0 + (3 * 2.0) = 10.0 + 6.0 = 16.0)
b) x / y / 2.0 is 2.5    (x / y / 2.0 = (10.0 / 2.0) / 2.0 = 5.0 / 2.0 = 2.5)
c) Math.pow(x,y) is 100.0   (Math.pow(x,y) means x raised to the power y, and 10^2 = 100)
d) x > y && y < 0 is false  (x > y is true, but y < 0 is false, and true && false is false)
e) (int)(x / 3.0) is 3      (x / 3.0 = 3.333..., but (int) keeps only the integer part)
Question 8: Show the exact output that will be produced when the computer executes the following code segment. (If your answer is wrong, you might get some partial credit by explaining your reasoning.)

int a, b, c;
a = 33;
b = 12;
while ( b > 0 ) {
    c = b;
    b = a % b;
    a = c;
    System.out.println("a = " + a);
    System.out.println("b = " + b);
}
System.out.println("Answer is: " + a);
Answer: The expression "a % b" means the remainder when a is divided by b. For example, 33 % 12 is 9, so the first time through the loop, b becomes 9 and a becomes 12. Continuing in this way, the output is seen to be:

a = 12
b = 9
a = 9
b = 3
a = 3
b = 0
Answer is: 3
(Note: This code might look like nonsense, but as a matter of fact, it produces a meaningful and useful answer. The output of the code is the greatest common divisor of the original values of a and b, that is, the result is the largest positive integer that is a common divisor of both a and b. The algorithm that is used here to compute the greatest common divisor is known as Euclid's algorithm, and it was one of the first algorithms ever invented.)
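For reference, Euclid's algorithm can be packaged as a small function; this Python sketch (an illustration, not part of the test) reproduces the trace shown above:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a % b)
    # until b reaches 0; a is then the greatest common divisor.
    while b > 0:
        a, b = b, a % b
    return a

print(gcd(33, 12))  # 3, matching the trace above
```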
Question 9: Approximately what do you expect the value of total to be after the following code segment is executed? Why?

int i, total;
total = 0;
for ( i = 0; i < 100; i++ ) {
    if ( Math.random() < 0.5 )
        total = total + 1;
    else
        total = total + 2;
}
Answer: The if statement has a 50% chance of adding 1 to the total and a 50% chance of adding 2 to the total. Since the loop is repeated 100 times, we expect that 1 will be added to total about 50 times, and 2 will be added about 50 times. If it were exactly 50 times in each case, the final value of total would be 150. Since the process is random, we don't expect to get exactly 150, but the answer is probably something pretty close to that. (In theory, it could be anything between 100 and 200, but in practice the answer is almost always pretty close to 150.)
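A quick simulation makes the answer concrete; the Python sketch below (an illustration, not part of the test) repeats the experiment many times and prints the spread of the results:

```python
import random

def run_once():
    # One execution of the code segment from the question.
    total = 0
    for _ in range(100):
        if random.random() < 0.5:
            total = total + 1
        else:
            total = total + 2
    return total

# Repeat the experiment; every result lies between 100 and 200,
# and the average sits close to the expected value of 150.
results = [run_once() for _ in range(1000)]
print(min(results), max(results), sum(results) / len(results))
```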
Question 10: One way to approach the task of developing an algorithm is to use pseudocode and stepwise refinement. What does it mean to "develop an algorithm?" How can pseudocode and stepwise refinement be used in the process? (An example would be helpful.)
Answer: An algorithm is an unambiguous step-by-step process that performs a certain task and finishes in a finite number of steps. A computer program is an expression of an algorithm in a programming language. To develop an algorithm means to start with a problem and to create an algorithm that will solve that problem. The goal is to come up with a step-by-step procedure that can be expressed in program code, so that it will be possible to use a computer to solve the problem.
The term "pseudocode" refers to informal, English-like descriptions of algorithms. When using pseudocode, there is no need to follow the strict, formal syntax of a programming language. Also, it is not necessary to include all the details of the algorithm, at least in the early stages of development.
To use stepwise refinement, a short pseudocode algorithm is written. The goal is to give a coarse outline of the major steps that need to be taken to solve the problem. The outline is still an algorithm in that it describes a step-by-step process for solving the problem, but with much less detail than is needed in a computer program. Once the first outline of the algorithm is written, it can be "refined." That is, detail can be added to the algorithm by breaking down the steps in the pseudocode algorithm into smaller steps. This refinement step is repeated until the pseudocode algorithm is detailed enough to be translated directly into program code.
As an example, consider problem 5 on this test. A first rough outline of the solution could be:

Get a number from the user
Test if the number is prime
Tell the user the answer
This is a very rough pseudocode algorithm, and it is the first step in the development of an algorithm by stepwise refinement. The next step could refine this to:

Ask the user to enter a number
Read the user's number
if the number is less than 2
    tell the user it's not prime
else
    test whether the number has a divisor
    if it does
        tell the user it's not prime
    else
        tell the user it's prime
This is more detailed, but the test for divisibility needs still more detail. A loop has to be used to test numbers in the appropriate range. Also, it's useful at this stage to have some names for the data objects in the algorithm. So the next step could be to refine this to:

Ask the user to enter a number
Let N be the user's input number
if N is less than 2
    tell the user N is not prime
else
    Let isPrime = true
    for each i from 2 to N-1
        if N is divisible by i
            isPrime = false
    if isPrime is true
        tell the user N is prime
    else
        tell the user N is not prime
This pseudocode algorithm is detailed enough to be translated directly into Java code.
http://math.hws.edu/eck/cs124/f05/test1.html
We are currently running Apache 1.3.26, which forwards requests to our Tomcat 4.0.4 server using mod_jk/ajp13. After making the following request:
GET /index.jsp HTTP/1.1\x0d\x0aHost: host1.foo.com\x0d\x0aTransfer-Encoding: Chunked\x0d\x0a\x0d\x0aAAAAAAAA\x0d\x0a\x0d\x0a
which returns a 404, we begin to see an intermittent problem where a valid request results in the response of another request being returned instead of the correct one.
We have seen this problem in Tomcat 4.0.6 and 4.1.12. I've looked in the apache
bug database and in the tomcat mailing list and seen similar issues, but most of
those were in tomcat 3 and we do not see the same behavior in tomcat 3.
Just to clarify the behavior that we are seeing is that the request/response
pairs are being mixed up.
There were some fixes recently in the ajp13 Java code, so could you try with a TC 4 build from CVS?
I didn't see such behaviour; Apache 1.3 returns error 400, Bad Request.
I'll mark this as WORKSFORME until someone provides a reproducible test pattern.
I will try the latest tomcat and let you know if I still have this problem. Here
is how we are able to reproduce it:
I have a servlet:
import javax.servlet.http.HttpServlet;
import java.io.PrintWriter;
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import org.apache.log4j.Category;
import java.util.Enumeration;
public class LoadServlet extends HttpServlet
{
public void init(ServletConfig config) throws ServletException
{
Category logger = Category.getRoot();
logger.info("LoadServlet::init(): Loading LoadServlet!");
}
public void service(HttpServletRequest request, HttpServletResponse response)
{
Category logger = Category.getRoot();
StringBuffer url = new StringBuffer();
url.append("LoadServlet::service(): Request URL (" + ((HttpServletRequest) request).getRequestURI() + "?");
Enumeration enum = request.getParameterNames();
while (enum.hasMoreElements())
{
String key = (String) enum.nextElement();
url.append(key + "=" + request.getParameter(key));
}
logger.info(url.toString() + ")");
// Take the request and response to it.
String id = request.getParameter("ID");
logger.info("LoadServlet::service(): Got ID parameter(" + id + ")");
StringBuffer responseString = new StringBuffer().append("<HTML>The ID you passed in is: " + id + "</HTML>\n");
logger.info("LoadServlet::service(): Response (" + responseString + ")");
response.addHeader("ID" , java.net.URLEncoder.encode(id));
try
{
PrintWriter pw = response.getWriter();
pw.write(responseString.toString());
}
catch(IOException e)
{
logger.error(e.getMessage(), e);
}
}
}
running on tomcat that just returns the parameter(ID) in both the header and the
body of the response. I also have a perl script:
#!/usr/local/bin/perl -w
use IO::Socket;
unless (@ARGV > 1) { die "Usage: ./syncsvt.pl: <host> <port>\n";}
($host) = $ARGV[0];
($port) = $ARGV[1];
$remote = IO::Socket::INET->new( Proto => "tcp",
PeerAddr => $host,
PeerPort => "$port",
);
unless ($remote) { die "Cannot connect to $host\n" }
$remote->autoflush(1);
print $remote "GET /index.jsp HTTP/1.1\x0d\x0aHost: host1.foo.com\x0d\x0aTransfer-Encoding: Chunked\x0d\x0a\x0d\x0aAAAAAAAA\x0d\x0a\x0d\x0a";
while ( <$remote> ) { print }
close $remote;
that invokes my chunked encoding request.
To reproduce the problem, I first run the perl script and hit the apache server
on port 80 and get back the 404 response saying index.jsp is not found. Then
with two browsers open, I invoke the LoadServlet servlet with the parameter
ID=<value> through apache using HTTPS 4-5 times on each browser repeatedly.
After a while, one of my requests will return the 404 response for the
chunked-encoding request I made earlier from my perl script. After that, every
once in a while, I will get back the response of one of my previous requests.
A tcpdump of my tomcat server on port 8009 shows that it is writing the correct
response back, but the mod_jk log in debug mode shows that it receieved the
response of a previous request.
Modifying the mod_jk shared lib to close the socket in ajp_done():
// close the socket
jk_close_socket(p->sd, l);
p->sd = -1;
seems to fix this problem because the socket is no longer reused, but I'm still
not sure what causes the problem and closing the socket each time takes away the
advantages of reusing the socket.
The latest version of mod_jk in CVS seems to fix this behavior. I no longer
see the mixed request response pairs during my test. However, my chunked
encoding request now hangs. I have to either ctrl-c the request or wait for
the socket timeout to close the connection.
https://bz.apache.org/bugzilla/show_bug.cgi?id=14282
It's common for a form to contain at least one field that the user must fill in. For example, there isn't any ethical way to determine the user's email address automatically, so you need to ask the user to enter it in a form field. If it's important that you contact the user, then you probably want to set up the form field so that the submission won't go through unless the email field is filled in.
Here are a few things you can do to encourage users to fill in mandatory fields:
Make it clear which fields are mandatory. Many sites place an asterisk before or after a field and include a note such as Fields marked with * are mandatory at the top of the form.
For a radio button group, always set up your form so that one of the <input> tags includes the checked attribute. This ensures that one option will always be selected.
For a selection list, make sure that one of the <option> tags includes the selected attribute.
If you've done all this, then the only thing left to do is to use some JavaScript to check for Text, Textarea, Password, or FileUpload fields that have not been filled in. The next few sections present functions that perform this type of validation.
CAUTION
The functions in the next few sections essentially look for fields that aren't empty or that don't contain only whitespace (such as a tab). Bear in mind, however, that this doesn't mean you're assured of getting valid data. It simply means that you won't get the most trivial data.
First, a Trick: Adding Custom Properties to an Object
In the previous section, you saw that creating a generic display function was much easier in the long run than hard-wiring field names into the script. It appears on the surface that we might not be able to do the same thing for mandatory fields. For a generic function to work, it needs to be able to loop through all the form fields. Because we're only interested in text fields, we can narrow things down by looking for fields with a type property value of text, textarea, or password. But then what? Once we have a text field, how does a generic function know whether the field is mandatory or not?
The secret to solving this problem is that you can create "custom" properties for an object. These are temporary properties that last only as long as the user visits your page, but that's all we need. To set up a custom property, you use the following syntax:
Object.Custom_Property = Initial_Value
To get around our problem, we could assign a property named mandatory to each text object and initialize this property to true for the mandatory field, and to false for the optional fields. You need to do this using statements that run while the page loads but after the form loads. Listing 1 provides an example.
Listing 1: Working with a Custom Property
<html>
<head>
<title>Listing 29.3. Working with a Custom Property</title>
</head>
<body bgcolor="#FFFFFF">
<form>
<b>Custom Property:</b>
<br>
<input type="text" name="text_field">
<p>
<input type="button" value="Toggle Custom Property"
       onClick="toggle_custom_property(this.form.text_field)">
</form>
<script language="JavaScript" type="text/javascript">
<!--
document.forms[0].text_field.mandatory = true
document.forms[0].text_field.value = true

function toggle_custom_property(current_field) {

    // Get the current value of the mandatory property
    var current_value = current_field.mandatory

    // Set the property to the opposite value
    current_field.mandatory = !current_value

    // Display the new value in the field
    current_field.value = current_field.mandatory
}
//-->
</script>
</body>
</html>
Notice, first of all, the first two statements in the <script> block. These execute after the form has been loaded by the browser. The first one assigns a custom property named mandatory to the field named text_field and sets this property to true:
document.forms[0].text_field.mandatory = true
The Button object runs the toggle_custom_property() function and sends the Text object as the argument. In this function, the value of the mandatory property is stored in current_value, the opposite of this value is stored in the mandatory property, and then this new value is displayed in the text box.
What this means is that you can set up all your form's text fields with the mandatory property (or whatever you prefer to call it) and set it to true for those fields that the user must fill in, and set it to false for optional fields. With that done, your validation loop would be set up like this (assuming that current_form is a reference to the Form object):
for (counter = 0; counter < current_form.length; counter++) {
    if (current_form[counter].type == "text" && current_form[counter].mandatory) {
        // Make sure current_form[counter] has been filled in.
    }
}
Note, too, that my if() test ignores Textarea and Password objects to make the code readable. In practice, you'd include these objects in the if() statement (as I do in Listing 2).
Checking for Empty Fields
The simplest validation is to see whether a mandatory field is empty. Here's a function that returns true if the string_value argument is either the empty string ("") or null:
function its_empty(string_value) {

    // Check for the empty string and null
    if (string_value == "" || string_value == null) {
        // If either, it's empty so return true
        return true
    }
    // Otherwise, it's not empty so return false
    return false
}
TIP
Here's a more efficient version of the its_empty() function:
function its_empty(string_value) {
    return (string_value == "" || string_value == null)
}
Listing 2 shows how you'd incorporate this function into a validate() function that's called by a button object.
Listing 2: Checking for an Empty Field
<script language="JavaScript" type="text/javascript">
<!--
function validate(current_form) {

    var missing_fields = new Array()
    var total_missing = 0

    // Loop through all the form elements
    for (counter = 0; counter < current_form.length; counter++) {

        // Is this a visible text field that's mandatory?
        if ((current_form[counter].type == "text" ||
             current_form[counter].type == "textarea" ||
             current_form[counter].type == "password") &&
             current_form[counter].mandatory) {

            // Is it empty?
            if (its_empty(current_form[counter].value)) {

                // If so, add the field to the array of missing fields
                missing_fields[total_missing] = current_form[counter]
                total_missing++
            }
        }
    }

    // Were there any fields missing?
    if (total_missing > 0) {

        // Start the message
        var missing_message = "Sorry, you must fill in the following " +
            (total_missing == 1 ? " field:" : " fields:") +
            "\n______________________________\n\n"

        // Loop through the missing fields
        for (counter = 0; counter < missing_fields.length; counter++) {
            missing_message += missing_fields[counter].name + "\n"
        }

        // Finish up and display the message
        missing_message += "\n______________________________\n\n" +
            "Please fill in these fields and then resubmit the form."
        alert(missing_message)

        // For emphasis, put the focus on the first missing field
        missing_fields[0].focus()
    }
    else {
        // Otherwise, go ahead and submit
        current_form.submit()
    }
}

function its_empty(string_value) {

    // Check for the empty string and null
    if (string_value == "" || string_value == null) {
        // If either, it's empty so return true
        return true
    }
    // Otherwise, it's not empty so return false
    return false
}
//-->
</script>
The validate() function begins by setting up an array named missing_fields that will be used to hold each mandatory field that hasn't been filled in. The variable total_missing tracks the number of missing fields. Note, as well, that the two text fields are set up as mandatory using the following code that appears after the form:
<script language="JavaScript" type="text/javascript">
<!--
// Make the two Text fields mandatory
document.forms[0].Your_Name.mandatory = true
document.forms[0].Your_Email.mandatory = true
//-->
</script>
The function then loops through the form fields looking for those with the type property of text, textarea, or password, and with the custom mandatory property set to true. If it finds such a field, the field's value is sent to the its_empty() function. If that function returns true, the field is added to the missing_fields array and total_missing is incremented.
When the loop is done, the function checks the value of total_missing. If it's greater than 0, a message to the user is initialized in the missing_message string and a for() loop adds the name of each missing field to the message. The message is then displayed to the user, and the focus is moved to the first missing field.
Checking for Fields That Contain Only Whitespace Characters
A user could try to get around your its_empty() function by entering one or more spaces or by pressing Enter within a text area. To close these loopholes, use the its_whitespace() function in Listing 3.
Listing 3: Checking for Whitespace-Only Fields
function its_whitespace(string_value) {

    // These are the whitespace characters
    var whitespace = " \n\r\t"

    // Run through each character in the string
    for (var counter = 0; counter < string_value.length; counter++) {

        // Get the current character
        current_char = string_value.charAt(counter)

        // If it's not in the whitespace characters string,
        // return false because we found a non-whitespace character
        if (whitespace.indexOf(current_char) == -1) {
            return false
        }
    }
    // Otherwise, the string has nothing but
    // whitespace characters, so return true
    return true
}
The function begins by initializing the whitespace variable to hold a string containing all the possible whitespace characters: space, newline (\n), carriage return (\r), and tab (\t). Then a for() loop runs through each character in the string_value argument. With each pass, the current character is extracted using the charAt() method and stored in the current_char variable. Then the indexOf() method is used to see if current_char is in whitespace. If it's not, indexOf() returns -1, which means we've found a non-whitespace character, so the function returns false. If we make it out of the loop, then it means that the string contained only whitespace characters, so the function returns true.
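As a point of comparison, many languages offer a built-in that collapses this loop; the Python sketch below (an illustration, not part of the article's JavaScript) performs the same whitespace-only test in one line:

```python
def its_whitespace(string_value):
    # strip() removes spaces, tabs, carriage returns and newlines;
    # if nothing is left, the string held only whitespace.
    return string_value.strip() == ""

print(its_whitespace(" \n\r\t"))   # True
print(its_whitespace(" a "))       # False
```

Note that, like the loop version, this treats the empty string as whitespace-only, which is why the its_empty() check is still needed alongside it.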
http://www.informit.com/articles/article.aspx?p=23227
A CFFI binding for Hoedown (version 3), a markdown parsing library.
Documentation can be found at:
Installation
Misaka has been tested on CPython 2.7, 3.2, 3.4, 3.5, 3.6, 3.7 and PyPy 2.7 and 3.5. It needs CFFI 1.0 or newer; because of this, it will not work on PyPy 2.5 and older.
With pip:
pip install misaka
Or manually:
python setup.py install
Example
Very simple example:
import misaka as m
print(m.html('some other text'))
Or:
from misaka import Markdown, HtmlRenderer

rndr = HtmlRenderer()
md = Markdown(rndr)
print(md('some text'))
So I made an issue out of it:
Gerrit

On 20.03.2014 22:54, Imsieke, Gerrit, le-tex wrote:
Today I noticed that I could actually build an index of all XSLT, XProc, Relax NG and Schematron files on my hard disk (3316 files). I couldn’t do that 2 years ago because the maximum number of distinct namespaces in a DB was limited to 256 or so. Thanks, BaseX team, for lifting this restriction! This has already proved really useful: I knew that I wrote an XProc step that conditionally invoked a step whose local name I remembered. The simple XPath expression collection('home')//*:declare-step[*:choose//*:paths] helped me identify the two relevant files. Since we do a lot of development in XML-syntax languages, an XML database is really really good for structured searches on these files. I bet you XQuery devs still use grep to query your code ;) Gerrit
_______________________________________________ BaseX-Talk mailing list BaseX-Talk@mailman.uni-konstanz.de
In this article, we will go through a less talked-about topic in the ASP.NET Core community. We will discuss globalization and localization in an ASP.NET Core application in detail and go through various approaches to changing the culture of the application per request. We will also do some advanced configuration where we store the selected language in a cookie in the client browser. You can find the complete source code of this implementation on my GitHub.
While starting to build an ASP.NET Core application, or any other application, there are a few considerations to take care of. One of them is: "Will our application be multilingual?" In my view, it is important to make your application multilingual if you cannot anticipate its future. Down the line, there can be an instance where you have already completed the application, but it suddenly needs to be multilingual as well. Trust me, you do not want to be in that place.
To be on the safer side, it's good to build your application with multi-language support right from the beginning, yeah?
Table of Contents
What we’ll Learn?
In brief, here are the topics that we will cover in this article.
- Basics of Globalization & Localization
- Building a MultiLingual ASP.NET Core Application (Web-API and MVC)
Here is a small demo of what we will build.
What is Globalization?
Globalization is the process by which developers make a product support multiple languages. This mostly involves improving the backend of web applications, which ultimately contributes to the SEO of the website. This includes meta-data, file names, labels, and even the website URLs.
What is Localization?
Localization is the process of translating content to meet the demands of a particular culture or language. This is not limited to translating text; it also covers numbers, date and time formats, currency symbols, and so on.
Read the Official Documentation from Microsoft here.
Building a Multi-Lingual ASP.NET Core Application
We will build an ASP.NET Core MVC application and implement localization in it. I will be using Visual Studio 2019 as my IDE. For a better demonstration, we will work with both views and API controllers: in the views we will localize the UI content, and in the API part we will localize the messages returned by the API.
We will be working with resource files to store the translations for various languages. Let's create a new ASP.NET Core 3.1 MVC application.
PS, Selecting Authentication is Optional.
Registering the Required Services and Middleware
Let's register the services required for localization in our ASP.NET Core application. Navigate to the ConfigureServices method in Startup.cs.
services.AddLocalization(options => options.ResourcesPath = "Resources");
services.AddMvc()
    .AddViewLocalization(Microsoft.AspNetCore.Mvc.Razor.LanguageViewLocationExpanderFormat.Suffix)
    .AddDataAnnotationsLocalization();
Now, let’s add the Localization Middleware to the HTTP Pipeline. Navigate to Configure Method of Startup.cs and add in the following.
var cultures = new List<CultureInfo>
{
    new CultureInfo("en"),
    new CultureInfo("fr")
};
app.UseRequestLocalization(options =>
{
    options.DefaultRequestCulture = new Microsoft.AspNetCore.Localization.RequestCulture("en");
    options.SupportedCultures = cultures;
    options.SupportedUICultures = cultures;
});
Line 1 – Here, we specify the cultures that our application will support: for now, English and French. You could make these more specific with values like "en-US" and "fr-FR". This collection of supported cultures is reused in the lines that follow.
Line 5 – Every time your application gets a request, how will it know which culture to use? Here is where we define the default.
Line 6 – If the request contains no culture information, or a culture that we don't yet support, the user will be served English resources.
Line 7 – This will indicate how we format numbers, currencies etc.
Line 8 – Used when we retrieve resources.
Localizing the API Endpoint.
Once we have registered the required services and middleware, let's use localization in an API endpoint. In this section, we will create a simple API endpoint that generates a GUID with a message, and try to switch the language between English and French. We will also go through 2 of the 3 ways to switch locales for an incoming request.
Create a New Folder in the Controllers and name it API. Here, add a new Empty API Controller and name it LanguageController.
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;
using System;
using System.Threading.Tasks;
namespace Multilingual.ASPNETCore.Controllers.API
{
    [Route("api/[controller]")]
    [ApiController]
    public class LanguageController : ControllerBase
    {
        private readonly IStringLocalizer<LanguageController> _localizer;
        public LanguageController(IStringLocalizer<LanguageController> localizer)
        {
            _localizer = localizer;
        }
        [HttpGet]
        public async Task<IActionResult> Get()
        {
            var guid = Guid.NewGuid();
            return Ok(_localizer["RandomGUID", guid.ToString()].Value);
        }
    }
}
Line 11 – IStringLocalizer is used to fetch a localized string based on a key. Here we inject this service into the constructor of the controller.
Line 19 – Random GUID Generation.
Line 20 – We use the localizer object to get the message stored under the key "RandomGUID" in our resource file. You can see that we also pass the GUID as a format argument. But we have not yet created a resource file, right?
Resource File Naming and Folder Convention.
This is one of the cleaner approaches from Microsoft. Ideally, resources for each view, API controller, and model are separated using a folder structure that mirrors the code.
In our case, the Language Controller is located at Controllers/API/LanguageController.cs. Remember the Resources folder we created in the beginning? Here is where you would place the resources. Create a similar folder structure within the Resources folder.
Understand the convention? We need resources for the Language Controller, hence we create both en and fr resource files for LanguageController, each holding the corresponding strings.
RandomGUID is the key that we specified in the API Controller right? Let’s add values to this particular key in both of the Resource Files.
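The screenshots from the original post are not reproduced here, but the resulting layout and sample values look roughly like this (the French wording is my own illustrative translation):

```
Resources/
└── Controllers/
    └── API/
        ├── LanguageController.en.resx    RandomGUID = "Here is your GUID: {0}"
        └── LanguageController.fr.resx    RandomGUID = "Voici votre GUID : {0}"
```

The {0} placeholder is where the GUID passed to the localizer is formatted in.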
Get the point, yeah? This seems a cleaner way to separate resources based on the folder structure. What's your opinion about this?
Now, let's build and run our application. I will use Postman to test this endpoint, as I will change certain request headers too.
First, send a GET request to localhost:xxx/api/language.
You can see that our implementation works right out of the box. As mentioned earlier, if the request doesn't carry information about the requested culture, the response falls back to our default culture, English. Now, let's try a request with the fr culture. There are 3 ways to do this.
- Specify the Culture in the Request Query String.
- Specify the Culture in Request Header,
- and store the preference culture in a cookie.
In this section, we will cover the first 2 ways to request for a particular culture.
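For context, these three options correspond to ASP.NET Core's built-in RequestCultureProviders, which by default are consulted in this order (first match wins); the sample values are illustrative:

```
1. QueryStringRequestCultureProvider            e.g. ?culture=fr&ui-culture=fr
2. CookieRequestCultureProvider                 e.g. the culture cookie
3. AcceptLanguageHeaderRequestCultureProvider   e.g. Accept-Language: fr
```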
Query String Based Culture Request
As you might have already guessed, in this approach we will just need to append a specific query to our request. Switch back to Postman and send a GET request to localhost:xxxx/api/Language?culture=fr
Easy, yeah? Like we talked about earlier, let's change the culture to 'ar' and check the response.
As expected, we get a response with our default selected Culture, English.
Request Header Based Culture Request
Now, let's add the culture info to the header of the request. It's quite simple to do in Postman: click on Headers, add the key "Accept-Language", and set the value to 'fr'. You will receive the expected response.
Now that we have learned how to localize API responses, let's implement the same in the views. Although the concept is the same, there are slight variations that we will go through.
Localizing the Views
In this section, I will localize the home page view (Views/Home/Index.cshtml). To keep things simple, let's just translate the welcome and caption texts of this view.
Navigate to Views/Home/Index.cshtml and modify the following.
@inject Microsoft.AspNetCore.Mvc.Localization.IViewLocalizer localizer
@{
    ViewData["Title"] = "Home Page";
}
<div class="text-center">
    <h1 class="display-4">@localizer["Welcome"]</h1>
    <p>@localizer["LearnMore"]</p>
</div>
Line 1 – Previously, for the controller, we used IStringLocalizer. Here in the views, in order to support HTML content too, we use the IViewLocalizer interface.
Line 7,8 – Like before, we use the localizer object and specify the key of the resource we need.
Finally, let’s create the Resource. Remember the Folder convention that we used earlier? Make something similar here too.
Pretty self-explanatory, I guess. The only difference is that here we use HTML content too. Let's run the application and see the home page.
The English Resources work. Let’s add a query string here to change the culture to French.
Store the Culture Preference in a Cookie
Adding query strings or changing request headers may look convenient while testing the application, but in practical / production scenarios they are a big no-no. It would be better to store the preferred culture somewhere on the client machine, preferably in the client browser as a cookie, right?
So, here is how it will work. In our shared layout, we will introduce a dropdown that contains a list of available cultures. When the user changes the dropdown value, it triggers a controller action that stores the selected culture in a cookie, which is then reflected in our UI.
Before that, we need to change our service registrations, since we need the request cultures available through IOptions. Navigate to ConfigureServices in the Startup class and add the following.
services.Configure<RequestLocalizationOptions>(options =>
{
    var cultures = new List<CultureInfo>
    {
        new CultureInfo("en"),
        new CultureInfo("fr")
    };
    options.DefaultRequestCulture = new Microsoft.AspNetCore.Localization.RequestCulture("en");
    options.SupportedCultures = cultures;
    options.SupportedUICultures = cultures;
});
Then, in the Configure method, comment out all that we had added earlier and add this new line of code.
//var cultures = new List<CultureInfo> {
//    new CultureInfo("en"),
//    new CultureInfo("fr")
//};
//app.UseRequestLocalization(options =>
//{
//    options.DefaultRequestCulture = new Microsoft.AspNetCore.Localization.RequestCulture("en");
//    options.SupportedCultures = cultures;
//    options.SupportedUICultures = cultures;
//});
app.UseRequestLocalization(app.ApplicationServices.GetRequiredService<IOptions<RequestLocalizationOptions>>().Value);
Next, let's create a partial view to hold the combobox (select) component. I referred to Microsoft's documentation for this. Under the Views/Shared folder, add a new view and name it _CulturePartial.cshtml.
@using Microsoft.AspNetCore.Builder
@using Microsoft.AspNetCore.Http.Features
@using Microsoft.AspNetCore.Localization
@using Microsoft.AspNetCore.Mvc.Localization
@using Microsoft.Extensions.Options
@inject IViewLocalizer Localizer
@inject IOptions<RequestLocalizationOptions> LocOptions
@{
    var requestCulture = Context.Features.Get<IRequestCultureFeature>();
    var cultureItems = LocOptions.Value.SupportedUICultures
        .Select(c => new SelectListItem { Value = c.Name, Text = c.Name })
        .ToList();
    var returnUrl = string.IsNullOrEmpty(Context.Request.Path) ? "~/" : $"~{Context.Request.Path.Value}{Context.Request.QueryString}";
}
<div title="@Localizer["Request culture provider:"] @requestCulture?.Provider?.GetType().Name">
    <form id="selectLanguage" asp-controller="Culture" asp-action="SetCulture" asp-route-returnUrl="@returnUrl" method="post">
        <select name="culture" onchange="this.form.submit();" asp-items="cultureItems" asp-for="@requestCulture.RequestCulture.UICulture.Name">
        </select>
    </form>
</div>
Lines 6-7 – Here, we inject the view localizer and the IOptions of the request localization options.
Line 16 – We are adding a form that triggers the SetCulture method of the CultureController (we will add this controller shortly).
Now that we have created the partial view, let’s add it to the Navigation bar. Open up the Views/Shared/_LoginPartial.cshtml and add this highlighted code.
<li class="nav-item">
    @await Html.PartialAsync("_CulturePartial")
</li>
We have not added the controller yet, but let's just run the application to check whether the select component renders within our layout.
With that working, let's add the controller and actions. We will also add resources for this partial view so that 'en' appears as English, 'fr' as French, and so on. Get the point, yeah?
First, I will add the required Resources.
We have to make a slight change in the Partial View also.
@{
    var requestCulture = Context.Features.Get<IRequestCultureFeature>();
    var cultureItems = LocOptions.Value.SupportedUICultures
        .Select(c => new SelectListItem { Value = c.Name, Text = Localizer.GetString(c.Name) })
        .ToList();
    var returnUrl = string.IsNullOrEmpty(Context.Request.Path) ? "~/" : $"~{Context.Request.Path.Value}{Context.Request.QueryString}";
}
Here, we use the localizer object to get the translated string from the corresponding resource file. Next, add a new controller, CultureController.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Localization;
using Microsoft.AspNetCore.Mvc;
using System;
namespace Multilingual.ASPNETCore.Controllers
{
    public class CultureController : Controller
    {
        [HttpPost]
        public IActionResult SetCulture(string culture, string returnUrl)
        {
            Response.Cookies.Append(
                CookieRequestCultureProvider.DefaultCookieName,
                CookieRequestCultureProvider.MakeCookieValue(new RequestCulture(culture)),
                new CookieOptions { Expires = DateTimeOffset.UtcNow.AddYears(1) });
            return LocalRedirect(returnUrl);
        }
    }
}
Line 10 – Here we add a new POST action that is invoked when the combobox selection changes.
Lines 12-15 – We create a new cookie with the selected culture information and set the cookie to expire in 1 year.
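As a point of reference not spelled out in the post, the cookie written by CookieRequestCultureProvider uses a well-known default name and a pipe-delimited value; the culture values below are only examples:

```
Name:  .AspNetCore.Culture
Value: c=fr|uic=fr        (c = culture, uic = UI culture)
```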
Let’s Run the Application.
You can see that our implementation works well. Quite easy to implement Globalization and Localization in ASP.NET Core, right? We will finish this article here.
If you found this article helpful, consider supporting: Buy me a coffee.
Summary – Globalization and Localization in ASP.NET Core
We learned all about globalization and localization of an ASP.NET Core application, following several recommended approaches. We covered topics like IStringLocalizer, IViewLocalizer, query-based culture, header-based culture, storing the culture in a cookie, and creating a select component to help the user pick their preferred culture.
Hey, great article… but how will that work if you have the resource files within their own class library?
Hi, Thanks.
You could have a new assembly that serves the resources only. And use this assembly in the registration to load.
Or, probably have the library return in terms of keys, so that you can translate in the asp.net core app itself.
Thanks and regards
Hey, thanks for the quick response. I think this is currently really tricky and it is also mentioned as an issue/troubleshoot on the official documentation (–class-libraries-issues) …
I think the only problem would be in the registration process within the service collection where you register the ResourcesPath… I have been looking for a solution for a while but haven't found one yet.
I currently have an ASP.NET Core 3.1 API project which implements the CQRS pattern with MediatR, and therefore I would like to place all resources within their own library… don't know if this will be possible… maybe with this ResourceLocationAttribute somehow…
Best Regards
Nice & helpful ! Thanks !
One remark tho, I think you went a bit far with the translation (maybe for the purpose of the demo); the language picker should not be globalized but rather each language should appear as the natives write it. If one doesn’t know either English or the Latin characters, hitting the default ‘en’ page with the ‘en’ translated language picker would transform into a guessing game until they manage to get right the “hieroglyph sequence” that means their language 🙂
Hi, Thanks for the feedback.
Haha, yeah. I would say this is more of a tutorial that shows the capabilities of a feature. Implementation totally depends on the developer / client.
Thanks and Regards
Great & so helpful !
But I have a question, how can I create a new cookie?
Please, I am waiting for your answer.
Workssssssssssssss 100000000000%
Hi Mukesh, great article.
I have one question: is ResourcesPath = "Resources" required, or can I have ResourcesPath = "LocalizationResources"? Asking because "Resources" is causing a conflict.
Thanks for the guide, Mukesh.
Could you explain how to localize models – i.e. store all of the properties of the model in multiple languages and display the correct version for the current language?
For example, you have an online shop, and you want to be able to display product details in English and French. What’s the best way to handle this (particularly with regard to data structure?)
I have had this card for a little over 2 years. Now I want to upgrade to something better because I not only game, but hook my computer up to my Sony HDTV for movies and such.
My system Specs:
Intel core i7 960 (3.85 GHz)
18GB RAM (corsair vengeance? forgot)
512GB SSD (OCZ vertex 4)
EVGA GTX 560 TI Superclocked (want to upgrade?)
650w PSU
I am looking at maybe a EVGA GTX 760 Superclocked or around that price range of $250. I do not want to SLI my 560 because I don't think my PSU can handle it and heat issues.
Any suggestions or advice? Thank you guys in advance.

The GTX 760 is a good choice. Another option in that price range would be an HD 7950.
Btw playing movies through a TV doesn't warrant an upgrade since it's not GPU intensive. You should see a nice performance bump when playing graphically demanding games though.
I know quite a few people are looking for a new video card, and the EVGA Superclocked GTX 660Ti 3GB card is an absolute beast. I normally don't post anything like this, but I feel this card for less than $300 is a steal. If it doesn't belong here, or if it is against forum rules, I apologize, and feel free to "moderate" it.
Here it is: Newegg.com - EVGA SuperClocked+ 03G-P4-3663-KR GeForce GTX 660 Ti 3GB 192-bit GDDR5 PCI Express 3.0 x16 HDCP Ready SLI Support Video Card
If you figure in the 15% off and the $20 MIR, that comes to about $268.99. Definitely the cheapest this card has ever been!
That is a real good price! You gonna grab one Kelly?
PSU
If you got a different video card and a different PSU, chances are there is something else wrong unless you got very unlucky and got another bad part from one of the two things swapped out.
My guess would be one of the following just from one you said:
The new PSU is bad as well
The CPU is overheating and shutting off
GPU could be overheating and shutting off (Though this is unlikely)
Those are just a few things to check, I would go into the bios and see what temp the CPU is running at while in there and decide if its running above what it should be. If that's not the issue I would try returning the PSU for a different one and see if that fixes the issue. Though the fact that the machine shuts off with 3 different PSU's points to a different part being an issue.
i just got it today (waiting on my PSU, comes Monday). I was just wondering if that card can do Crysis on max settings without any lag using DX10, and what about DX9? Also, can I OC this card, or does Superclocked mean it's already OC'ed to the max? Thanks.
If you have benchmarks for this card that would help me; can't seem to find them.
my system
Intel Core 2 Quad
EVGA 9800 GTX SuperClocked
2GB Ram
Gigabyte Motherboard
320GB HDD
I can't find any reviews comparing the g80 gtx superclocked to my current sli set, Just curious to know how it would compare. If anyone can find some good links with fps on a current game, like FEAR or something like that I'd appreciate. Or just any general 2 cents would do, thx.
/input come on people
If SLI GeForce 7900 GTX 512MB cards can't beat a single 8800 GTX then your card (Slightly under the 7900 GTX) can't.
I noticed after I installed beta driver 310.70 that temps at idle were way up, about 41c, so I removed the card and cleaned the fan and heatsink with compressed air (it wasn't very dirty). Still high temps afterwards, started looking at settings, and noticed that the card was no longer idling down (staying at 981 MHz core and 2257 MHz memory all the time). That explains the high temps, but how do I fix this problem. I tried a completely clean install of the stable driver (306.97) per a guide on the proper way to update Nvidia drivers, but the problem remains. Checked all the BIOS settings, multiple monitor is disabled, etc. Checked all settings in Nvidia Control Panel. Any help will be much appreciated! BTW I am running Windows 8 Pro x64
Just wondering what would be the best route to go, without spending a fortune, to increase FPS in X-Plane. I have never used SLI and am wondering if a second identical 770 in SLI will give me a good increase in FPS.
I am going to buy a nVidia EVGA GeForce GTX 460 SuperClocked from tech4u.com. Although Fumz has kind of said that my computer is compatible for the card, I just want to make sure. It's a lot of money and I'd hate it if I find out i doesn't fit in my computer after I've bought it. So anyway, will this graphics card work with my computer?
Thanks in advance,
Jaidyn
The first problem will be making sure it will fit... so measure! Second, that card requires two PCI-E six-pin power connectors... Your specs on the PSU just say 550 W... Read Punkster's tutorial on PSUs: Planning to upgrade your Graphics Card? Read First...
While not all-inclusive, it will tell you the brands to avoid... gotta be careful: Coolmax CUL-750B 750 W Power Supply Review | Hardware Secrets
A 750 W unit that burned at 450 W....
I just bought an EVGA GeForce GTX 260 Core 216 Superclocked....the 896-P3-1257-AR with the 626 Core clocks speed...etc. Anyway I was wondering if it's worth using EVGA's Step Up program to get a vanilla GTX 275?
I know the 275 has 240 shader processors vs 216 with the newer 260 but is it worth the upgrade? Also, I'm looking at the 896MB RAM too, not the 1792MB version (although I'd like to have it but can't afford it).
Thanks!
Well i have just ordered a few new items for my pc, to allow me to play cod4, crysis etc
on high details. I was wondering, as my motherboard has an inbuilt graphics card, whether I would have to disable it somehow, and if so, can anyone help me with that? Or whether the new card would just go straight in and be used instead of the inbuilt one. If anyone could please help with this matter I would much appreciate it! My card's due Wed-Fri. Thanks, MoiRa
So as the title says, I'm buying an EVGA GeForce GTX 560 Ti DS Superclocked (http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=…&CatId=…). I'm just not sure what power supply to get. I'm thinking about this one here (http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=…&CatId=…); the only problem is the card has PCI-E six-pin plugs on it. Will that power supply be fine? The card comes with power adapters (the specific name escapes me right now; the site shows pictures with the card). Also, I chose to go with a premade, store-bought PC from Gateway this time. Last time I bought pieces and had a friend build it for me. I had to get it at Best Buy for certain reasons this time. Anyways, I have a Gateway FX…e, so that's another concern: I don't know if that power supply has enough connections for my computer. Any help would be appreciated, thanks.
The PSU is one of the most important parrts of your computer. Don't go cheap. Seasonic or Corsair make quality PSU's. 600w to 650w should be sufficient for your computer if it is a quality PSU.
I AM UPGRADING FROM A RADEON 5700 1G TO A
EVGA SuperClocked 03G-P4-2666-KR GeForce GTX 660 3GB 192-bit GDDR5 PCI Express 3.0 x16 HDCP
is this a good enough step up to play Battlefield 4 at a high level?
low on cash but this card I found seems to have what I need plus maybe a few yrs in it before a upgrade is needed. any input would be appreciated.
reason I ask is I have a day to cancel and get something else but I am not real good with computers. I am running a phenom quad core with 4 gigs of ram on win 7 ultimate and a 600 watt power supply and 1t hard drive
OH AND MY MOTHERBOARD IS A GA-MA785GM-US2H
This should answer your question. The GTX 660 gets 30 FPS at ultra quality at 1680x1050. Lowering the quality settings to high raised the frame rate to nearly 50 FPS.
Final Thought:
...
If you can live with high settings however, you'll boost your frame rates by around 60%, sometimes as much as 80%, which is enough to make the GTX 660 and HD 7850 practical options. ...
Fortunately, there's less to consider when choosing a processor -- just make sure you have a quad-core chip. It doesn't even have to be a new one, as even the Core i7-920 and Phenom II X4 980 took care of business. ...Click to expand...
Another improvement would be to get 8GB memory. In my opinion 4GB is boarder-line to being sufficient.
Hello, I have a question, and I'd like a sincere, honest answer or opinion. If I am going from my EVGA GeForce GT … MB DDR PCI-E overclocking video card (already years old now, since the release of the processor) to the newer EVGA GeForce GT 430 1 GB DDR3 PCI-E graphics card (…G-P…-…-LR, released in October), will it result in better graphics than what I have now? I know the GT 430 is more of EVGA's 'economy' line, but I hear that it is really not bad for the low price. I need to know if it will be a good fit. Will I actually be ahead?

Okay, my video/graphics habits and their implications (I know you're asking yourself):

A. I have Windows Professional …-bit, and … GB of RAM.
B. Basically, I am not a true-blood gamer. To be honest, I rarely play games beyond the built-in and downloaded Microsoft games in Windows, at times.
C. I am using the DVI connection cable for an older …-inch Samsung SyncMaster …B LCD monitor.
D. I do a lot of photo and video editing with the latest Corel and Adobe professional products, plus other like products.
E. I use the "highest screen" resolution AT ALL TIMES (…x…). I never switch out to a lower resolution.

Okay, any issues you see here I may have with my habits? Based on the EVGA GeForce GT 430 line compared to my current GeForce GT line, where am I ahead, and where am I regressing? Please reply. Thank you.
The only things the GT 430 really has over your current card are DirectX 11 and much lower power consumption,
Video Card Comparison - GPUReview.com
for everything else it would be a downgrade. There aren't many reviews that have both cards in it but this one does (for most of the benchmarks),
ZOTAC GeForce GT 430 1 GB Review - Page 1/32 | techPowerUp
I tried to install an old GTX 760 in my computer. When I turn on the computer, the mouse and keyboard work, but there is a black screen. I updated the BIOS to the newest version, which is A12, and it still didn't work.
my product is DELL XPS 8500
Hi guys,
I've had my Dell Inspiron 660 since late 2012. Really happy with it. This model has the big casing, with i5 3330 and 8GB RAM.
I've never altered components or over clocked anything. I only switched out the OEM GT620 in 2014 for a GTX 750. No issues with my PC whatsoever.
My question is simple. I'm looking to upgrade my GPU, and need a PSU upgrade as such. Will there be any issues if I were to upgrade my PSU with an EVGA B1 600W, and get the GTX 960 with it?
Do I need to drill holes for the PSU, make adjustments, update BIOS settings, etc.? Is it as simple as taking the old PSU out and putting the new one in, or must I do some other stuff as well?
Thanks! Appreciate any and all responses.
Hello Niki,
In looking at the 660 Owner Manual - - it seems the case is a standard ATX design - meaning that the PSU you are looking at should be fine. In some rare cases, depending on the PSU design, some of the screw holes might be slightly off but you'll always be able to line up two screws, at least (which is really all you need).
Generally speaking, BIOS updates on a desktop are only needed if Dell identifies a problem they need to address or to add support with newer CPUs.
The only real tricky part about replacing the PSU is cable management. Trace where your current cables are going to & once you've got the new PSU in, go through and wrap up the unused cables using a rubber band or large cable tie to keep them clear of blocking air flow.
And don't be like me and forget to connect the 6-pin rail to the GTX960 after you get it installed! (I have a 960 myself & I'm quite happy with it.)
Hope this helps!
After many trials and errors with the fans, I finally figured out why my PC is still so loud: the NB chipset cooler on this motherboard (eVGA NF43) is very loud for such a tiny fan. So I decided to replace it with passive cooling, such as the Zalman ZM-NB47J.
Question is: I have a 6600GT right by the chipset cooler. Will the Zalman fit in there? I know it won't with the 7800 GTs, but I'm not sure how much longer the PCBs are on those cards. Maybe someone can help me =]
After looking at my 6600GT, the picture of the motherboard, and the picture of the cooler, I don't think it will work. You use the blue PCI-E slot, right?
I guess there are several options. If your chipset stays plenty cool, you could try slowing the fan, either using a software program or a manual fan speed controller. Or I've seen people with the same heatsink and a similar board layout cut some of the spokes (or bend them) so that it wouldn't interfere with the card. There might also be replacements with a quieter fan available.
I am running windows Home Premium 64bit. I just did a fresh install and when I ran the WEI the computer rebooted. I can run it with out being in SLI mode, but as soon as I enable and refresh the computer reboots. I am wondering if it is caused by the power supply. I have a rosewill RP650 2 psu. Newegg.com - Rosewill RP650-2 650W ATX12V v2.3 & EPS12V v2.91 SLI Ready CrossFire Ready Active PFC Power Supply
What are your thoughts?? I have the latest drivers and again this is a new OS install.
Hi and welcome to seven forums
How recent is the SLI? Have you just added the 2nd card?
Could you tell us the rest of your system specs or fill them in on your profile.
Do both cards work fine if you just have 1 installed?
Sounds like it could be a power issue if so.
Cheers
Paul.
I ordered a 7800GTX "KO" with EVGA ACS3 cooling (bigger cover for the heatsink fan and card). I'll also have it, as well as the CPU, on liquid cooling. I'm hoping to get near 2.7-2.8; a lot of people with the same core are doing 2.7-3.0 on the 64 X2 4800+. I'm not going to OC the already-higher 490/1300.
Hey all,
Does anybody here have a EVGA GTX260 core 216 SUPERCLOCKED edition with an Asus p6t deluxe mobo?
If so, what BIOS are you using? And in your BIOS, did you change any settings (voltage especially) that deal with your GPU?
Thanks!
Sal
Sorry to be off topic, but does that motherboard work fine with the graphics card, and does it boot up fine?
Just my friend bought an XFX x58i motherboard, and he had no output display with his 8800 Ultra and 9600GT. After trying to fix it for several hours, we narrowed it down to the motherboard being dead, and he is sending it back and getting the Asus P6T motherboard.
Hi guys, got a hard question to ask, haha. I've been having a problem with my computer restarting, but lately my graphics have been falling apart as well. When I started my computer this morning I saw rows and rows of dots down the screen (8800 Ultra SuperClocked), and when I get into Windows it's running in somewhat of a safe mode, with virtually no GPU. I've tried updating the driver, flashing the BIOS on the GPU, and changing the PCI-E slot for it; nothing seems to help. My guess is that I had an overheating problem that got worse and worse until it finally cooked itself. I think the writing is on the wall, but I just need to hear it from you guys: is my GPU now an expensive paperweight, or is there something else I can try to fix it? A quick answer would be good; I'm moving to a different city tomorrow and need this sorted out. PS: I've been using an old GPU and my computer works fine.
What you are describing is artifacting. It means either the card's memory is failing OR you have it overclocked.
If you have not OCed the card, just rma it back to the mfg. Many video card mfgs have a lifetime warranty; others are 3yrs.
I seem to be experiencing really high GPU temps when playing Brink. I have a GTX570 Superclocked with two 24" monitors. Case and card are both free of any dust buildup. The card is only 3 months old; it idles at around 50C, and the max temp I've seen even running Crysis 2 is only 65C/68C. However, when playing Brink (1980*1050, everything on medium) my GPU temp went to 85C! Fan speed was set manually to 85% (can't go any higher using EVGA Precision). Can anyone tell me if this is safe/OK? My specs: E8500, Asus P5N32-E SLI, 4GB DDR2 667MHz, Corsair 650W PSU, Antec 900 with 4x 120mm and 1x 200mm.
Seems a little warm on idle but that may be because your house is warm.
Other than that 85 sounds about right at 100% gpu load.
I would however turn your fan speed back to auto, and monitor fan speed in game to see if it rises above 85% speed itself, and also your temps.
Which one of these cards is a better buy?
links
I'm on a budget, so it's about all I can afford.
Ok,I am building a PC and I would like to buy EVGA GTX 650 Superclocked 2GB GDDR5.
I got one 120 mm fan and I can afford myself one more(both 1200 rpm)
Will that cut it?
You don't need to if you don't want to.
I've just purchased my fiancée the EVGA GeForce 9800 GT superclocked edition as an early Christmas gift. We've just finished installing it and started up some of our games. The main game that we play is World of Warcraft, which with our old card (a Radeon X…) got around … FPS in big cities. Now that we've installed the new card, it's only getting around … FPS in the big cities. I don't know the make of the power supply, but I do know that it's …W; if this information is needed we can pop the case open and get it. If there's any further information you need, please ask. I'd hate to have to return it to the store, since it appears to be a very nice card according to reviews.
Do you have the latest drivers installed? And have you confirmed if the low FPS are not due to your internet connection?
I'm a computer dummy. Should I upgrade my three year old system's graphics card by installing a 7600 GT Superclocked DDR2 512MB AGP 8X?
My current card is an ATI 9800XT 256MB 8X, which was the top of the line card. My computer was custom built; card price at the time was $500.
GTX 660 Superclocked in a Dell Studio XPS 435MT - Impossible? Hey everybody, I have a Dell Studio XPS 435MT Mini Tower (see below) and its current graphics card is an Nvidia GeForce GT…, which is getting long in the tooth. I currently can't pony up the money to replace it and build my own, so I decided to upgrade the RAM from …GB to …GB and slap in a kick-a** graphics card (a GTX 660 Superclocked) to last me a few years. The system has a stock PSU which, according to the spec page on Dell.com (the PC is at home; I'm away at school for the week), is a …-watt PSU. I already know the card will work with the motherboard; the card is a newer PCI Express revision but backwards-compatible with the PCI Express version I have. My question is about the PSU: is that enough wattage for this card plus the rest of the components? According to Newegg it needs a …-watt PSU as the minimum. Plus, I don't recall there being a power cable for the video card attached to the PSU. My current video
I've just purchased my fiancée the EVGA GeForce 9800 GT superclocked edition as an early Christmas gift. We've just finished installing it and started up some of our games. The main game that we play is World of Warcraft, which with our old card (a Radeon X…) got around … FPS in big cities. Now that we've installed the new card, it's only getting around … FPS in the big cities. Power supply: Dynex DX-PS…. We believe it's either the CPU or the PSU, but we're not sure. If there's any further information you need, please ask. I'd hate to have to return it to the store, since it appears to be a very nice card according to reviews.
I noticed on newegg.com that all eVGA products were renamed to EVGA. I found this strange and I prefer the eVGA name over the EVGA name. But I was wondering if the name is eVGA or EVGA. I guess it really doesn't matter, but I kind of care.
Anyway, because of Newegg's change in names, I thought the correct name is EVGA. Yet in my eVGA e-GeForce 7900 GS KO 256MB manual (and that's how I believe it should be named; sorry cfitzarl) it is called eVGA.
Anyway, you known the question.
If you head over to evga.com, then you see that it is EVGA all over the place. Maybe they decided to change the spelling at some point?
I recently bought a new graphics card a month ago, replacing my old Nvidia GeForce 9500GT 1GB. The games that I ran were fine, playing at max quality with no lag at all. After a month, I'm now getting problems with my graphics card: my games are running awfully slow, even older games. What's happening? Also, I am not sure what brand of power supply I am running, but all I know is that its wattage is about 450 watts.
Intel Core 2 Duo E8400 3.00GHz
4GB of RAM DDR2
GTS 450 SC 1GB
Windows 7 64bit Ultimate Edition
Open up the case and there will be a label on the side of the power supply, tell us what the make and model of it.
I have a problem with the video output of my video card. It's an Nvidia GT 610 from EVGA that I got last Christmas, but for some reason it's not sending a signal to the monitor. Sometimes it will, and other times it won't; it happens randomly when I plug it in. It runs off a PCI Express x16 slot. All year it was working fine, but now it's giving me problems. I can't tell if I will need a new one or not, but my motherboard video output works fine, no problems there at all. Is there something I can do to fix the card?
I don't know much about SLI PSU compatibility
This is my graphics card
EVGA | Products | 700 Series Family
What PSU would I need to run 2 EVGA GTX 770's
For a single, EVGA recommends at least a 600 watt power supply with a 42 amp 12 volt rating.
see this for what some recommend for 2 card SLI.
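A rough way to sanity-check PSU sizing for an SLI pair like this is to add up the estimated component draw and leave headroom over the supply's rating. A minimal Python sketch; the wattage figures and the 30% headroom are illustrative assumptions, not official EVGA or Nvidia numbers:

```python
def psu_check(component_watts, psu_watts, headroom=0.30):
    """Rough PSU sizing check: sum estimated component draw, add headroom
    for load spikes and capacitor aging, and compare against the supply's
    rated wattage. All figures are estimates, not measurements."""
    total = sum(component_watts)
    required = total * (1 + headroom)
    return total, required, psu_watts >= required

# Illustrative estimates only: two GTX 770s (~230 W each under load)
# plus ~200 W for CPU, drives, fans, and motherboard.
draw, needed, ok = psu_check([230, 230, 200], psu_watts=850)
print(draw, round(needed), ok)  # prints: 660 858 False
```

For a real build you'd also check the 12V rail amperage, since total wattage alone can be misleading on multi-rail supplies.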
Hi,
My opinion based on the fact that you aren't yet capable of gaming at 1080p resolutions is stick with your 560 till October and then get a much cheaper 670 from the US (instead of the FTW). For $630 you're inching pretty close to a pair of 670s @ $800. The rest of your machine would NOT be a bottleneck to a pair of 670s but I'd be willing to bet you would be happy with just 1 of them.
For all you GeForce users out there EVGA has a new Version of Precision out.
I almost posted that yesterday. You have to be registered there to download, including address and phone number, so be aware. Looks cool though
A Guy
I just updated the drivers on my new EVGA 8800 GTS 320MB, and when it restarted I'm getting an "out of range; change display mode to 1680x1050 @ 60Hz" error. I can't get into Windows to do anything because that error keeps coming up with a black screen, nor can I change anything in my monitor settings. I can get into safe mode, though. Please help ASAP!
It seems that when I go to Nvidia's installer it uninstalls my old drivers, which removes the 1080 display mode, and when it's restarted that mode is gone, so it can't boot from there. Any ideas how to work around this?
Which one would you get? I'm soon going to get one, and I'm not sure which to buy...
I've used EVGA cards and am currently using one, and customer service is great, and I love EVGA precision.
However, the Gigabyte has three fans and is already overclocked (I'll still overclock it :P). Also, will EVGA Precision work with the Gigabyte card?
EVGA 670
Gigabyte 670
Thanks
Edit: Also, it seems the Gigabyte one has an 8 and 6 pin connector, while the EVGA one has 2 sixes.
FWIW I bought the EVGA 670 on Amazon today, and it's supposed to be delivered tomorrow. Can't address your particulars as I have previously been a Radeon owner... I am hoping this "upgrade" from my 5770 will let my Sony Vegas 11 Pro GPU acceleration capability work.
The other day I installed an eVGA 6800 128MB standard card, and I'm having a problem with jumping, flickering textures in specific areas of BF2 maps (e.g. the railroad/bridge of the Karkand map, and individual areas of others). I have all the latest drivers installed and the BIOS flashed, and think it's time to return this otherwise awesome card to sender.
If anyone has any ideas, it would save me a lot of trouble.
I have...
GA-KN8 Pro, NVidia N3 driver, AC'97
1 Gig DDR 400
Athlon 3200+
Halitosis (Just kidding)
Here check this out
If you can do this, your card will be pretty close to a GeForce 6800 GT.
All the GeForce 6800 cards are based on the NV40 technology. The only problem is that the Ultra and GT have 16 pixel pipelines, compared to the 6800 (vanilla)'s 12 pixel pipelines.
If you can unlock the other 4, you may see some improvements.
I have purchased an eVGA 7900GS PCI-E video card. I also have an eMachines T6528, which uses the MS-7207 motherboard. It has integrated video with a GeForce 6100. I installed the eVGA 7900GS, but I got no signal from it; I still have signal from my integrated video. I set my primary video card to PCI Express in the BIOS, but it still does not see the new card. Any clue for me? Thanks.
Hello, ken123. Did you disable the integrated card? You said that you set the PCI-e slot as the primary, but I don't know if this is the same thing or not.
To disable the integrated graphics in Windows: Right click on your My Computer icon, click Properties. Open the Hardware tab and press the Device Manager button. Expand the Display Adapters section and double click the built in graphics. On the drop down menu at the bottom, change it to Disable and make sure that the PCI-e graphics is set to Enable.
I wish I was independently wealthy!
Credit to Airbot for first turning me onto this.
EVGA | Articles | EVGA Classified Super Record 2
Damn! I think I saw a picture of that board in the dictionary under the word "overkill".
Very cool though if I had the money i'd get one.
Hi
I was looking at video cards for my next upgrade and my eye caught this card. It is the same price as the 7900GS, but it is called the 7900 GT KO. I was wondering if this was just an overclocked version of the GS or a real 7900GT GPU. Also, will this card be enough to last me for a while, or at least to play the current top games (Flight Sim X, BF2, BF2142, FEAR, CS:S, and so on)? Also, here is the build of my next computer:
Asus A8N SLI Deluxe NVIDIA Socket 939 Motherboard and an AMD Athlon 64 3700+ 2.20GHz OEM Processor
EVGA GeForce 7900 GT KO / 256MB GDDR3 / SLI / PCI Express / Dual DVI / HDTV / Video Card (will not be running in SLi)
1024 mb DDR PC3200 RAM
256 Gig Seagate Harddrive 7200 RPM
Here's the link for the video card...
Thanks.
Also, I would like to play those games on max settings at at least 1024x768. Do you think that setup can do that?
I recently bought an EVGA GTX 980 and installed it in my Alienware Aurora R2, and it makes a sound from the motherboard. I checked all the requirements, and it says that it needs 2x 6-pin connectors and a 500W or greater power supply. My PC has a 525W Dell M821J. Also, around a month ago I bought the EVGA GTX 970 at Best Buy and it worked perfectly fine, but I returned it because I wanted the 980. What could it be?
This website have the information of the Alienware Aurora R2 in case it could help.
-
Sorry if I spell something wrong; I'm still working on my English. Thanks for the help.
I didn't mention that it's the GTX 980 that doesn't want to work; the GTX 970 did work.
I've got issues playing Age of Conan.
The AoC support told me to try these in the Nvidia settings:
-----
Anisotropic Filtering: Application Controlled
Anti-Aliasing-Gamma Correction: Off
Anti-Aliasing-Mode: Application Controlled
(and more stuff but I won't copy it)
----
I just can't find it !
in XP I would just right click the desktop and get a menu
arg I feel like a nub
It should be in the Control Panel\All Control Panel Items
My GTX crapped out, so I called EVGA to get a replacement. After tests, a stool sample, and a promise to give up my first-born child, they agreed to replace my card. I shipped my card off to them, and about … days later I get an email telling me that they will not replace the card, because when they looked at it, it had a tiny (I mean tiny) chip at the very edge of the card. They said that this damage is a total loss to the card and they will not replace it. The chip touches nothing, not the circuitry at all, and is so small that you can barely see it. My wife unfortunately did not insure the shipping of my card, so I'm out of luck. EVGA knew the card was bad (i.e., the RMA they gave me before I shipped it off). Anyone else had crappy customer support from them, or am I one of the goats running out to slaughter?
When I RMA'd my video card EVGA responded very quickly. They gave me an RMA number just a few hours after I requested an RMA, and I got my new card just under two weeks after I shipped it to them. My card did not have any damage like a chip out of the edge though.
Quote: EVGA Precision v1.8.0
- All vendor-specific hardware access code has been moved from the executable file to a separate hardware abstraction layer library (RTHAL.dll) to simplify the process of supporting new hardware products in the future
- Fixed a GDI resource leak during monitoring window detachment/attachment
- EVGA On-Screen Display Server has been updated to a new version. The new version adds a pre-created On-Screen Display rendering profile for "The Chronicles of Riddick: Assault on Dark Athena" and introduces floating injection address hooking technology aimed at reducing the risk of On-Screen Display detection by third-party applications (e.g. anti-cheat systems)
- The new skin format gives new possibilities to skin developers and reduces skin file size and runtime skin engine memory footprint
- Reduced CPU usage for simultaneous GPU and PCB temperature readings; both sensors are now read in a single pass
- Added a workaround for SLI mode switching related issues on ForceWare …xx series drivers. Precision now waits for the end of the SLI mode switching process before retrieving any data via the NVIDIA driver API
- Includes new "Sleek Skins" by Jonttu
EVGA Precision v1.8.0 for those who don't like to register: EVGAPrecision.ZIP
Thanks for the update, nice new skins with this one too.
How do you access the controls for the ESA on a eVGA 780i motherboard?
Run nTune? I think....
Enthusiast System Architecture - Wikipedia, the free encyclopedia
And, funny, I have the same mobo - but I couldn't afford all equipment that is ESA certified....
As posted in another thread, I've built a new system; here are the specs:
Asus P5N E SLI MB
duo core 2 2.4 intel
hitachi 250 GB sata
evga 8800 GTS
2GB corsair dual channel DDR2 800 mhz
500 watt PSU
The Win XP Pro install seemed to go OK, but I'm having some freeze-up problems.
The fan on the EVGA 8800 is turning, but I don't feel any real exhaust coming out of the vent at the back of the card, and the card seems a little hot to the touch too. Should I feel a good blowing exhaust out the back of the card? Or is it minimal, and is feeling hot to the touch usual for the 8800 card?
Well, I can't really tell the airflow of my GPU cooler either (ATI X1950XT), but I know that ATI's Catalyst can tell me the temps, and they are about 45 idle and can go WAY up in game. For nVidia cards I would recommend downloading NvTempLogger, which will log your nVidia-based temps (more info at the link). I can't tell you what temps you want to have, but this thing will let you know what you do have, and then you can easily check nVidia's site for what is acceptable (or do a quick Google search).
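On modern Nvidia drivers (well after the NvTempLogger era) the bundled `nvidia-smi` command-line tool can report temperatures directly, and its CSV output is easy to log yourself. A minimal Python sketch; the sample string stands in for real `nvidia-smi` output, and the 90 C threshold is only an illustrative assumption, so check your own card's rated limit:

```python
import subprocess

def parse_gpu_temps(csv_output):
    """Parse the output of
    'nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader'
    into a list of temperatures in degrees C, one per GPU."""
    return [int(line.strip()) for line in csv_output.splitlines() if line.strip()]

def read_gpu_temps():
    """Query the driver directly; requires an Nvidia GPU and driver installed."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_temps(out)

# Sample output for a two-GPU system; 90 is an illustrative warning threshold.
sample = "45\n68\n"
temps = parse_gpu_temps(sample)
print([t for t in temps if t > 90] or "all GPUs within limits")
```

`read_gpu_temps()` only works on a machine with an Nvidia GPU and driver installed; `parse_gpu_temps()` lets you test the parsing without one.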
My EVGA 8800gt is experiencing major issues, so I will be RMAing it shortly. I was wondering which card you think I will get in return? Anyone RMA one recently?
Probably a direct replacement.
EVGA will probably tell you what card you'll be getting once you get the RMA ticket.
I posted this question at the eVGA forum; unfortunately, that forum is lame compared to here. So I have a pair of these cards: eVGA GTX 260 SC/55nm. I recently updated to driver … from beta …XX. My WEI score for both graphics categories dropped from … to …. I was wondering what driver others were using, and if anyone could suggest a better driver for my cards. It works fine as far as gaming and the like goes, but you learn nada by being passive. I got … from all previous drivers after OC'ing the cards; before I OC'ed I got …. I pushed the cards a little hard, but things have always been stable either way. I assume what WEI is testing for is MB/s, so I was wondering why the drop, albeit a little drop. I have installed Precision's latest version and that's how I OC the cards, but the latest version does not work with pre-…XX drivers, and I'd rather not revert back. Any thoughts appreciated, ty. PS: please do not post that WEI is garbage or inconsequential; I use it as a point of comparison.
As long as the system works ok i wouldn't worry about a .2 point drop in your WEI. You could always go back to the old drivers unless the new drivers have features or fixes you need.
Ken
So I have a client who has a 660ti P/N: 02G-P4-3660-BR, and he also has a 660ti P/N: 03g-p4-3661-kr
One is the 2GB version and one is the 3GB version. Is he able to use both of these together? Or Not possible since they don't both have 2GB or 3GB?
Yes, but with the 3GB one, it will use only 2GB to match the other one. So you literally have an empty 1GB of GDDR5 RAM to hook up another monitor.
I wanted to upgrade my graphics card and am wondering if the card would be compatible (no BIOS problems, etc.) and if it will fit; I measured inside and it should fit. Here are the specs for my PC. I upgraded the power supply to a Corsair CX750M. PC SPECS: Here is the card I want to get:
Andrew, welcome to the forum. You should measure the space between the back I/O plate and the front of the computer; this will let you see how much length the card can have, and check whether any components are in the way. Also, I suggest that you contact EVGA Tech Support (1-888-881-3842) to make sure that the card will work in the computer: it requires UEFI in the motherboard instead of a standard BIOS, HP didn't begin using UEFI until mid-October 2012, and I don't believe that your motherboard has UEFI. Please click the Thumbs up + button if I have helped you, and click Accept as Solution if your problem is solved.
So I upgraded my PC this Christmas and have been playing Saints Row: The Third. I've been pretty disappointed with my framerates (25 on ultra compared to 35 on medium). I was thinking something's bottlenecking, most likely my RAM.
-asus m2n32-sli deluxe
-AMD Athlon II X3 445 Processor 3.4ghz
-2x2gig 800mhz ram
-600w PSU
nvm discovered the NVCP can do it.
EVGA X58 is the "Fastest System on Planet Earth"
The.
Read more at the source
Link doesn't seem to work.
Hi
need some professional opinions to tell me which one is better
EVGA 770 2GB SC ACX Cooling 1111MHz -1163MHz
EVGA GeForce GTX770 2GB SC ACX - Nvidia GTX 770 with 1163 MHz clock frequency!
OR
ASUS 770 2GB DirectCU II OC 1137MHz-1189MHz
ASUS GeForce GTX770 2GB OC - Sharp graphics card for the hardcore gamer
Note: regardless of customer service.
Thanks.
I've used ASUS products quite a bit and I've always been impressed with the build quality and durability of their stuff. Personally, I wouldn't hesitate to buy one of their cards, but that's me. YMMV.
I haven't used EVGA products, so I can't comment on them.
So I got my new card today (EVGA GTX 570 SC) and must say I'm really loving it. I got the High-Flow bracket and backplate to go with it, only to find out that EVGA now ships the newer cards with the HF bracket already on them, so now I have an extra one. Here's what it looked like after I got it put together (sorry about the bad quality; I could not find my camera, so I just used my phone). Installation went well, with one minor problem: the card was a bit larger than I anticipated. I had to move one of my SSDs to higher ground so it would fit in my Antec case, but there's room so long as an HD is not in that area. At first boot I was actually worried the card was dead and had to check if it was running; I had no clue my old card was actually that loud. I'm still experimenting with the card and getting used to Nvidia's Control Panel, but I did manage to get a couple of benchmarks for those interested. For comparison purposes I'll show the old card vs the new one, just in case anyone is curious how much difference there is. At …p my old card ran Metro 2033 pretty well with medium settings: a stutter in a few places, but overall quite well. The one on the left is the old card and the right is the new one, with the same settings (the new card did have PhysX enabled; otherwise the same settings). I then tried the new card at high settings with AAA, …x AF, tessellation, and advanced PhysX at …p. And here's what Unigine showed at …p. So far I'm quite impressed with its performance, and it's actually quiet and relatively cool: eVGA Precision shows idle at …C, and by enabling the fan profile in it, so far I'm seeing load temps of …ish (give or take a couple) in benchmarks. And here's a GPU-Z snip for those interested in all the geeky stuff. Finding my way around the Nvidia Control Panel will take a bit of getting used to, as I've spent the last few years accustomed to ATI's CCC. But one thing I really love so far: I use a lot of profiles for different settings (AA, AF, Vsync, etc.) for different games, and I'm used to setting these up and enabling them before I launch a game. Now I just set a game's profile and it's enabled automatically when the game launches. Very nice. Although I am very happy thus far with the card and its performance, I can't help but wonder if my now mid-range Q… and DDR2 RAM may be holding it back a little.
OM NOM NOM BENCHMARKZZZZ.
Looks sick. I run Metro 2033 on its highest setting on my two 6850s, but I also only have a 1600x900 monitor.
TIEM 2 0VRKL0KZ0RZ.
Hi!
Heard a lot more complaints about the Striker than about the EVGA. So...
I'm definitely going with the eVGA X58 in my next build and am putting together the rest of my specs. I really like the Antec 900 mid-size tower, but can't find a lot on the internet regarding this combo. TigerDirect and other distis that sell barebone bundles package this mobo with larger towers. Will the Antec 900 not be a good fit?
From the dimensions, the motherboard should fit the 900.
The first reviewer here used that case.
Newegg.
Might want to keep an eye on the length of the video cards you are
going to use.
eVGA X58, i7: going to install W7 Pro upgrade today, using Vista for the validation or whatever. Back when I built this rig last year, I remember I had issues installing Vista or even XP Pro, in that I had trouble setting up AHCI in the BIOS. I won't be doing RAID, and am going to install W7 on my WD …GB Raptor (GLFS model). I just checked, and it's in IDE mode. So my question is: before I boot from my upgrade disc, should I hop into the BIOS and set it to AHCI, or leave it as IDE? I will be unplugging my two other drives (Seagate …GB Barracudas) before I install to my Raptor. Will W7 install the necessary drivers to enable AHCI, or what? Sorry for the confusion on this. Running the SZ…G BIOS, and please don't ask me to upgrade the BIOS first, as I'm very uncomfortable doing flashes for some reason. Will I see that much gain in performance from IDE to AHCI, or is that a moot question? I've done quite a bit of reading here on the W7 forums, and apparently I won't even have to run my X58 disc to install the motherboard drivers; everyone says W7 will install the current ones during the install. Thank you, Tom
Win 7 includes AHCI, but you might consider using IDE, especially if you plan to upgrade to an SSD in the future. Some SSDs have issues with AHCI and work best with the plain-Jane MS IDE drivers.
Hi all, I'm planning to buy the 7800 GS, but I'm not sure which one is the best. You suggest I avoid the OC versions, and I will, but the problem is that I heard BFG has a bad reputation; is it true or not? Some reviews at Newegg rate BFG better than eVGA and PNY. I would like to get the eVGA, but I'm not sure; also, the BFG has a pretty good deal with a $50 MIR for a total of $249, and eVGA has it for $269.99. What is your experience, guys? Which one is better, or is it the same from any brand? Thank you!
Depends on the warranty.
You should check out their warranty policies.
I think EVGA has a Step-Up plan, and it also lets you overclock without voiding the warranty.
I'm looking to max out this card; any good programs for that?
I'm using a Tt Giant II cooler and mem heatsinks, so heat isn't a problem. Any links or tips would be cool.
Thanks all in advance.
BTW, this is the 128MB version of the nVidia MX-440 8X.
look in TS's download section under must have utilities.
New EVGA GTX285 Released check it out here.
The marketing line that was rejected:
"Go beyond your price range"
I wouldn't say no to a SSC.
Hi, I am planning to build a custom budget gaming PC.
Is this GPU (GTX 650 2GB) or this one (GTX 650 2GB) really worth it, and does it fit with this:
AMD Athlon X4 760K-CPU
MSI FM2 A75MA E35-MOTHERBOARD
Corsair Vengeance 1600 Mhz 4 or 8 GB-RAM
Corsair CX430-PSU
NZXT Source 210-CASE
WD Caviar Blue 500 GB-HDD
Sorry for my English.
Thanks in advance.
Do you have a budget for how much you can spend on the Video Card itself? I would suggest shopping around because Amazon doesn't always have the best deals. Depending on where you are located, you could find some really sweet deals. Spending another $10 you can get an EVGA GeForce GTX 750 Ti Superclocked 2GB ( ) or for the same price as GTX 650 you can get a PNY GeForce GTX 750 Ti 2GB ( ). I'm not saying you have to buy either of those cards, just showing you that you have options. You could also go with an AMD video card if you wanted to. Let us know how much you can spend (on either just the Video Card or the whole system) and we can help you out.
Also, looking at the specs you included, it looks like you have a decent plan for the build. I would suggest getting 8GB of Ram, not just 4GB. If you want to game on it, then 8GB is pretty much the "safe zone" for gaming.
Hmm, I dunno what to do. Should I get an XFX 7600GT (… CAD, express shipping, ncix.com) or this eVGA 7600GT (… CAD, express shipping, ncix.com)? I have an eVGA 6600GT and I really like their customer service; I never have to wait more than … minutes on the phone to get tech support. But damn, their fans are loud and the GPU stays oh so warm. Pretty good cards with good tech and customer service. However, XFX: hmm, I've heard some good things, just some. BUT here's the catch: notice the clock speeds, … MHz GPU and … GHz DDR, and the price is better and shipping is halved. The eVGA is doing … MHz and … GHz. The GPU clock difference isn't a big deal, but the DDR speed is … MHz different, and that improves performance quite a bit. OR maybe I should get some nice OC'ing RAM. OR how about saving my money? I'm planning on selling my 6600GT for …, so I'd only need like another … to get either of the top cards mentioned.
Why don't you wait a month or two and buy a 7800GT or GTX? With the new 7900 series out now, retailers are dumping the 7800 series at reasonable prices; I've been watching them come down in price in just the last few weeks. The difference between 1.6 and 1.5 GHz is nothing and you won't notice it, or you can just adjust this in the nVidia settings.
I have the 6600GT also and plan to upgrade to 7800 when they hit a really nice price (next few months).
PS: I can't seem to find any 512MB 7800GT or GTX, btw, weird.
Take gaming performance to extreme levels: EVGA is proud to announce the latest and fastest in high-performance graphics accelerators, the EVGA GTX 295 CO-OP Edition. This card combines two GPUs onto a single PCB (printed circuit board), a clear indication of why this card is called CO-OP. Want even more performance? Pair up two of these for Quad SLI, or pair up with the standard GTX 295.
Features:
- An incredible 1792MB of onboard DDR3 memory. View high-resolution images and textures with this massive amount of onboard memory; up to 2560x1600 supported.
- Dual DVI-I and HDMI (with adapter). Dual dual-link DVI support; HDMI output enables sending both high-definition video and audio signals to an HDTV via a single cable.
- 2nd-generation unified shader architecture. Delivers more gaming performance over the first generation through 480 enhanced processor cores that provide incredible shading horsepower.
- NVIDIA PhysX. GeForce GPU support for NVIDIA PhysX technology, enabling a totally new class of physical gaming interaction for a more dynamic and realistic experience with GeForce.
- Full DX10 support. DirectX 10 GPU with full Shader Model 4.0 support delivers unparalleled levels of graphics realism and film-quality effects for today's hottest games.
- NVIDIA CUDA technology. CUDA technology unlocks the power of the GPU's processor cores to accelerate the most demanding system tasks, such as video transcoding, delivering up to …x the performance of traditional CPUs.
- 2nd-generation PureVideo HD. High-definition video decoder and post-processor delivers unprecedented picture clarity, smooth video, and accurate color for movies and video.
EVGA | Articles | EVGA GTX 295 CO-OP. Now that is what a graphics card should look like. I want one.
Thanks for the info Mr. Grim I want one too.
Alright, so I got a new video card for an old build with a dying video card. The new one is an EVGA GTX 660 (non-Ti, non-overclocked). The problem is, upon installing the drivers, the PC bluescreens while at the "Starting Windows" screen. The message says "attempt to restart the graphics driver and recover failed", or whatever it is along those lines; the stop message is 0x…; the driver in question is nvlddmkm.sys. I have eliminated, as far as I can tell, the possibility of a driver problem: I tried Driver Sweeper once, tried two versions of drivers (the one on the disc and the latest from online), and I simply uninstalled and reinstalled numerous times. Anyway, I have eliminated the possibility of drivers, I'm fairly sure, and the possibility of a bad motherboard, as my GT… is working perfectly as I type this on the PC in question. Notably, the GT… bluescreens about once a week or so with the same message of driver recovery failed; I could not tell you if the stop message is the same, and if it happens again I'll update the thread. This only happens after a while of gaming, or sometimes tabbing in and out of games. Temperatures are reasonable; not a cause. I might also say that the card DID boot Ubuntu live off a USB and display the screen in …p quite well. It also works fine in safe mode for Windows. I couldn't get any of the GPU diagnostic tools in Ubuntu to work, though; most seemed to be unavailable to download, and the live copy didn't like the Nvidia drivers. So my question is: could the OS itself be the problem? Should I attempt to make a small Linux partition? Or is it more likely than not just a faulty card? I'm about done here; I just want to know by tomorrow if I can send the bloody thing back, if it is indeed faulty. Specs: Core 2 Quad Q…, …GB DDR2, …W PSU (the card appears to be working fine on it in terms of fan spinning and slight heat generation), Intel DG…CN (example link: http://compare.ebay.com/…). I know it's not a good system to put a GTX 660 in, but the rest will be replaced eventually. If it's relevant, my case has more than adequate cooling. This is not what I wanted to do upon receiving a new card; I wanted to game. Please, please help, and thank you.
The 660 requires a 450W PSU or greater, so you are fine. What OS do you have?
EVGA - Products - EVGA GeForce GTX 980 Ti K|NGP|N ACX 2.0+ - 06G-P4-5998-KR
Yep thanks for the heads up
750.00 to 1050.00, no guarantees on clocks of course
Hi,
I just bought an EVGA 9400 GT PCI graphics card by Nvidia and I'm having trouble installing it on my HP Windows XP machine. I uninstalled the onboard ATI Radeon Express 200 card, shut down my PC, opened it up, popped in the card, turned it on, flipped video from onboard to PCI slot in BIOS, and tried booting... No video. Turned it off, started Windows in VGA mode, and was going to reinstall the drivers for the onboard video, and then I realized that my PC had installed the Nvidia one, because it was being recognized in hardware manager with no problems. I have no idea what I'm doing wrong; perhaps I missed something. I uninstalled the 9400 and I'm back on the onboard ATI with no problems, but I'm back at the beginning: no installed graphics card. Any and every help greatly appreciated.
Ok quick question, are the GeForce 8800GTS Superclocked compatible w/ a Win2k OS?
I believe that if your GPU is compatible with your P.C., you should be able to run any Windows.
I have an eVGA 6600 256 PCI-E, and I'm buying it off of a friend, but I was just wondering: is this a good video card for games? I have a Gigabyte mobo, 2.8 P4, 800FSB. I want to be able to play Far Cry, Doom 3, etc. Just wondering if this setup would do the job. Thanks.
It should definitely do the trick; I think it would do the job even on full graphics for those games. Of course it would depend on things like your chipset and the amount of RAM you have. I run something comparable: Radeon 9600 Pro 256MB, AMD 64 3200+ 2.2GHz, 1 gig of RAM with an Nforce3. It can run these games perfectly.
If you're worried about playing at full performance, just buy a bit of an older game first to make sure it runs well, like BF1942, Call of Duty, etc.
If they run to your standards, go ahead and get Doom 3, Far Cry, or Half-Life 2.
My son bought this mb and started building last night. I have never used any sata devices before. He bought a new DVD sata drive, but bought a converter for his current ide HD. The computer starts up weird. It powers on then sounds like its turning off, but then starts up again. I get a message to insert boot media and to hit a key. When I go into the bios, it is not recognizing the HD at all. I think it's recognizing the cd, but when I click on the device it says disabled. What am I doing wrong????
Those adapters are junk. Using a SATA hard drive would be the much wiser choice and the performance of a new SATA drive will be significantly higher over an older PATA drive as well.
Hey guys,
So I tried tracking down an EVGA GTX 670 FTW 4gb to SLI it with my current one, but I couldn't find anything online anymore. Could I pair it up with another GTX 600 series 4gb or an EVGA GTX 670 FTW 2gb (when I have a 4)?
I want to SLI (now or some short time in the future) but I don't know what to do about my existing GTX 670 problem. I don't really know much about the compatibility of SLI so I was wondering if there was any insight on here.
Thanks!
You can only pair a GTX 670 with a GTX 670 (Same with pairing a GTX 660 and a GTX 660).
But don't worry if you have an EVGA GTX 670 and ASUS GTX 670 for example, it's still fine.
Ok, here's my situation. I have an old PC and I am upgrading now. The only things I took from the old PC are the RAM sticks, the DVD/CD burner, AAAAANND the hard drive. Mobo, vid card, CPU, PSU are all new. Now, my motherboard is the NF43, so I am aware that I pretty much have to have a clean install of XP so everything works fine. Anyway, I installed the mobo in the new case. I didn't touch the jumpers, they're all on default, because I heard default settings are the normal user setting. I connect the ATX pin connector and the V rail, and the video card. I plugged in my old hard drive and my CD/DVD burner. I boot up the computer and it says "Floppy failed to boot", so I press F to continue and skip that. Then it says something about hardware conflicts or something, so it asks me if I want to boot in safe mode, last known good configs, or start normally. Now at this point I cannot choose, because the keyboard wouldn't work. Anyway, I tried another way: I inserted the XP CD, restarted, and set the CD-ROM to boot, and it did. But then at the point where it says "press any key to boot from CD", none of the keys work. It's like the keyboard's got no power or something. Can anyone plz help me out? I really want to get this thing up n running.
What kind of keyboard are you using, usb, or ps2?
Can you get into bios?
Also, what is the exact error message you're getting?
Regards Howard
January

EVGA Precision 2.0.2 is available. Changelog below.

Information

EVGA Precision v2.0.2

Fixed issue in the context help system causing it to display wrong floating tooltips when hovering cursor over the controls under certain conditions
EVGA On-Screen Display server has been upgraded to a new version, which gives you the following improvements:
Reduced On-Screen Display rendering related CPU performance hit due to more effective geometry batching in On-Screen Display 3D rendering mode codepath
Reduced the impact on game performance when saving screenshots on systems with multicore CPUs
Text indication of screen capture events in the On-Screen Display has been replaced with a graphics progress indicator
Fixed issues causing the On-Screen Display to be rendered in wrong colors in 3D mode in some multitextured Direct3D applications (e.g. several bumpmapping related samples from DirectX SDK)
Updated profiles list
Added option allowing hiding On-Screen Display on captured screenshots
Reduced hardware polling time on multi-GPU systems due to cached GPU context switching in hardware monitoring module
All synchronization mutexes have been moved from local to global namespace
Added programmable alarm system allowing Precision to play an alarm sound when any monitored parameter is out of the range defined by the user
Localization reference guide documentation has been updated

EVGA Precision
Thanks for the heads up Aaron.
July

EVGA Precision 1.9.5 overclocking utility is available. Changelog below.

Information

Version History

EVGA Precision v1.9.5

Dynamic overclocking and fan speed limits: EVGA Precision no longer uses static slider limits calibration, and adjusts the limits dynamically when some external factors affect it (e.g. when minimum fan speed is limited by the NVIDIA driver)
Built-in skin sizes have been reduced due to optimized internal skin panels representation and optimized compiled bitmap cache
Optional skin compression ability in the built-in skin compiler
Now EVGA Precision uses a previously undocumented power user oriented startup mode via the task scheduler under Windows Vista / Windows 7, so EVGA Precision launch no longer requires UAC confirmation at Windows startup. Please take a note that Microsoft Visual C++ runtime libraries must be installed to get the new startup mode working
Now EVGA Precision automatically fixes the startup link if it is enabled but the registry or task scheduler startup entry is missing
Now power users can enable an optional DirectInput based hotkeys handler via the configuration file. DirectInput based hotkeys processing can seriously reduce hotkey response time in applications heavily loading the CPU (mostly 3D games) and leaving not enough time for processing standard keyboard input message queues. Please take a note that enabling such sophisticated hotkeys handling mode can cause some system security applications (e.g. the pro-active application behavior analysis module of KIS) to warn you about a possible keylogging threat
Improved skin engine: now skinned controls support horizontal and/or vertical centering. Skin format reference guide has been updated to document these new alignment modes
EVGA On-Screen Display server has been upgraded to a new version, which gives you the following improvements: now screen capture events are identified visually by a text message flashing in the On-Screen Display
Updated profiles list

EVGA Precision
EVGA Precision
Now that is cool. Thanks..
Try this calculator: Thermaltake Power Supply Calculator
I bought my new computer about 1 and a half years ago. This "problem" has been going on from when I bought it... I find my GTX 680 to lag at points that it should not lag at. For example at the moment I am playing Dead Space 1 and whenever I shoot the monsters with a flamethrower, the game lags. When those exploding monsters come and I shoot them, it lags. It used to do this in some other games too but I forgot which. I also play World of Tanks, no lag at all. Am I exaggerating, might it be the game, badly optimized? Dead Space was not downloaded illegally, it was bought from the Humble Bundle Thanks...
Hi bud, can we assume your GPU drivers are up to date?
I am having problems connecting my new graphics card. I connect the correct power cables and then I'm not able to find my CD drive. Help please.
August 10, 2010
EVGA Precision 1.9.6 is available. Changelog below.
Information
EVGA Precision v1.9.6 (08-10-2010)
EVGA Precision
Thank you Aaron. All updated.
Does anyone have any experience with this? The "used" 980 Ti I bought off of Amazon Warehouse Deals turned out to be brand new and eligible for the Step-Up program, so I submitted a step up for the 1080 ACX 3.0 model. I'm curious about stuff like shipping time and if anyone has had any hassles? My invoice that I uploaded said "used" on it, so I'm wondering if that would disqualify me.
Several mos ago I replaced an OEM GPU that ran hot with an EVGA GT 240. When hot weather got here, I noticed the GT 240 seemed to run hotter than I think it should with normal browsing, but I also noticed fan speed really didn't increase much until temps were quite high, and even then the fan might run only a little faster. I can get the GPU up past C just clicking back and forth on some images, like Mozilla themes. Yes, the GPU fins and fan are clean; I took 'em outside and blew them out, and I can see down through the fins with a flashlight to check. It really wasn't dirty, so that made no difference under the type of use described. I installed MSI's Afterburner and tried the "User defined software automatic fan control", which controls the fan well (SpeedFan didn't) and is easy on resources. Question: at what temp would you really crank up the fan, to avoid ever getting above that temp? Specs and manuals don't give this info. Obviously the stock setup wasn't effective. Right now I have Afterburner set up (see graph). This may not be running the fan high enough to keep it below C; it still gets there opening a lot of pages. Once the fan kicks in, the temp drops fairly quickly, and running at that speed easily drops it below C. Should note: no other components get nearly as high, so it doesn't seem to be a problem with the case. I know some report GPUs running F and even higher, but that seems kinda high and hard on components if it happens very often. I still have a lot of room to increase fan speed, but I don't want to be overly concerned if C isn't an issue; I could raise it higher if necessary. This also makes me question the effectiveness of the factory thermal paste, as I've read so many complain about it. Maybe I should think about redoing the paste. Thanks.
Hi Debbie,
GPUs are designed to run warmer than CPUs, so while cooler is better, 50-60C is nothing to get overly concerned with. Increasing the airflow through your case to lower temps would be preferable to running the card's fan at 90% constantly, if you can. The upper thermal limit for GPUs (AMD & Nvidia) is 105C. While that's too warm as far as I am concerned, if you can keep idle temps below 50C and load temps to 85C or less, the card will probably outlast your use for it.
Hope that helped
Hi guys, not sure if there has been a thread about this already, but I couldn't find one.
I would love to upgrade to a triple display setup, and know there can be complications with an SLI or CrossFire setup, so was gonna try a single card set up.
I am looking at GTX 680 but not sure whether to go for a 2GB model of the 4GB.
Has any one got a 2GB 680 and running Nvidia Surround? If so how does it run?
Thanks in advance
Toby
Will this computer be used for gaming?
What card do you have now?
September

EVGA's Precision software version 2.0 is available. Changelog below.

Information

EVGA Precision v2.0

Added software automatic fan control mode allowing end users to define a custom temperature to fan speed mapping curve
Added floating tooltips based help system for the advanced EVGA Precision settings window
Now it is possible to pause the hardware monitoring module via the "Pause" command in the context menu of the hardware monitoring window
Performance profiler status information is no longer power user oriented; now it is possible to toggle the performance profiler status visibility directly via the "Show status" command in the context menu of the hardware monitoring window
Now it is possible to clear hardware monitoring history via the "Clear history" command in the context menu of the hardware monitoring window
Now it is possible to get directly into hardware monitoring graph properties by clicking a tray icon associated with it, or via the "Properties" command in the context menu of the hardware monitoring window
Added multilanguage system and sample Russian localization pack
Added SDK folder containing reference guide documentation for third party EVGA Precision skin and localization creators
Now core clock controls are displayed on GeForce
Fan tachometer monitoring graph limits have been changed

EVGA Precision
Thank you Aaron.
It's a simple question: I have 160€ to buy a graphic card. The best I can get for that kind of money is either the EVGA GTX 460 768 SC or the MSI GTX N460 768/5D Cyclone.
Which one is better?
Both cost the same; one is 158€, the other 160€.
Which is quieter?
I can't find accurate info. For the EVGA it says max noise 40 dB, but at the same time it says it is the quietest of this class. For the MSI I found a review that says it only produces 31 dB at full load.
How does mini-hdmi/dvi to hdmi work?
I saw lots of threads on different forums, where people are reporting problems with sound when they connect gtx 460 to TV to watch movies.
Thnx for help.
So, I found more information at Guru3D; they tested some of the GTX 460 cards, and now it is even tougher to decide.
MSI GTX 460 Cyclone OC - The idle noise levels coming from the card are downright silent; in idle you will barely hear the card, as we measured 37~38 dBA, which is right below the threshold of noise coming from the PC itself.
The eVGA is a couple of fps faster than the Cyclone in most tests, but it doesn't have a voltage raise function if I decide to further OC. Which means max overclock is a bit lower compared to the Cyclone.
In the end I may just buy the cheapest Palit GTX 460 for 145€ and overclock it; I am sure I could OC it to 800MHz core and 4000MHz ram.
Is there any combination of NVIDIA drivers, BIOS, and Windows that will actually work with this EVGA GTX 1070 FTW in the amplifier? I just purchased the AW17R3 and the amplifier. I keep getting Code 43, which I've read means the driver is not working. I've literally tried combinations of different drivers and BIOS, and I even did the Windows Anniversary Update to see if that would work. I've tried manually deleting NVIDIA files and DDU; still nothing. "Windows has stopped this device because it has reported problems (Code 43)". I've spent hours following anything I can find in 100s of threads of people not getting the amplifier to work with drivers; it is a clear pattern since the inception of the amplifier. Before someone trolls me: I know the EVGA FTW card does not allow the graphics amplifier to close. I purchased it for resale value in the future because of the lights, and I plan to upgrade every new generation. My Oculus is somewhat working with the M inside my laptop (no idea why Optimus exists), and I got all the different programs dedicated to NVIDIA, but I want to unleash the 1070. Any help appreciated. For background: the exact things I purchased throughout this past year to get here.
November 4, 2010
EVGA Precision 2.0.1 is available. Changelog below.
Information
EVGA Precision v2.0.1 (11-04-2010)
Fixed issue in hotkey handler causing it to detect false keystrokes under certain conditions
Added delayed fan speed readback mechanism to improve compatibility
Added French and Polish languages
EVGA
Thanks for the update
Exclusive EVGA GPU Voltage Tuner

EVGA now gives you even more ways to maximize your EVGA card with the exclusive EVGA GPU Voltage Tuner (GVT). This utility allows you to set a custom voltage level for your GTX graphics card (see below for the full supported list). Using this utility may allow you to increase your clockspeeds beyond what was capable before, and when coupled with the EVGA Precision Overclocking Utility it is now easier than ever to get the most from your card. With these features and more, it is clear why EVGA is the performance leader.

Features:
Provides an environment to possibly increase your GPU voltage, allowing for higher Core/Shader clockspeeds
Easy to use interface with Apply at Startup functionality, so you do not need to apply on every bootup

System Requirements:
Windows XP, Vista and 7 (both 32 and 64 bit)
Supported EVGA graphics card (see below)
Forceware or higher drivers

(source)

It's only able to alter EVGA cards currently, but there will be mods on the way allowing other GT vendors to use this S/W. SK
I have the HP Pavilion 550-153w and I want to put in EVGA GeForce GTX 950 FTW GAMING ACX 2.0. Will my system handle it?
The question would be why? What do you want it for? A particular game? Of concern is the i3-4170 processor. If it's a game, go to Game-Debate and do the "Can I Play It" check. The GTX 950 peaks at 90W; your system would require a minimum of 350W. I would recommend a Corsair CX 500, provided that your processor is sufficient (which I really doubt).
I was running an EVGA 760 up until a few hours ago, which I replaced with a 970. When I have the power supply attached to the new card, the onboard graphics won't work, and neither will the card. When I disconnect the card, the onboard graphics works, but the system won't detect the graphics card after I reattach the power supply to it, and therefore it won't allow me to install drivers. EVGA thought it might be a compatibility issue with the BIOS (or the board), as it might simply be too old to run the new card. Anyone have any ideas?
I would recommend at least a 1000 watt or more psu. It's not that much more money and you have more head room. Lots of new 1000W and above psu's available now and we should see some good cyber Monday prices.
I use EVGA Precision and have a Geforce 9800 GTX card. The GPU Temp of my card sits around 51 C while gaming, the fan speed was at 45 default so I turned it up to 70 (GPU temp was around 60s C when the fan speed was 45). Should I mess with the Core clock and memory clock settings or is it better to leave those alone? Core clock is at 700 while Memory clock is 1100. Which one would increase my FPS?
Recently I got an EVGA 133-K8-NF43 Rev 2.2 motherboard but it came without a power supply and cpu chip/fan/ and heatsink. I know that this system runs AMD 64 Athlon, but the problem is that I don't have the processor chip installed and cannot even test this motherboard. Can someone please recommend to me which type of power supply is okay to install into this system and if I should try getting a processor that can fit into this motherboard? I would also like to know any additional info if anyone else knows a little bit more about this motherboard and its quality/graphics/ and the performance it provides. Thanks a lot.
Hello there. I googled that motherboard and I have found this website
It seams as though you just need a standard ATX psu with;
1x 24 Pin Motherboard Plug
1x 4 Pin Motherboard Plug
Both of these plugs are supplied with almost any PSU that you can buy now. Here is a good quality cheap PSU; of course you can go cheaper, but usually you will lose quality and reliability.
As for CPU's this board is a socket 939. It says that it accepts AMD Athlon 64 cpus. That is all I could find about the CPU's on EVGA's limited help section. I presume that something like this would be ok with it.
Jack-O-Bytes
Yes it is a budget card. But for the price it is still an awesome card. http://www.nvidia.com/object/geforce.html

First the cons:

Setting up 3D settings through the Nvidia control panel, while easy, was a mistake. I had set it up to run at almost the best possible settings, leaving almost nothing to run as per-application settings. Ran into issues immediately: playing BF with in-game settings at medium, the game would crash about every hour. Same was true for the Chronicles of Riddick game. The stock cooling is useless. Card very often heats up to 58 degrees. The multi-display settings in the driver are hard to figure out. The EVGA website does not even display the card anymore, and it wasn't very easy to find it there.

The pros:

Very good with video processing. I run various Adobe graphics and video programs, and none of them have any issues. When I finally adjusted the settings in the control panel to let each program decide the quality for itself, the card performed great.

Overall I would have to say that if You (like me) have a low budget, but do want shader 4 and DX10 compatibility, this is the card for You. I have yet to test it on Vista.
detoam said:
Yes it is a budget card. But for the price it is still an awesome card.
First the cons:
The stock cooling is useless. Card very often heats up to 58 degrees.
58 degrees Centigrade is not very hot for a GPU.
detoam said:
The multi-display settings in the driver are hard to figure out.
Yes, to a degree, but Nvidia has included every setting you might think you need but it does require a bit of digging and patience.
detoam said:
EVGA website does not even display the card anymore and it wasn't very easy to find it there.
Well, true enough, but this is true of most manufacturers sites now. They try lead you right to the 8800GT cards right from the jump. I suppose they figure that's what everybody's looking for at the moment.
detoam said:
Overall i would have to say that if You (like me) have a low budget, but do want shader 4 and DX10 compatibility this is the card for You.
I didn't take the time to check this out, but I believe the GT8400 also is HDCP compatible as well.
If you aren't trying to establish yourself as a legend in gaming circles this could be the graphics card for you, as it adds dual monitor and High def capability to the average computer with onboard graphics.
I would also suggest that people check out this GT8500 from Biostar: this card is a steal ($35.00) after MIR. It has 512MB of GDDR2 of its own (doesn't use any system RAM), and is 128 bit access. I believe the 8400s are 64 bit, but keep in mind I don't know if that is really a downside when you consider the likely uses of either of these cards. They both should do a bang up job of driving your flat screen TV and monitor.
cs205 Friday 25 August 2006
What is a more important property of a programming language: (a) what it can say; or (b) what it cannot say?
How do different languages you know trade off expressiveness and truthiness? (If Java is the only programming language you know, try to answer this question for English)
High |
     |
     |
Expressiveness
     |
     |
     |          x Java
Low  |
     |______________________________
     Low                        High
              Truthiness
What is an object?
How does strict type checking impact the expressiveness and safety of a language?
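One small illustration of that trade-off (a made-up example, not from the lecture notes): a statically checked language like Java refuses some programs it cannot prove safe at compile time, and an explicit cast shifts the remaining check to run time.

```java
public class TypeCheckDemo {

    // The compiler will not let an Object flow into a String-typed
    // variable implicitly; an explicit cast defers the check to run time.
    public static int checkedLength(Object o) {
        // String s = o;        // rejected at compile time: incompatible types
        String s = (String) o;  // compiles; throws ClassCastException if o is not a String
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(checkedLength("truthiness")); // prints 10
    }
}
```

The rejected line is the safety gain (a whole class of errors never reaches run time); the cast is the expressiveness cost (we must spell out what a more permissive language would just allow).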
How do classes help programmers manage complexity?
How do visibility modifiers help programmers manage complexity?

CellState createAlive()
   // EFFECTS: Returns an alive cell state.
{
   return new CellState(true);
}

static public /* nonnull */ CellState createDead() ...

public Color getColor()
   // EFFECTS: Returns the display color for this state
{
   if (alive) return Color.green;
   else return Color.white;
}
...
}

public class ExtremeLifeCell extends Cell {
   public CellState getNextState()
      // EFFECTS: Returns the next state for this cell.
      //    The next state will be alive if this cell or any of its neighbors
      //    is currently alive.
   {
      Enumeration ... ();
   }
}

"It's surprisingly difficult to find a good name for a programming language, as the team discovered after many hours of brainstorming. Finally, inspiration struck one day during a trip to the local coffee shop." -- James Gosling
Changing message format in Kura MQTT cloud client
I'm running Eclipse Kura with a MQTT cloud client to publish messages. But, now I need to change that format of my MQTT messages.
Like, in this following format: #account-name/#client-id/#API-ID/topic
what if I didn't want the account name or the client id in it?
Any suggestions??? Thanks!
Eclipse Kura communicates with an MQTT broker using either of the following two mechanisms:
1 --> Using CloudClient
2 --> Using DataService
I'm guessing the CloudClient you are using might have some restrictions imposed on the topic namespace segregation due to the hierarchy of the communication structure between the devices and the gateways.
If you'd rather have your own namespace segregation, I'd advise using the DataService to directly define your own MQTT namespace for both publishing and receiving MQTT messages.
You'll need to use the OSGi framework and direct it to inject the instance into your component class like in this following sample code:
import org.eclipse.kura.data.DataService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyComponent {

    private static final Logger s_logger = LoggerFactory.getLogger(MyComponent.class);

    private DataService m_dataService;

    public void setDataService(DataService dataService) {
        m_dataService = dataService;
    }

    public void unsetDataService(DataService dataService) {
        m_dataService = null;
    }

    // activate(), deactivate() and all other required lifecycle methods go here

    public void publish() {
        String topic = "your/topic";
        String payload = "Hello!";
        int qos = 0;
        boolean retain = false;
        try {
            // publish(topic, payload, qos, retain, priority)
            m_dataService.publish(topic, payload.getBytes(), qos, retain, 2);
            s_logger.info("Publish ok");
        } catch (Exception e) {
            s_logger.error("Error while publishing", e);
        }
    }
}
Now, inside your component OSGI-INF/mycomponent.xml, you'll need to set the methods that you want the OSGi to call for injecting the DataService. You can do that by adding the following:
<reference name="DataService"
interface="org.eclipse.kura.data.DataService"
bind="setDataService"
unbind="unsetDataService"
cardinality="1..1"
policy="static" />
After you've done the above, you should be able to pass your topic to the method DataService.publish(...). And, make sure to convert Payloads to byte[] arrays.
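A quick side note on that byte[] conversion: here is a minimal, standalone sketch (not Kura-specific; the class name is made up for illustration) that encodes the payload with an explicit charset rather than the platform default.

```java
import java.nio.charset.StandardCharsets;

public class PayloadDemo {

    // Encode a String payload into the byte[] form that a
    // publish(...) call expects, using an explicit charset.
    public static byte[] encode(String payload) {
        return payload.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = encode("Hello!");
        System.out.println(bytes.length); // "Hello!" is 6 ASCII chars, so 6 bytes
    }
}
```

Calling `payload.getBytes()` with no argument, as in the snippet earlier, uses the platform default charset; passing `StandardCharsets.UTF_8` explicitly makes the payload encoding deterministic across gateways.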
On Tue, Sep 28, 2010 at 08:22:11AM -0600, Bjorn Helgaas wrote:
> On Tuesday, September 28, 2010 06:25:18 am Thomas Renninger wrote:
> > Greg: Do you mind pushing the first (1/4, V4) and the last (4/4)
> > patch into your tree for linux-next and leave the two PNP patches
> > out, please.

I've applied them now.

> > More PNP related discussion, below.
> >
> > On Monday 27 September 2010 17:09:18 Bjorn Helgaas wrote:
> > > On Monday, September 27, 2010 02:25:46 am Thomas Renninger wrote:
> > > > What do you think (dev_dbg vs printk(KERN_DEBUG...)?
> > ...
> > > With the exception of the ones in pnp/resource.c that I want to convert
> > > to dev_printk(KERN_DEBUG), I think all the pnp_dbg() uses are things
> > > I used during PNP development and haven't ever needed since.
> >
> > Ok. Sounds sane.
> > I used the PNP parts as it nicely showed what the
> > module.ddebug boot param is doing, but I agree it hasn't much
> > advantage for PNP.
> >
> > Whatabout compiling pnp in one module namespace, the first
> > of the two PNP patches?
>
> [2/4] looks reasonable to me.

And this one.

> > E.g. attached patch would be an on top patch which provides no
> > functional change, just that a pnp.debug would be a module param:
> > cat /sys/module/pnp/parameters/debug
>
> As does the one below.

And this one.

So that left 3/4 out of the series applied to my tree.
If this is incorrect, please let me know.

thanks,

greg k-h
Zachary K. Hubbard
Zachary K. Hubbard (1983–) is a paranoid, science-denying conspiracy theorist, blogger, YouTuber, utter lunatic, and crank. Essentially, he believes that the entire world is fake (or "scripted", as he prefers to say), being run by a hidden force of evil Jesuits, the New World Order (NWO), the Illuminati, powers that be, etc., who control elections, sporting events, terrorist attacks, the weather, natural disasters, and celebrity/politician deaths by sacrificing them "by the numbers" on a near-daily basis for no apparent reason.[note 1] Yes, this is actually what he believes. He "proves" this with his
make believe lunacy secret weapon: new gematria, his very own conspiracy which teaches that the powers that be are the ones responsible for all of this.
He runs a blog site[2] where he posts his grand "proofs" that, usually, can't even be falsified; his YouTube channel(s) fare no better. He runs a Patreon[3] where he charges $1 to $20 a month (depending on the membership level) to access more conspiracy nonsense, some chapters of his books, and some other pointless stuff.
Even though his Gematria is “the most important work to the truth community” (paraphrased countless times), in December 2020 Hubbard launched a new YouTube channel, Voice of the People[4]. It’s promoted as a gematria-free channel about other types of crankery, which Hubbard launched after (finally) discovering that the majority of the world's population doesn't give a shit about numbers that any idiot can manufacture on a daily basis[5]. Naturally, there is nonetheless some Gematria included in the two videos available as of this writing.
Contents
- 1 Origin story
- 2 Daily life
- 3 New Gematria 101
- 4 Predictions
- 5 Hubbard and Spirituality
- 6 Hubbard and other theories
- 7 Other Lows
- 8 Hubbard vs. RationalWiki
- 9 Likes and Dislikes
- 10 Other Wacky Quotes (for your enjoyment)
- 11 Stopped Clock
- 12 Gallery
- 13 See also
- 14 External links
- 15 Notes
- 16 References
Origin story[edit]
Hubbard was born in Portland, Oregon on July 21st 1983. While his parents raised him Lutheran, he no longer identifies as religious, nor is he a practicing Christian. He did not, however, lose his faith in God. In fact, he
deluded himself into believing discovered that God had, in fact, placed him on the Earth, on the exact day he was born on, in the exact location where he was born, and even given his exact name, for absolutely no a reason. For these were sacred gifts from God to send him on a mission and expose the entire world as being rigged by means of gematria.[6] In other words, God allegedly used gematria to help Hubbard defeat gematria. He is quite critical of atheism, falsely identifying it as a religion.[7][note 2]
Hubbard claims to have had his great "awakening" on September 11th 2001. As <s>thousands of innocent lives died</s> the entire faked attack unfolded, Hubbard took notice of the footprints the Jesuits had, somehow (don't ask how), mistakenly left out in the wide open: 9/11, it was just like 9-1-1, the emergency dial number in the US[8]. He couldn't believe it! There was no way that was by chance! But that's not all, he found even more <s>bullshit to shoehorn into his narrative</s> clues. He <s>data mined</s> discovered that the World Trade Center towers began construction in 1968, the same year 9-1-1 became the US dialing code. Preposterous! It was crystal clear now! But that still wasn't all! He began to research American Airlines Flight 77 and found something truly extraordinary.[9] For did you know that Flight 77 was in the air for 77 minutes?[note 3] Did you know that the Pentagon is 77 feet tall, and located on the 77th meridian? Are you confused yet? Don't worry if you are; sometimes, we wonder if even Hubbard himself isn't confused on this as well.
And just like that, PRESTO! Hubbard had his "awakening". He had the indisputable proof that our entire world is fake. From that point on, everything had changed. Using his newly acquired <s>delusional arithmetic crap that any three year old can make up</s> special abilities, Hubbard set himself on a mission: to destroy the source of all, and yes children, we mean all, sources of evil and horror... forever!
We flash forward to 2013: after spending years trying to find his way, Hubbard <s>reached right up into his ass and pulled out</s> discovered the very answer he was looking for all these years: "Gematria". Armed with his newly acquired <s>horseshit</s> knowledge, Hubbard, taking on his "truth seeker" identity, set sail on his epic quest to destroy the powers that be, to bring about a complete and total utopia, where all the big bad government people in scary black suits rigging everything are destroyed once and for all, where all civilization becomes some futuristic utopia planet like in Star Trek, Mayberry at the end of every Andy Griffith Show episode, Rainbow Road, etc, where we will all live happily ever after.
Of course, given the fact that Hubbard frequently begs people to buy his shitty books, shills out his own gematria merchandise[10], and even charges people money to see some of his "evidence" via Patreon, said origin story of one Zachary K. Hubbard can be easily disputed.[note 4]
Daily life
Hubbard claims to be a former teacher at a public school in Seattle, Washington, before being terminated for unknown reasons. A fan of his unsuccessfully petitioned for his reinstatement.[11] However, there is an extremely good case to be made that he simply never was a public school teacher to begin with: a records request to the State of Washington[12] will verify that one Zachary K. Hubbard was never a public school teacher at all. While adding "fluff" to his life story, he seems to have neglected the fact that records about public school teachers are open to the public.
On December 24th 2016, a car was driven through the front wall of his house, possibly by the powers that be coming to kill Hubbard for exposing them. Unfortunately, Hubbard wasn't home, so it didn't work (they have since been too busy to try and come back to do it again). That same year, Hubbard had a police stakeout at his house for unknown reasons, as well as an "unjust" restraining order for "insane reasons"[11] (though we are never told the reasons). As a result of the latter, he is ineligible to own a gun.
Hubbard hosts a weekly radio program on the self-proclaimed Truth Frequency Radio, Wednesdays at 9PM-12AM Eastern Time (Thursday 2AM-5AM UTC). Call him[note 5] if you want to just drop in and say hi, ask some questions, refute his arguments (though be warned, this might end up being a complete waste of time), and just plain ol' have a good time. But even on non-Wednesday nights, he live-streams quite often, usually for several hours at a given time, which is why most of his videos are more than 3 hours long (his longest is 6 hours[13]). Despite his Shakespeare-length videos, he complains often about how people don't watch enough of them[14].
He has also written a 773 page book on this subject called "Letters and Numbers"[15][16]. The front cover, which looks like it was done by a 3rd grader in Microsoft Paint, lists 4 "quotes" from anonymous (alleged) book critics. Curiously, these testimonials were made before the book was finished. It is apparently a very commercially successful literary work: on Black Friday 2019, he sold an entire copy[17]. He has since released a 2nd book, "Number Games: 9/11 to Coronavirus"[18], which he self-published to Amazon, despite Amazon being in cahoots with the New World Order[19].
In December 2019, he launched a new community webshite for members only[20].
In January 2020, Hubbard set sail on an epic, and quite short-lived, quest to <s>street preach</s> <s>start a world tour</s> travel to different cities to spread the <s>good news</s> knowledge of gematria. Despite making thousands of dollars a month on Patreon, he <s>begged</s> requested his own followers to pay for his trip.[21] Apparently, all that dough he makes on Patreon is for other uses. He hasn't traveled since, but given that he believes coronavirus is a complete hoax, that is unlikely to be the reason.
In December 2020, Hubbard <s>begged for</s> requested more money on his Patreon: more specifically, $30,000 per month (in his own words). At that point, he <s>will likely beg for even more money</s> vows to start up a small business, complete with a building and everything.[22] Given that Hubbard has been frequently "censored" for his big exposings (see next section), it seems unfeasible to think that anyone could successfully start an entire business of this kind without the powers that be interfering in any way, but this doesn't deter Hubbard apparently.
Hubbard vs "censorship"
Being the hard working "truther" that he is, Hubbard is frequently persecuted for his daily exposing. He has had at least 20 YouTube channels taken down <s>for violating the YouTube Community Guidelines</s> out of censorship[23], which he frequently <s>brags</s> complains about (for this reason, skeptics are encouraged to archive his videos and use them against him later should his latest channel get taken down as well).
Predictably, he blames YouTube (which is owned by Google, which is in cahoots with the "powers that be") for "censorship", often accusing them of deliberately deleting his channels (as well as subscribers[24] and likes[25]) in an attempt to hide the "truth". But there's a big problem with this: every time one of Hubbard's channels gets deleted, he just goes and immediately creates another one (nowadays, he's such an expert at self-destructive behavior, he will just start a new YouTube channel before the current cesspool of misinformation is deleted). So if Hubbard was really "exposing" these said powers that be, why hasn't he been permanently kicked off YouTube, a platform allegedly in on this big conspiracy? Or, better yet, why hasn't he been assassinated yet? Wouldn't an entity of this magnitude have the power and will to do more than just delete YouTube likes or subscribers? And yet, just killing Hubbard to stop him from making new channels is out of the question. One wonders why YouTube would keep acting like a broken record instead of just finishing him off for good. Even "exposing" President Trump's SCRIPTED ASSASSINATION hasn't been enough to get him killed[26]. As you can see, Hubbard's "censorship" excuses only raise more questions than they answer: despite him putting out the "best" knowledge ever given (something he frequently brags about), the powers that be apparently see him fit to continue.
Strangely enough, it doesn't exactly appear that his old blog site has been "censored", though it has been invaded by Russian bots to troll him[27]. That sounds rather ineffective, but apparently, just taking out this website is out of the question.
In 2017, Hubbard threw a big fit after his fans tried (and failed) to create a Wikipedia page about him (though he need not worry now: their distant cousin, RationalWiki, is happy to oblige). Hubbard rebuked Wikipedia, arguing that he deserved a page for, get ready for this, accomplishing more importance than anyone in the history of the United States.[28] Him uttering this with a straight face is likely the only talented thing he has ever accomplished.
Despite his supposedly deep hatred for censorship, on December 1st 2019, after a long string of "troll" comments debunking him, he disabled comments on his blog to prevent people from debunking him any further.[note 6][29]
Hubbard frequently re-uploads videos that YouTube takes down to his secret BitChute account[30], yet another webshite that is apparently off limits to the powers that be.
New Gematria 101
In a nutshell, here's how this works:
Hubbard finds corresponding, albeit completely random, numbers surrounding a particular event: a political election, for example. These random numbers can include absolutely anything: how many days remain in the year after the election, how many days into the year the election falls, the percentage of votes the winning candidate receives, the number of votes a winning candidate receives, and, of course, the 1=A, 2=B, 3=C etc format to "code" a candidate's name.[note 7] This is what Hubbard calls a "cypher". Using the online "Gematrinator" calculator[31], numerical values from an ever-increasing number of additional "cyphers" can also be thrown in should they conveniently fit Hubbard's "decoding". And this, ladies and gentlemen, is his indisputable "proof" that said election was rigged.
He also uses these tactics for "proving" the rigging of the National Football League, Major League Baseball, National Basketball Association[32], National Hockey League, NASCAR, horse racing, wrestling, and even (wait for it) golf[33]. If these sports are truly rigged, these players deserve an Oscar because they sure do a damn better job at acting than any WWE wrestler... as well as literally any Hollywood celebrity for that matter. Then again, the powers that be do control Hollywood, so this must be a deliberate planting of evidence to the contrary. Sometimes, Hubbard doesn't know if the players even know about it.[34] We'll let you figure that one out.
It doesn't stop there, he's also "exposed" false flag terrorist attacks, natural disasters, man-made weather[35], and pretty much anything else that exists as all being rigged. The exact time Trump went to the White House toilet to take a dump could be rigged for all we know.
With rare exception, Hubbard relies extensively on arguments that are non-falsifiable: they are often so bullet-proof that nobody can "disprove" any of them, conveniently enough. Many of his arguments involve a political event/sports game/etc that has already happened, searching every numeric statistic imaginable for matching numbers, listing all the numbers that match (which means ignoring all the ones that don't), and pointing out how much of a "coincidence" such matching numbers could be: the event being rigged or "scripted" by the NWO is the only "rational" explanation (basically, an ex post facto prediction on Hubbard's part). But further, anyone who disagrees is either insane, willfully ignorant, OR worse yet, is a government paid shill who is leaving troll comments to discredit him[note 8].
It's also noteworthy that, in the realm of celebrities/sports athletes/politicians/etc, nobody seems to ever die of natural causes. Cancer? Car crashes? Nonsense! Everyone, even if they're 105 years old[36], is always either sacrificed or has faked their death and is out hiding in a bunker somewhere. With that in mind, it's probably easy to see that Hubbard's imaginary world, void of any coincidence, requires at least a couple of major coincidences in and of itself: the complete and total lack of coincidences, which would be a major coincidence in its own right, as well as the lack of any natural deaths in the world of famous people. But remember, coincidences don't exist, so this essentially throws out Hubbard's entire theory.
And remember, Hubbard says that if word of this "gematria" doesn't spread quickly, we're all gonna die/live like Mad Max in a post-apocalyptic world[37][38], so get out there and spread this information on Facebook ASAP, <s>you know, just in case it happens to be</s> since it's true. In short, if you don't believe this, "WAKE UP SHEEPLE!!!"[39]
Predictions
Other times, Hubbard predicts (or tries to at least) the outcomes of certain events, usually sports games, but also other things like elections, as part of his "proof" that the whole world is rigged.
Hubbard claims to have a winning sports-predicting day 5.5 out of 7 days of the week, or 79% of the time.[41] While that doesn't sound too bad, using that as "indisputable proof" that sports (let alone everything else in the entire world) are scripted goes well beyond any form of rationality, for a number of reasons. First, wouldn't one expect Hubbard's batting record (no pun intended) to be chock-full of exact outcomes of exact teams, exact scores, and exact everything with 100% accuracy? But as Gomer Pyle would say: SURPRISE, SURPRISE, SURPRISE! This kind of information is nowhere to be found, either on his blogging site or his (numerous) YouTube channels.
"But RationalWiki, he still gets more right than wrong: isn't that evidence?" a kind skeptic (of this article) might ask. The short answer is no: in fact, even Hubbard's unfounded claim that he is the best in the world at predicting sports still wouldn't prop his theory up. The reason is that these types of hypotheses can oftentimes be voided by even one failure. Science (or pseudoscience in this case) is not a sports game; it's not a "number of successes - 1 = number of failures = PROVEN RIGHT!" equation. In science, a wrong outcome can render any proper outcomes moot, so if a hypothesis (i.e. gematria) gets even one prediction wrong, that could potentially, from a scientific point of view, toss out the entire hypothesis. As Albert Einstein once famously said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."[42] Then again, Albert Einstein is suspected to have been in cahoots with this big gematria conspiracy[43], so perhaps the point is moot.
But that's not all, it gets even worse. The bigger problem is that Hubbard frequently puts himself in win-win situations, laying out multiple scenarios for different outcomes. Even in sports, including games where he picks a team, he will often list reasons why the game could be rigged for either team: this document on his old blogging site is a prime example of this in action. While going 11-3, nearly every one of his eleven wins was actually a win-win ("close") prediction.
The obvious problem is that this immediately renders said predictions useless. This sort of "Heads, I win, Tails, you lose" approach renders any win-win prediction, even if correct, irrelevant, because it cannot be used as evidence for or against gematria: in other words, it is now non-falsifiable, which Hubbard claims gematria isn't. And, as if that's not enough, this also means any wrong prediction can easily be explained away, thanks to said backup "riddle", and thus the game is still rigged[44]. Even if Hubbard's original prediction turns out correct, he has already voided it by giving himself a safety net, so to speak. Thus it's now about as impressive as a tight-rope walker with a safety net right under the rope, even if they manage to walk the rope without falling off.
Rarely does Hubbard actually stick with one team in a game, and even then, he still has the potential to get it wrong, as he admits in this video.
Of course, not all of Hubbard's prophecies are about sports: he also tries his hand at predicting non-sports scenarios: needless to say, his record at this is even worse. For example, in 2016, Hubbard predicted that Hillary Clinton was rigged to win the 2016 U.S. presidential election[46][47]. All we can say is this: if you voted for Clinton and were disappointed by the outcome, at least you can use it as an opportunity to laugh at Hubbard shooting himself in the foot once again (of course, this failed prediction didn't stop him from claiming the election was rigged anyway[48]). He then predicted Oprah Winfrey would become president in 2020[49], you can take a guess how that one went. He later appeared to favor Biden to win the 2020 election: the good news is that he did become president-elect, the bad news is that it's pretty much the nail in the coffin for his Trump assassination prediction. At least this would explain why Hubbard wasn't killed "exposing" Trump's assassination that would never come.
Speaking of assassinations, Hubbard currently holds an astounding 0-7 record of predicting presidential assassinations. He has "predicted":
- Obama's assassination August 29th 2014[50]
- Obama's assassination March 15th 2015[51]
- Obama's assassination October 19th 2015[52]
- Obama's assassination April 4th 2016[53]
- Obama or Trump's assassination around the 2016 US election[54]
- Trump's assassination on May 18th 2017[55]
- Trump's assassination... eventually[56] (he presumably gave up on exact dates at this point)
Presumably, Hubbard will repeatedly keep "predicting" the incumbent's assassination until it finally happens. Once it does, he will undoubtedly parade his grand "prediction" around as grand "proof" of his numerology bullshit, while hoping (in vain) that nobody notices his disastrous track record.
Sure enough, some days after the above section of this article was written, he announced his prediction of (wait for it) Biden's assassination.[57]
These are by no means all of his false prophecies; listing all of them would probably fill an entire book. But if you want just a few more examples, he failed to predict the collapse of the Hoover Dam during the 2015 Super Bowl[58], Bob Dylan's 2016[59] and 2018[60] deaths, and the Golden Gate Bridge collapse of August 11th 2019.[61]
Hubbard's Patreon
In Hubbard's Patreon description, he lists a bunch of predictions which are all but guaranteed to convince unsuspecting readers to give him money[63]. The problem is that most of these predictions are missing a lot of important details. We decided to clear up the record, so that anyone who wants to can see all the evidence, not just the stuff Hubbard likes to show.
WARNING: The following section of this article may contain material which may cause brain smouldering[note 9]. Readers' discretion is advised.
He has no information on the 2014 World Series, despite claiming that he's been "exposing" the MLB since 2014. His Nationals vs Orioles prediction could be the reason[64].
"2015 World Series - Called Mets in World Series in Spring Training"
Our first facepalm. He predicted the Mets would win (not just get to) the World Series in 2015[65]. He also falsely predicted that the 2015 World Series might be the "Subway Series" (Yankees vs Mets)[66]. As if shooting himself in both feet wasn't enough, he shoots himself in the leg by calling a 7 game series[67]. It only went to 5. We're not done: he shot himself in the other leg by trying to switch to the Cubs in the playoffs[68], only to revert back to the Mets when that failed. But we're still not done. He even predicted in the same post that the Blue Jays would beat the Royals.
As you can see, we're already off to a disastrous start, but we're just getting warmed up (no pun intended).
"2016 World Series - Called Cubs to win World Series in Spring Training, and called it to end in Game 7, on 11/2, for the 112th World Series (Correct and Correct). Called Cubs to beat Indians before Playoffs began"
The blog post titled "The narrative for the Cleveland Indians to win the World Series in Game 6" begs to differ.[69] As you may have guessed, that is a botched prediction: altering a prediction to gain a win-win situation effectively voids it. But there's more: he spent much of the year claiming that it would be Astros/Rangers vs Cubs in the World Series[70]: the Rangers were swept in the 2016 ALDS and the Astros did not even reach the playoffs. He also speculated about the Texas Rangers vs Washington Nationals reaching the World Series, but that failed too.[71]
"2017 World Series - Called Astros over Dodgers in August"
Several problems with this. The Astros and Dodgers were, by far, the two favorites for the AL and NL respectively (though in the former's case, it was later found out because they cheated). Making such a prediction might seem nice, but isn't anywhere near "proof" that the Jesuits did it. He further argued that orange (the Astros uniform color) represents fire, blue (the Dodgers uniform color) represents water, and fire always beats water. It seems in Hubbard's imaginary world, buildings catch on water, waterfighters show up to the scene, and spray the building with fire to save the day (which, to be honest, might make a kick-ass Christopher Nolan film). In addition to the aforementioned failed Game 6 prediction, he also had the Nationals winning the World Series[72].
"2018 World Series - Said look for Red Sox to win World Series in Game 5, in July, a span of 99-days from the Patriots winning Super Bowl 99: (Boston Red Sox equals 53)"
The Braves were supposed to be in the World Series, which he got wrong[73]. They lost in the NLDS to the eventual NL champions, the Los Angeles Dodgers. He also speculated that the Astros and Brewers would reach the World Series[74]: the real kicker is that the semi-finals were Astros vs Red Sox and Dodgers vs Brewers. In other words, it was a win-win from that point onward.
"2019 World Series - Called Nationals vs Astros before postseason began, and called underdog Nationals to win the World Series in Game 5 or Game 7 (they won it in 7, the narrative we thought was most likely)"
In what has become arguably the single most idiotic moment in Hubbard history (which is saying something), he posted a blog post claiming to have predicted this outcome all along, before unleashing an incoherent rant containing more profanity than Samuel L. Jackson in an R-rated film, directed at his <s>critics</s> trolls. Because neither Hubbard's YouTube channel nor his Patreon account shows any evidence of this prediction at any point whatsoever, even some of Hubbard's long-time (now former) Patreon supporters immediately called Hubbard out on his bullshit[75], and some unsubscribed from his Patreon. He also spent much of 2018-19 saying the Mariners would win the World Series;[76] the only thing they won was worst team in the AL West (like usual).
"Called first three Warriors vs Cavs NBA Finals from start of season. Called Cavs to come back from 3-1 to win on real King James' birthday"
Calling a game 7 mere hours before it happens isn't really "proof", so to speak.
"Called Warriors to sweep Cavs the year they did. Called Warriors to win June 12 the year they did."
Not before predicting they wouldn't win the NBA Finals. In one of his videos[77], he said either the OKC Thunder or Houston Rockets would go to the Finals for the Western Conference, specifically saying the Warriors were unlikely to return (they returned).
"Messed up with Warriors winning it all this past year, <s>but called Raptors to win in Game 6 before the game was played to help people hedge who had Warriors</s>"
We crossed out the unnecessary bits, nothing else to see here.
"Called Broncos over Panthers in October for Super Bowl 50"
Another botched prediction; see "The Case for a Steelers Superbowl Appearance" on his blogging site (referenced at the quote near the heading). As a bonus, he also failed to call the final score.[78] By now, you likely can smell the bullshit coming straight from your computer monitor.
"Called Patriots after Week 1 to win Super Bowl 51 (Had Colts before season began). Called Falcons-Packers NFC Championship before season began as well that year"
This was actually a big mess. Right after his Colts got eliminated from the playoffs[79], he switched to the Packers, the "Plan B" team[80], but that failed too.
"Called Patriots to face Eagles in Super Bowl 52 after Week 3 (Had wrong outcome but said if Eagles win, Zach Ertz would score winning TD, which he did)"
Here, he admits he screwed up, but fortunately had a backup prediction, so it's cool.
"Called Patriots over Rams before season began for Super Bowl 53"
No evidence for this to be found, and he had the Washington <s>Football Team</s> Redskins in July 2018[81].
"Called both triple crowns before a single race was run"
Referring to 2015 and 2018. The only problem is he also said this in 2014[82] and 2016[83], but failed.
"Nailed 2 of 3 winners in this year's three big races, and in the race where I missed the winner, had the second and third place horses dead on."
Not much to say on this, other than the typical "I got most right so let's ignore what I didn't" routine.
"So hate all you want, but those are the FACTS."
But not all the facts. For anyone wondering why we swear in court to tell the truth, and the WHOLE truth, this is an example.
"And that doesn't even count the winning record on week-to-week picks that exceeds anyone in the industry, and the knowledge I have put in peoples hands that allows them to win big on the daily."
Hold up, isn't the whole point of all this to expose the evil NWO, which is presumably causing all the death, suffering, war, famine, etc, and bring them to justice? Apparently not anymore; now Hubbard is using supposedly corrupt sports leagues to act like a giant money printing machine for EVERYONE! This could end poverty: HURRAY!! Fun and games aside, this is another baseless claim with no evidence, and even if true, it is irrelevant anyway.
Not listed on his Patreon is his 2018 prediction of the Lakers winning the 2020 NBA Finals. While this did happen, he failed to correctly call their opponent, the Heat (instead calling the Celtics)[84]. This is important because, most likely, Hubbard won't say that anytime he parades this grand prediction.
Also absent from his Patreon is the fact that he failed three consecutive Stanley Cup Finals.[85][86][87]
Hubbard and Spirituality
Despite his rejection of mainstream religion, Hubbard claims a strong belief in spirituality. Not only that, but he teaches that there is a strong force of evil in this world (presumably the same evil force rigging Florida Panthers games that nobody goes to) that is trying to kill off spirituality[88]. He even believes that "ritual magic" is actually real and that the "powers that be" are stopping us from using it[89]. In other words, we actually live in the Wizarding World of Harry Potter, but we're all just too dumb to see it.
The evidence for this
Hubbard and other theories
Hubbard rejects the flat Earth theory[90], as well as the idea of FEMA death camps[91]. While one COULD argue this to be a Stopped clock Moment, most likely it's just to make himself look more legit than he actually is. "Look guys, I am NOT a flat Earther! If you think I'M stupid, look over THERE!" is essentially what his anti-flat Earth videos come across as.
In either case, as if rejecting it in 2021 was such a high bar to clear, his bold rejection of the Flat Earth doesn't exactly amount to much, as Hubbard claims belief in the following:
- Evolution[92], the Big Bang[93], and Charles Darwin[94] are all Illuminati-based frauds. Even Ken Ham would laugh his ass off at that one (that said, his exact beliefs on this are unknown, as there is no evidence he endorses Young Earth creationism).
- The world is on the brink of a one world currency: he even tried (tried) predicting it would come in 2018.[95]
- Monsanto[96] and fast food restaurants[97] are deliberately poisoning people.
- Every form of authority (police, military, school teachers, etc) is hostile to society and needs to be exterminated.[98]
- No planes actually hit the World Trade Center on 9/11: he even criticized Alex Jones for rejecting the no-planes theory, and even (wait for it) accused Jones of working with the Illuminati to mislead people on this subject.[99]
- The Lynmouth Flood conspiracy theory.[100][101]
- Every major terrorist attack, 9/11, the Boston Marathon Bombing[102], etc, are all false flag operations.
- Mainstream media is normalizing homosexuality to reduce birth rates and the human population.[103]
- Vaccines cause autism[104], and possibly some other bad stuff.[105]
- Climate change is a hoax.[106]
- Astrology and Hermeticism are not only real but the Bible/government are preventing us from discovering them.[107]
- The entire Coronavirus pandemic is a massive hoax.[108]
- Face masks, or "face diapers" as Hubbard calls them, are a symbolic compliance to tyranny and the Mark of the Beast.[109] He ended up making a big scene and nearly got kicked out of a Costco store in November 2020 for refusing to wear one.[110]
- Jobs are pointless; they only exist to exploit humanity... somehow.[111]
- The Holocaust was a lie, or at least exaggerated.[112]
- All, yes, we mean all, of US history is fake.[113]
- Sports athletes and politicians are artificially harvested in labs and aren't actual people (this is not a fucking joke, he's dead serious here).[114]
- SPACE ITSELF is a hoax.[115] How Hubbard can even hold an opinion on the Earth's shape at this point transcends all planes of logic.
Belief in all of the above pretty much cancels out any rejection of the Flat earth: in fact, the final point alone cancels it out.
Other Lows
Hubbard has openly wished death (even slow and painful ones) on pretty much anyone who disagrees with him. Yes, you read that right. He has gleefully proclaimed his desires for the deaths of anyone who does not <s>worship him at his feet</s> agree with him.
In January 2020, during an hour-long video of irrational hilarity, Hubbard gleefully wished death by starvation and/or thirst on all of his critics.[116]
After the deadly airport bombings in Yemen in December 2020[117], Hubbard subtly celebrated the victims' deaths since they were "probably" non-conspiracy theorists[118] before openly advocating the depopulation of all non-conspiracy theorists (amounting to, according to Hubbard, 95% of the Earth's population).
That's not all. In general, Hubbard is also often a complete dick to even his own <s>disciples</s> followers[120][121][122], at least those who don't agree with his every word or give him enough money.
Remember this for the next time Hubbard rants about how he's not getting anywhere in his pathetic attempts to make gematria go viral.
The Larry Johnson interview
In January 2020, Hubbard <s>baited</s> invited former NFL running back Larry Johnson onto his TFR live-show for an interview[123], in a desperate attempt to prove that even athletes are "coming out" and supporting his theory. Of course, his viewers might not have been told that Johnson has severe mental health issues, ultimately stemming from head injuries from playing in the NFL[124]. Johnson can't even remember 2 seasons he played in the NFL[125]. His condition, chronic traumatic encephalopathy, causes memory loss, suicidal impulses, mood swings, and headaches, all as a result of his head injuries. One wonders whether Hubbard wouldn't have pointed this out himself had Johnson been arguing against gematria.
Hubbard vs. RationalWiki
Hubbard has discovered this article on him and, unsurprisingly, isn't too thrilled about it. He's had a couple of bouts with us on this. His earliest documented bout is as follows:
Wish granted! Unfortunately, this argument is irrelevant since this page never disputed such a claim. Rather, it merely pointed out that Hubbard's arguments (including helping people win money on sports betting, one of his favorites) does not stand up to scrutiny. His "refutation" also demonstrated how little he knows about Falsifiability.
Hubbard came back out of the cave on November 8th 2019, where he live-streamed some sort of rant about Google "promoting" this article. If what he means by "promoting" is just appearing in Google's search results, then his argument immediately fails, as his blog and YouTube channel also appear on Google's search results, meaning they are also being "promoted". For unknown reasons, he removed the rant from YouTube, but evidence of its previous existence can be found on Pinterest[128].
Hubbard vs. RationalWiki: The Prequel[edit]
This article is not our first to ruffle his feathers; he previously wrote a horribly concocted "response" to our article on New gematria[129] in 2018.
Likes and Dislikes[edit]
Likes[edit]
- Numbers
- Money
- Attention
- Live-streaming for hours at a time, sometimes multiple times a day
- Bragging about his deleted YouTube channels like a badge of honor
- Himself
Dislikes[edit]
- Anyone who disagrees with him
- Amazon (despite selling his book on the platform)
- Having a real job
- RationalWiki
Other Wacky Quotes (for your enjoyment)[edit]
- "The Work I lay down is UNDENIABLE!" [130]
- "It's more proven than anything out there. You can laugh at anything in science if you're gonna laugh at MY work! My work's a theory, I've backed it up better than any scientific theory on the planet!"[131]
- "When you think about it, the concept of 'hope' is something that takes power away from you and has you place your power and influence in something outside of you. For example, President Barack Hussein Obama, the 44th President, ran on 'Hope'"[132].
- "Do you know what makes being a slave easier for some people? Adam Sandler movies and other distractions; distractions created by the same people who have enslaved us."[133]
- "Anyhow, looks like he might be part of the gang. What a scum. It is too bad there are so many black men who are willing to get fucked in the ass to collect a pay check from a jew. No Vaseline."[134]
- "I have zero doubt this Zionist Jew faked his death. Zero. Him and the Nintendo Guy are probably jerking each other off underground right now, playing Mario and listening to Zeppelin."[135]
Stopped Clock[edit]
- As previously mentioned, while Hubbard is not an atheist, he is a staunch critic of religious extremism.
- In 2020, he apologized for some of his antisemitic comments of the past.[136]
Gallery[edit]
See also[edit]
External links[edit]
- His webshite
- His other webshite (members only)
- His former webshite: inactive but still contains old articles
- His ~~money printer~~ Patreon
- His YouTube Channel
- His other YouTube Channel
- His Bitchute account
- His Twitter account
- You will need some of these by the time you finish reading through his webshites
Notes[edit]
- ↑ Or at least, for no good reason. The go-to explanation on this is that it's the Jesuits' religion to do this
- ↑ To counter his points, see the RationalWiki articles on:-
- ↑ Of course, this argument neglects the fact that Flight 11 was not in the air for 11 minutes, Flight 175 not for 175 minutes, and Flight 93 only for 81 minutes. Those darned passengers fought back and cut the flight short by 12 minutes, much to the chagrin of the Jesuit order
- ↑ Hubbard should watch this clip and take a lesson from Kyle Broflovski
- ↑ call 213-233-3998 in the US or +44.203.393.2871 in the UK to refute Hubbard today!
- ↑ Comments on his blog posts are now restricted to "members" only.
- ↑ this is just A FEW of them. Listing every possibility would result in this article reaching the length of War and Peace
- ↑ See Hubbard's Patreon page to see him make this claim.
- ↑ His material, obviously. Not ours.
References[edit]
- ↑ REQUEST | Help Zachary K. Hubbard entry remain on Wikipedia, 83 seconds in.
- ↑
- ↑ See the comments section to see what we mean
- ↑
- ↑
- ↑ Gematria, the God Code & Organic Number Matrix +Zachary K. Hubbard & Jesus Christ on Youtube
- ↑ Atheism | Does science disprove the existence of God? +Piltdown, the evolution lie for 41-years on Youtube, 479 seconds in.
- ↑ REUPLOAD | Why I'm so angry, January 2020 (and this was 5 months before face diapers (masks))
- ↑
- ↑ Sports leagues just sell it for your money... so buy Hubbard's instead!
- ↑ 11.0 11.1 Campaign for Zachary Hubbard's Reinstatement as Seattle Teacher on thepetitionsite.com
- ↑ For perspective, that's roughly two thirds of the Lord of the Rings trilogy
- ↑
- ↑
- ↑ Letters And Numbers by Zachary Hubbard (2019).
- ↑
- ↑
- ↑ Amazon, arguably the most powerful company in the world, is selling a book on their website that exposes themselves!
- ↑ Sorry, no trolls allowed.
- ↑ WARNING: soul-staring face will appear without warning upon clicking this link. You have been warned.
- ↑
- ↑
- ↑
- ↑
- ↑ See his article on the Trump "assassinatiton"
- ↑ It came from Russia, so it must be true.
- ↑
- ↑ Notice the (dateless) chart on the page, thus proving his point
- ↑
- ↑ Pretty colors!
- ↑
- ↑ Taking a wild guess, golf balls are magnetically controlled to land in the hole
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ EVIL DR. EINSTEIN! The greatest real life comic book villain of all time!!
- ↑ Never mind that the Raiders were a "sure bet", there was a backup prediction. Presto! Gematria proven! WAKE UP SHEEPLE!!
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ he missed
- ↑ he missed again
- ↑ he missed yet again
- ↑ he missed YET again
- ↑ He missed YET AGAIN
- ↑ HE DID IT! Just kidding he missed again.
- ↑ He missed... again
- ↑
- ↑
- ↑
- ↑
- ↑ Then again, something else did happen that day in India, so maybe it was still a successful prediction after all.
- ↑ Zach Hubbard: Number 1 on Watchmojo's "Top 10 Debaters Matt Dillahunty was too Afraid to Face"
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ How to prove yourself worthy: step 1, have a big meltdown and shit yourself. Step 2, you're done.
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ "Ol' maybe next year!
- ↑
- ↑
- ↑
- ↑ Alex Jones, possibly the greatest crisis actor in history.
- ↑
- ↑ See the Wikipedia article on Lynmouth Flood § Conspiracy theory.
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ Let's talk about what it REALLY means to be wearing a mask in 2020 while in public
- ↑ Let's talk about what it REALLY means to be wearing a mask in 2020 while in public
- ↑
- ↑
- ↑ Bladerunner 2049 fans will piss their pants in excitement over this one. When we overthrow the Jesuits, we can recycle their politician-making labs to make replicants! Robot wives, anyone?!
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑ Truth to Power | Larry Johnson of Kansas City Chiefs Interview, January 22, 2020
- ↑ Babb, Kent. "Ex-NFL player Larry Johnson grapples with violent urges and memory loss. He thinks it’s CTE". Washington Post, December 12, 2017. Retrieved December 8, 2020.
- ↑ Dicker, Ron. "Ex-NFL Star Larry Johnson Can’t Remember Two Whole Seasons. He’s 38.". Huffington Post, December 13, 2017.
- ↑
- ↑ https://youtu.be/haY0EDNwuEc?t=3415
- ↑
- ↑
- ↑ How to prove something: 1, call it undeniable. 2, you're done.
- ↑ You can't put the cart before the horse: make it a scientific theory and then we'll talk
- ↑
- ↑ Then again, he might be on to something here: this would explain how Sandler was evil enough to release Jack and Jill
- ↑
- ↑
- ↑
https://rationalwiki.org/wiki/Zachary_K._Hubbard
/CCG/ - Cryptocurrency General
Old thread reached bump limit >>1375827
>Trading crypto tutorial
>Poll for best coins to invest into
>List of places to buy/sell/trade Bitcoin/Ethereum
>List of exchanges for bitcoins and altcoins
>List of sites to watch for information about crypto
>List of Twitter accounts NOT to follow advice from
>List of Faucets (Try to go with ones over 200 days old)
>>1387591
Try /r9k/.
>>1387601
here's your (You)
whats up with Syscoin
>tfw didn't buy XAI
tfw woke up at just the right time to sell CJ
>Be me
>Puts together shitty vidyacoin portfolio
>equally split between GAME/BCY/DGB
>sell all these coins at 10% profits
>game triples
>bcy +50%
>dgb +20%
fuck shitcoin trading mango i did this with trump too (bought in very early sold out early)
should I short sys?
>>1387783
Nah son, I'm goin' long. $sys has a new manager in China and v2.1 comes out in August.
>>1388120
August? I thought it was gonna come out in July. Those fuckers never stick with their timeline
>>1387714
what's the reason you invest in something like trump coin? aside from the dumb name, is it some reasoning like 'there's always a bigger fool'?
I really don't get why anybody would consider this anything other than a pump and dump.
>>1388139
>
I'm buying low now and holding out until around early September.
Also in general, what's a good way to track my trades?
Buy ethereum before the 20 th
Dude buy dgb it's gonna pump tonite along with Tyrone and your mom.
should I launch my own silver backed crypto?
will you niggers trade it?
i refuse to sell my VOX at a loss...
Ardorplatform.com
Anyone buying Monero? Seems a bit high right now but it looks like a cool project.
What are some other cryptos to watch?
I currently have some investments in ETH and BTC
>>1388699
I hear a lot of buzz around XVC.
Long term I'm holding BTC, Lisk and LBC
>>1388239
Nighttime in amerifag :)
My luck this shit will happen during chinas daytime before I have to get my sleep here.
This will be a good day
>>1388147
The real question is, why wouldn't you? I guess if you have a small mind and little imagination. It's not like it is based on a topical person, or it's not like it's election season, and it's not like it has a unique premise. Yeah, the coin won't be noticed at all. We should just move on to the next altcoin.
>>1388239
They said that DGB was gonna pump this weekend. Nothing. In fact, none of the "scheduled" pumps ever happened.
>>1388797
pretty much.
i'm still stacking DGB though, it will pump at some point
Okay here are the coins to watch:
First of all, NIRO (Nexus): It just had a huge spike, I would wait to buy on a dip first, but its gonna be a hot coin for sure. Custom coded, tons of features, very serious dev team, it's the real deal.
Trumpcoin: It's pretty much a perfect storm of virulent meme magic + Trump himself + election season. The only thing holding the coin back is the dev team and it should be sorted out by the end of this week.
Syscoin: A big big sleeper, syscoin is creating their own marketplace, could take off or at least garner some serious attention
Breakout: Brand new coin, as usual buy after ICO when it dips
BitSynq: Watch out for this one, buy after ICO
Putincoin: Bound to pump
Syndicate: New product management platform, really cool, good for someone like me who loves to seed
Also look up Block Pioneers, a new exchange coming out, might be worth investing in
These are all longterm holds, I dont do short term trading (to preserve my sanity)
Oh and Trumpcoin has been my best investment yet...It has amazing resistance and gain. Considering the election season it has serious potential to pump, everyone here should have some Trumps, you'd be an idiot not to buy some regardless of your politics
Also, my cheap wildcard is BURSTcoin. Their whole schtick is HDD mining. Bought a bunch of cheap coins, but I hope its a sleeper and somehow gains traction
>>1388804
>Putincoin: Bound to pump
Holy shit this.
Putin is currently on the underground but this shit will be flying high soon.
>>1388804
yeah I had breakout from the get go on Bittrex but forgot about it
>>1388984
BRK will be good for pumps very soon. Same with CJ.
>>1389000
CJ is on the move right now
>>1389000
>>1389054
>CJ
what's the full name?
>>1389068
cryptojacks on c-cex
>>1389109
3rd ICO will be around 150sat and I think it's today. Get em while they're cheap because this one is gonna be out of sight by EOD.
>>1389111
already in at 56k m8, i'm a poorfag so this will do for me
Hi /biz/, wanna ask a question.
How do you guys invest in cryptocoins?
Just buy and place a price to sell then you sell automatically when the price gets there or you guys keep monitoring everytime the price to decide when to sell?
>>1389295
check op
>temple run pic in captcha...
THIS IS IT FAGGOTS. PUMPTEAM.NET IS HAVING A VOTE FOR WHICH COIN WILL BE PUMPED ON YOBIT AT NOON
VOTE FOR FUCKING TRUMPCOIN OR YOURE A DIRTY NIGGER
https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Fdocs.google.com%2Fforms%2Fd%2Fe%2F1FAIpQLSfDuGivXjC-LHjo0KqcPHaZNwzraWBOqRW0O-XTlBVBdLyg3A%2Fviewform%3Fembedded%3Dtrue
>>1389301
How does this work?
The choosen one really gets pumped?
>>1389301
>>1389314
you should delete these.
>>1389301
>>1389314
hope that it's trump, and if it is any other coin you aren't already invested in, i'd suggest not buying, cuz you'd probably be buying into the dump already.
>>1388284
How does the backing actually work and how does this translate to value increase over time which would make it interesting to traders? I mean theres the way of backing all coins by a fixed amount of silver which would make mining new coins cause inflation and there's the way of backing each coin with a fixed amount of silver which would make you poor. I mean sure, you can just premine and sell your coins without other people having the ability to mine new ones, but what would make traders then buy your coins instead of silver which would be a far less risky investment. Anyway, what I wanted to say is: you're a sperg.
>>1389301
how do these pump clubs actually work
wouldn't like half the people involved get fucked
aren't they all competing against each other
>>1389316
>>1389316
>>1389316
>>1389301
Whats their track record and why have I never heard of them?
>>1389301
did one of you trump coin admins set this up
How high do you guys think CJs will get during the initial pump? I've got two million.
>>1389807
200sat most likely, shouldn't really get much higher than that.
>>1389814
correction: at least 125
>>1389814
I've read a bit into the ANN thread but I can't quite make heads or tails of this coin. Any good prospects to hold past the initial pump or would you say this is a coin to let go of quickly?
Best USD to btc?
I've been using the genolis miner and after about 30 minutes my miner uses up 100% of my physical memory and the miner has to shut down. How do I fix this?
I'm using an RX 480 8GB and I have 8GB of ram as well as a TB of storage space.
>>1389872
I like Coinbase.
Virwox is also fast and lets you move more USD per day, but it has higher fees than most sites.
>>1389872
Coinbase is pretty good. Easy to set up and use, though it requires ID.
>>1389872
Cex.io
CJ jumped from 86 to 144
tried to tell yall about it
>>1390021
I listened to you. Actually yesterday I saw someone talking about CJ and I looked at it quickly and although I transferred some BTC to C-CEX in the end I didn't buy (could've around 50sat) because I felt like it looked like a random coin, plus C-CEX's disgusting layout frightens me. Well I got in at 110 sat anyway today.
>>1390033
transfer your stuff over to yobit, was at 145 a few minutes ago but now it's hovering around 123.
Also C-CEX's layout is beautiful, only problem is the place is almost always dead.
How can we anticipate or know when these huge starting pumps of new coins will come off?
>>1390049
>>1390033
Ive got 5000 coins at 117. How much do you guys think is gonna be the best price to sell them?
>>1390077
If you're impatient sell at 140. If you can wait a few days sell at 200.
>>1389872
>Best USD to btc?
I like circle.
Purchase some cheap ETH too.
>Reminder
Ethereum's Rarity Explained
>>1390079
Why is that dumping hard now?
>>1390088
People are dumb and getting scared. I think now is a perfect time to buy since it's gonna be going up again once ICO round 4 starts.
Same thing happened yesterday, went up to 117 before dropping back down to selling at around 66 before steadily climbing through American night hours.
>>1388797
they have been saying it will pump for years now in every thread. Never going to happen unless something newsworthy happens.
>>1388804
all wrong, kys disgusting shill
>>1388815
same as DGB and DOGE. It's not going to make you rich as it is.
>>1389000
I don't know about CJ (very highly doubt it) but you are 50% right about BRK
>How do you guys invest in cryptocoins?
buy low sell high. Reasearch a lot, build good contacts and get some insider info and news. I already know how I will trade until the end of this year (not counting PnDs)
>>1389301
This is a scam don't fall for it
>>1390088
because they all know it's a shitcoin with no future
>>1390112
>get some insider info and news
Teach me how. Please.
>coin that is shilled in every single thread, and that finally is pumping right in this moment gets no mention itt
never change you rascals
>>1390117
it's pretty easy anon. There are very few places where crypto people gather, traders are especially tight-knit community, go and be active there. I have got into crypto two months ago and I already know where to get the right info.
If you want an easy tip, there are news for ICO on the 23rd of this month. So it will pump a bit before the event. Buy the rumour, sell the news ;)
>>1390134
>ico
I meant IOC
>>1388804
wait, you are actually right about syscoin and syndicate. I will give you that.
>>1390134
They are selling for 0.001 and buying at 0.006. Thats pretty strange dont you think?
Do you know where will this pum get?
>>1390143
>0.001 and 0.006
No, its 0.0001 and 0.0006. Sorry.
why oh why oh why did i buy LBC
>>1390197
He didn't PnD'd
What made you think this would stay up?
t. Guy who made the first LBC info thread.
Alright /biz/, gimme the TOP 5 best long altcoins. GO!
>>1390218
Define "long"
But surely you're not talking within a few months to a year.
>>1390247
By long i mean 1-2 years. But as you mentioned, could you tell some six-months coins?
>>1390134
Don't listen to this moron, IOC is on a pump right now. Unless you are on the dev team or their inner circle, you will not know of scheduled announcements ahead of time. That's what makes them announcements.
If you've been in crypto for 2 months you dont know shit.
For example, IOC already stated they are doing a big news hangout on the 24th(not 23rd). Its all in the thread.
Investing in crypto comes down to research and more research. It's gambling. Even then, the crypto market is crazy, there are pumps and dumps everywhere, its a super volatile speculative market. Let no one else tell you otherwise.
I agree that NEXUS is the coin to watch. TRUMP , SYS, Putin, all coins to watch.
But really, the market is crazy, its never been a better time to invest. There are always new innovative creative altcoins being made, its a hot market for sure.
>>1390274
opinions on BURST?
>>1390277
The nice thing about BURST is that it's a coin that never had a serious pump except on its release. I like investing in "deep sleep" coins like these. Why? Because its cheap, and any altcoin that has not pumped yet usually gets pumped one day or another.
The more well collected the dev team is, the better the marketing, better features etc the bigger the pump and interest, of course. Once this happens they can gain traction and you have serious money on your hands.
I like BURST, it's a nice speculative buy, sort of like Bitcoin Plus, Spreadcoin, etc. Deep sleep coins flying totally under the radar that can pump when you least expect it. Big gains.
>>1390134
>If you want an easy tip, there are news for ICO on the 23rd of this month. So it will pump a bit before the event. Buy the rumour, sell the news ;)
this isn't a tip, its on their fucking twitter you little shit, stop pretending to know anything about anything
anyone who wans to get on this shit should be on twitter. if you are in the US, you should esecially follow @crypto_profit. he called (or did himself) the $XAI pump yesterday and was tweeting about it all the time. i cashed in at 10k, cashed out at 20k. he did the same for Bitz today. too late for that now though, train seems to have left the station
I like going on Coinmarketcap and skimming through 200 - 350 ish position coins.
My reasoning is that coins typically experience +500% gains and such at these levels, but once they hit top 100 its a different story. You're buying in when a whole load of people already bought in and there is less room for growth.
Thats why coins like BURST, Geocoin (a personal favorite of mine, wish I bought into it earlier) Syndicate, coins that arent listed on major exchanges are the ones that offer big gains. That jump to top 100 is easier than you think. Its once they hit the top 100 and especially top 50 that moving upwards is more difficult.
This is my first and last post in one of these altcoin shilling threads
The only crypto you should hold is bitcoin
Everything else is trash, good luck out there, there's a lot of scammers trying to get you to part with your usd/btc
>>1390272
>1/2 years
Eth, Lisk, Rise and maybe LBC.
>>1390272
>6 months
Btc is safest bet, game, dgb and xmr will likely do well.
>>1390272
DGB
the base is getting bigger and bigger and ready for serious action.
A coin with long term gain and not a quick pump and dump
LBC is going to moon soon
Also:
>>1390274
Putin is not to watch but to buy!
We are currently in the first phase, getting attention.
First breakout will be soon.
what is the best ico coming up this season?
waves failed, rise looks like it's in the process of failing, stratis won't hit their 1000 Bitcoin goal to continue work, they might just run with the money, verium maybe?
>>1390559
Wings
>>1390274
IOC hasn't pumped yet. Investing in crypto is gambling only for retards like you. TRUMP and putin coins to watch? Fuck off shill.
>>1390277
it won't moon
>>1390298
I said easy tip, didn't I? Of course I am not going to tell you fuckers my best tips.
Stop shilling your twitter account, you delusional cuck
So much easy money right now on this arbitrage
>>1390662
your best tips? like buying some btc so you can get in a p&d group and struggle to leech off of them, or alternatively, beg your way in?
this is not my twitter account, and it's not shilling when, if you actually check out the twitter, you can see that the guy legitimately gives tips on what is about to pump before it does.
and I don't know about Putin, but Trump is going to pump (again), and you have to be a retard not to see that.
so if you are not going to share actual useful information, since this is what the thread is for, how about you fuck off the thread with your "easy tips" and shove them in your already stretched asshole
>>1390707
>like buying some btc so you can get in a p&d group and struggle to leech off of them, or alternatively, beg your way in?
projecting much?
>I don't know about Putin, but Trump is going to pump (again), and you have to be a retard not to see that.
so you are not a shill, you are just a retard.
>so if you are not going to share actual useful information
I gave way more useful info than your pathetic attempts at trading
>>1390559
>>1390578
I agree
>>1390720
>projecting much?
you don't seem to be denying though
>so you are not a shill, you are just a retard.
wow, i called you a retard and you called me a retard back
>I gave way more useful info than your pathetic attempts at trading
no, you didn't give shit. "haha here's my EASY TIP GUYS BUYS IOC IT WILL PUMP THIS WEEKND ;^))))" when anyone dabbling in shitcoin trading for a week or two would already know that.
get over yourself you narcissistic little shit, you don't have any reason to act elitist
>>1390732
>you don't seem to be denying though
I am not an insecure faggot like you. I don't care what you think, I know who I am and what I do.
>wow, i called you a retard and you called me a retard back
the difference is that you are wrong and I am right
>when anyone dabbling in shitcoin trading for a week or two would already know that.
again, I said "easy tip", didn't I? I am not going to give my best info to faggots like you
>>1390274
>For example, IOC already stated they are doing a big news hangout on the 24th(not 23rd)
Wow you seem so informed and honest anon, we should totally follow your advice
>>1390749
To be fair the 24th was posted, it was a typo...not everyone is lying, even the developers get it wrong, dumbass
>>1390745
>I am not an insecure faggot like you. I don't care what you think, I know who I am and what I do.
That's exactly what you are. You're clearly projecting. You're the one throwing a bitchfit on here like a little cuck, apparently everyone you don't agree with is some kind of shill or agent.
>IOC hasn't pumped yet. Investing in crypto is gambling only for retards like you. TRUMP and putin coins to watch? Fuck off shill.
IOC HAS already pumped, take a look at coinmarketcap. Does that look like a pump to you? Yep, it is! It already gained. People aren't stupid. Yes, it will gain this weekend, but the crypto community already is on it.
BURST has already gained, ironically.
Even others can see that you're the one being a shill. "HERE IS MY LITTLE TIP HURR DURR IOC IS GONNA PUMP THIS WEEKEND." Fucking idiot
>>1390877
> Yes, it will gain this weekend, but the crypto community already is on it.
and? if it pumps, it pumps
>Even others can see that you're the one being a shill. "HERE IS MY LITTLE TIP HURR DURR IOC IS GONNA PUMP THIS WEEKEND." Fucking idiot
ok anon, then I will let you losers keep losing money. Your choice.
>IOC HAS already pumped, take a look at coinmarketcap
this just proves how retarded you are
> Does that look like a pump to you?
no
And anyone with half a brain knows that Trump and Putin are gonna pump again, the crypto market not a fully rational market. Dogecoin is living proof of a successful coin that has no practical value other than being a meme, its on the top 20.
>>1390895
Yeah okay you're right, its not a pump, it hasn't gained at all, yeah you're totally right.
Everyone invest in IOC you're gonna get huge gains, Anon is totally right he is schooling all of us with his 2 months experience in crypto, lmao, with is super top secret information that no one knows about
>>1390919
>I can't read charts: the post
Yea, IOC already pumped. If you own it still, hold. If you dont already hold it, don't buy otherwise you'll buy the dump. GAME and CJ and XRP all this weekend.
>>1390914
please pump doge
i fell for the meme very late in the game .
If I hold on to them they're bound to pump sometime right?
>>1391007
Doge gets two pumps every year. I haven't looked at the chart today but it had another fall so maybe it's bottomed out now and due for another pump sooner than later. It has been a very long accumulation period for it so the pump will be huge but you're better off longing Bitcoin right now than waiting on doge.
>>1391027
Thanks for the reply. When you say that you mean just buy bitcoin and stash it in a wallet for a long time right?
>>1391044
No. Do you know what leverage trading is? If not, it's basically trading with ten times the amount of Bitcoin you own. So if Bitcoin goes up 1%, your trade goes up 10%. I suggest you long (betting on price going up) Bitcoin now and short (betting on price going down) Ethereum.
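The arithmetic that post describes is just multiplication; a minimal sketch, assuming the simple linear model it implies (price move scaled by the leverage multiple, ignoring fees and funding that real platforms charge):

```python
# Sketch of the leverage arithmetic described above. Assumed linear model:
# return on the posted stake = underlying price move x leverage multiple.
# (Real margin platforms also charge fees and funding, ignored here.)

def leveraged_return(entry_price: float, exit_price: float,
                     leverage: float, short: bool = False) -> float:
    """Return on the posted stake as a fraction (1.0 == +100%)."""
    move = (exit_price - entry_price) / entry_price
    if short:
        move = -move  # a short profits when the price falls
    return move * leverage

# x10 long on BTC/USD: a 1% price rise is ~+10% on the stake
print(leveraged_return(675.0, 681.75, 10))            # ~0.1
# x5 short on ETH/USD: a 5% price drop is ~+25% on the stake
print(leveraged_return(0.020, 0.019, 5, short=True))  # ~0.25
```

The same function covers the x10 BTC/USD and x5 ETH/USD pairs mentioned later in the thread by changing the `leverage` argument.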
>>1391084
I did a bit of reading but if I make some bad leverage trades will I be in major debt to whale club?
>>1390218
>>1390218
>>1390218
>>1388699
I'm waiting for a dip before buying more Monero but I have that luxury since I already own heaps. I'd start averaging in now if I didn't have any. It's not like stocks where you pay $20 for every trade.
okay faggits, we starting soon
>>1391303
>vcr
nope, sell it
>>1391306
*vrc
>>1391122
No. You can't trade with more than you own. So if you only have .01btc to trade and your trade goes -100% then you only lose .01btc.
>>1391323
Let me rephrase that, you can't lose more than you own. You trade with more but you cannot lose more.
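The "can't lose more than you own" point amounts to a floor on the loss: however far the price moves against the position, the loss is clamped at the margin posted. A sketch under that assumption (not any particular exchange's actual liquidation logic):

```python
# Margin floor: losses on a leveraged position are capped at the posted
# margin, however far the price moves against you. (Assumed mechanics,
# not a specific exchange's liquidation engine.)

def margin_pnl(entry_price: float, price: float, leverage: float,
               margin: float, short: bool = False) -> float:
    """Profit or loss in BTC, floored at losing the whole posted margin."""
    move = (price - entry_price) / entry_price
    if short:
        move = -move
    return margin * max(move * leverage, -1.0)  # never lose more than margin

# 0.01 BTC margin, x10 long from $675: even if BTC halves, only 0.01 is lost
print(margin_pnl(675.0, 337.5, 10, 0.01))  # -0.01
# a 10% rise doubles the stake instead
print(margin_pnl(675.0, 742.5, 10, 0.01))  # 0.01
```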
>>1390550
YOU FUCK YOU WERE TOTALLY RIGHT
Everyone saying lbry is gonna do good, it won't do good when you have retards that post shit like this
I sold around $1.50
>>1391456
.
lmao
>>1391306
you def need to sell your VCR haha
is SYS an actual sleeper or am i being thrown on the wildest shittiest ride of all time
also thoughts on MYR
>>1391636
i hope not cus i just bought a bunch and am gonna wait until august.
BTS mooning within the next 24 hours. watch for the dip
>>1391459
>>1391456
sell the news, people are eating it up
>$1.50
nice gains.
How's Waves? It seems promising.
>>1391723
SCAM
>>1391387
So I only risk what I have. I'm not the same guy, but used your invite. I hope you don't mind.
How do I make money off the site, though? Do they pay me back in BTC? How much money can I make if I only have .25 BTC?
>>1391784
guy who was asking questions here. I've been using there turbo trade option and I've turned 7 usd bucks into 9.72 usd in the past 30 minutes or so
>>1391823
How did you do it?
>>1391784
Let's say your trade whether it be long or short goes to 100% then your trading amount just became double. So if Bitcoin goes up 10%, your trade becomes 100%. Same goes with the reverse, so if it goes down 10% you go down 100%. This of course depends on the leveraged amount so btc/usd is x10 while ETH/usd is x5.
Simple as that. That's all leverage trading is
>>1391823
I'd seriously suggest not using turbo trading. It's a gamble of 45% of your trade amount, and after two bad calls you're fucked.
>>1388147
Because Memecoins are hot on shitcoin exchanges right now and I found Trumpcoin when it first launched. I invest in a lot of coins solely due to the novelty of it, because novelty actually factors into the speculative prices of these coins. I made money off of Memetic (PepeCoin when I bought it), Putin, Billary, Shrek, and others.
Still waiting for a Paul Blart: Mall Cop Coin.
>>1391860
do you have to pick a limit/stop for this to be effective
Also I think a perfect time to open a long position would be right now. China didn't dump Bitcoin the last two nights after dumping everyday since halving. They are finally going to let Bitcoin rally.
>>1391868
Completely useful. If you only want to make 20% then the target should be 2% above your buy in price. Stop is opposite as in you only want to risk 20% then you set the stop to 2% below your buy in price.
It's honestly the only good way to trade Bitcoin.
>>1391881
It just rose to 674ish.
Told you lol. It's about to hit some major upside. Do NOT miss out
Anytime before 680 is a good buy period. Just be patient.
>>1391881
i just set a long trade entering at 675.7 with a target at 695 and a stop at 655.11
pretty good balance of safe/risky?
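For the concrete numbers in that post (entry 675.7, target 695, stop 655.11), the safe/risky balance can be read off as a reward-to-risk ratio; leverage scales both sides equally, so it cancels out:

```python
# Reward-to-risk for a long: distance to target vs. distance to stop.
# Leverage multiplies both distances equally, so it drops out of the ratio.

def reward_to_risk(entry_price: float, target: float, stop: float) -> float:
    return (target - entry_price) / (entry_price - stop)

rr = reward_to_risk(675.7, 695.0, 655.11)
print(round(rr, 2))  # ~0.94: slightly more is risked than stands to be gained
```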
>>1391919
How exactly do long options work on Poloniex?
Is using faucets a good idea?
They seem somewhat retarded and winning 0.1 shekels per week is slow.
What if i set up a faucet list with referral links? Is that a good idea?
>>1391953
Faucets are a bad idea in general, stick to trading and shitty dice games.
So, what's the deal with Ethereum?
It sounds like a fucking scam.
>>1391919
So I collect my earnings when the value of BTC hits the Target I set?
If I enter at 675, have a 2.039 trade size, and my target is 280, where will that leave me if it reaches 680?
>>1390218
BTC
ETH after HF
???
???
???
>>1392018
Your target is seriously low. It'll
>>1387591
china is stirring
>>1392121
oops.
Your target is seriously low. It'll be around 10% you're up. I'd suggest 690 cause that's probably the next test. If it goes to 700, 720 is the next test as well.
>>1391939
Your stop is wayyyy too low. Make it 664. Seeing how 666 has been seeing huge resistance and if it falls below that then we're starting this whole song and dance again with China.
>>1391946
you're basically borrowing someone else's coin to trade with 2.5x leverage. Not worth it because of the fees sometimes but it all depends on how huge you expect the fall/rise to be.
Not the same guy btw
>>1392132
Alright cool. I meant *680 by the way, I think you picked up on that though.
Also I Fucked up, accidentally closed my short on ETH and was awarded -14.48% returns lol.
Should I just remove the stop? I have my target for ETH at .015 and my stop at .02, but I'm afraid it'll hit.
My BTC long is now set at 690/664 stop, based on your advice to me and the other anon in thread.
Thanks for the advice! If this goes south I'm out about $150 worth of BTC. Fuggit
>>1392161
I'd wait off on trading ETH. That market goes opposite of logic every fucking time. I'd wait and see what happens after the hard fork cause eventually that streak will break.
Also you gotta tighten up those stops cause that is seriously a lot to risk. Set a stop so you only lose 20-30%, not as much as you did. I don't know where you got in but 2 is high for a stop on a short.
>>1392171
>>1392171
Yeah well I'm probably fucked now on the ETH trade, or is that not how it works? As soon as you approve the position, things are immediately in motion as the market changes.
I kind of feel like i should have held off on the ETH trade too, but I do feel like it'll drop a little at least. If there's some way to remove a position without ripping the small amounts of funds I have though, I'm all ears.
LBC is fucking insane. Went heavy at 117. Dropped to 08. Was feeling upset with myself until it bounced back to 14 today. Really happy I held the bag on that bitch.
Kind of unsure where the market is with the BTC price sperging out this week.
>>1392215
if you market buy then you're kind of fucked if you want to close it immediately depending on the spread (the bid/ask) of the market.
has VRC bottomed out yet?
>>1392236
I'm keeping LBC through till next week. I got fucked by it earlier but I waited it out until today.
Also BTC was up to $721 on C-CEX, sold some of mine off before i bought back in when it dropped down to $697
Does anybody trade in whaleclub.co?
How these Short and Long options works?
Is there any new, promising crypto currency to gain some easy shekels?
>>1392463
Trust, LBC, if you believe in the power of based god buy BRX in it's inital stake
>>1392444
nigga how about you read the thread jesus christ
I pray for bts to moon while I sleep amen
Is trumpcoin a good Investment?
Why do people think ioc isn't a decent investment?
It doesn't have to jump to +70% or +100% to be respectable. It just has to keep doing what's it's been doing for the past 2 weeks, steadily rise in value everyday.
I don't have money on it but I really don't get the hate here.
>>1392890
>literally everyone agrees $IOC is a good investment
??
Hope you guys didn't unironically short Eth
>>1392463
SYNX is already going bananas. Get it on coinexchange.io before it hits the major exchanges
>>1392964
There is no telling where ETH is gonna go now. I wouldn't touch that with a 10 foot pole.
Invest in EXP instead, it's gonna go up these next weeks
>>1393094
True, but I would short and forget about eth.
The past 2 months has been all about the short term monetary aspect of eth, countless comparisons to btc.
the only way eth will tank is if other project overtake it, and as long as they have their well funded team then that won't happen anytime soon.
maybe not until lisk and rise matures
>>1392870
Of course it is
Holy FUCK Bitmoon fucking soared over the past 12 hours.
Fucking quintupled in price. I can't believe I didn't fucking buy yesterday.
CJ IS MOONING RIGHT NOW BITCHES!
>>1393092
is this the only exchange this coin is on? hardly anyone is selling it and the spread is like 30% of the value of the coin. is this a really hot buy right now? going to go to more exchanges?
MFW I bought 1btc worth of CJ at 30sat each, just sold for 390sat each.
Been mining a shitload of them too, I have about 80k I've mined kek
>>1393101
>the only way eth will tank is if other project overtake it, and as long as they have their well funded team then that won't happen anytime soon.
Average Joe goes to and sees Ethereum 2nd to Bitcoin and buys Ethereum not paying much attention to the Hard Fork which will be forgotten in a months time.
>he fell for the muh tech coin
lol at you
going balls deep in Trump since the new announcement
feels great man
Siacoin looking like it's ready for a strong pump, are you ready, lads?
>>1394404
no need to be, because it'll never go anywhere lads
glad this is just bait and can rest knowing nobody actually fell for the SIA meme
stay safe
What's the next big pump boys? Everything I've gone into has done me wrong but I refuse to give up. I will not stop until I have 0 Satoshis left
Anybody catch any of the NOTE goodness? Caught it last night right before it mooned
>>1394445
No. I'm not capable of independent thought. If someone on /biz/ doesn't tell me what to do, I don't do anything.
What's going on with EXP ?
to anyone I gave advice to, sorry. I should've expected the goddamn ethereum hard fork would fuck up bitcoin with all its bullshit.
Just starting, what wallet should I use?
>>1394640
yeah i actually went on whale club and longed btc exactly at 690/666 as you suggested and got fucked over, but that's ok, i should've predicted the eth shit would fuck btc too
>>1392964
It was such an obvious short that it became a bad short. Since the eth market is highly manipulated, you could predict whales were gonna squeeze shorters.
Eth is still a mess though.
>>1394431
rads
is dgb on for this weekend?
Well, bitmoon fucking tanked.
Do you guys think CJ's going to keep rising, or is it due for a crash?
>>1395556
currently the building of a big foundation and buy walls to keep it up one satoshi at a time.
I'm happy that I removed my seels early enough before it went up, because it won't go down that low anytime soon.
BUt yes, buy NOW sell on sunday.
>>1394431
LBRY
>>1393101
>lisk
Lol you do know its a shitcon right
>>1392236
LBC is gonna moon
>>1395771
Can you elaborate on this DGB weekend event? Is it just a pump? Is it a legit currency?
>>1395798
Now it is just a pump.
It is a legit coin for long term.
The devs posted an updated roadmap a week ago.
This coin has some huge potentials.
Of course I'm dumping a big part when the price goes up but I've put some in a core wallet which I won't touch for a year.
Just keep an eye on it or even join the telegram group where the party is at.
Sup goys,
I looked into ASIC miners today, made some calculations, and it looks to me that with electricity prices at 0.15$/kwh investing into a miner gives 60% yearly return (sorry for wrong choice of words, english is not my first language). Can it even be? I mean, how the hell is not everyone and their mother owning a bitcoin farm?
>>1395971
Yearly return for what coin?
The problem with miners is that the technology is always improving every 2 -3 months, the prices of coins change rapidly, and the difficulties increase, so you get decreased ROI as time goes on.
>>1396009
Bitcoin, the most common antminers. Looking at their "newer" s9 model.
Is Putin the new meme coin? Thinking of picking up a couple BTC worth since it's so cheap, I feel like it has real potential.
>>1396013
Check your calculations. I am getting about $1,200 total ROI for 360 days based on the S9 that costs $2,400.
If it sounds too good to be true, it probably is. I've never heard of BItcoin being profitable for the average joe without an established farm for awhile now.
>>1396018
It's all about Trumpcoin now. TO THE MOON, BABY.
>>1396026
Right, I have doublechecked, and used the calculator you suggested. I completely ignored difficulty ramping up. If I understand it correctly, in a year I'll be making 40% less than I would be making now. So implying that the exchange rate stays the same, it's not really worth it.
I just forgot that we now have 12,5 instead of 25 per block.
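The correction above — flat payback estimates ignore difficulty growth — can be sketched with a simple decay model: each month the same hardware earns proportionally less as network difficulty rises. All figures below (daily revenue, 5% monthly difficulty growth) are illustrative assumptions, not S9 measurements:

```python
def mining_revenue(daily_revenue, monthly_difficulty_growth, months=12):
    """Total revenue over `months`, with earnings shrinking each month
    as difficulty grows (hashrate held constant)."""
    total = 0.0
    monthly = daily_revenue * 30
    for _ in range(months):
        total += monthly
        monthly /= (1 + monthly_difficulty_growth)
    return total

flat = mining_revenue(8.0, 0.0)    # naive estimate: no difficulty growth
real = mining_revenue(8.0, 0.05)   # with 5% difficulty growth per month
print(round(flat, 2), round(real, 2))
```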
>>1396018
Putin has the potential to be the new Meme Coin.
I already put some halve a BTC in at 29Sat.
As far as I can tell, I'm in for profit :)
>>1396133
Putin is going to be the LTC to the BTC of Trumpcoin
>>1396183
Compare Putin vs Trump in real life.
Putin will fuck Trump so hard. The same will happen with the coins.
Some stupid looking Choina Choina Choina dude or a build up strong person who you can look up to and call a man.
Putin, my dear friend, Putin.
Wow, look at what I called a few days ago, SYNX. I fucking called it lmao. SYNX is killing it, every day a new high, its ridiculous, I cant get my buy orders in fast enough.
Hop on the SYNX train fellas who knows how high this thing is gonna go.
>>1396454
definitely a good coin to get on. I'm on it too and it's not even at a real exchange yet.. Idk if the technical aspects of the coin are actually useful in any way but the dev seems to be working on it to some degree
>>1396192
the chinese would definitely prefer the putin coin at face value
>>1395042
I'm seriously sorry, dude.
ETH is such a weird fucking beast. I'm probably going to use opposite of logic when it comes to ETH from now on.
>>1396454
It was only minable for a month and has very low supply. It is basically everything Trumpcoin fags wish trumpcoin was
let's say I have 20 btc. Will I become rich in my lifetime?
>>1396617
I don't know. Did you invest in kneepads as well?
>>1396617
Yeah probably. 10 years if bitcoin doesn't get fucked by something catastrophic.
>>1396623
In 2026 the price of bitcoin will be worth $1000. In an ironic twist of events, $1000 will be worth $660 dollars.
>>1396617
Sell it and use the $12000 to buy a cheap house and rent it out so that you can be collecting money from it every month instead of it just sitting in a Bitcoin wallet.
>>1396746
> Finding someone who would want to rent and live in a 12k house
Enjoy your squatter junkies
>mfw I bought SYS at 530 back then
>sold at 3500
>bought back at 700
>its rising again
I'm one lucky son of a bitch..
SIA exploding
get in
>>1396913
>6.8%
>exploding
bagholder detected
>>1396576
Now imagine what will happen, when a coin has the support of the chinese ;)
It doesn't matter how much Trumps "loves" ChOina.
>>1396932
lel.. Crypto noobs are one of the most stupid beings alive
>China wakes up
>Wait what happens if Chinese start supporting the coin
This is all bullshit. Usually Chinese are the first jumping into coins.. and unlike common belief, those motherfuckers never sleep.
>>1396605
You do realize Trumpcoin has a lower supply than SYNX right? Lmao
>>1397057
No, they have about the same supply in absolute terms, but nobody gives a shot about Trumpcoin so that supply for it isn't low at all, it is way too high. Furthermore trump coin is heavily pre-mined synx not. Add to that the fact that synx has a team behind it that is not made of complete retards and is actually trying to do something interesting
>DGB finally starting to pump
wew, was worried the shills of /biz/ had let me down for a moment
>>1395901
why specifically sunday?
>>1396596
Always buy it pham
Dgb is mooning. Don't take my word for it, check it out.
Where do you think will DGB end in mid and longterm?
>>1397269
I don't know, I'm long on dgb but I do know this happened
https://twitter.com/DigibyteMod/status/756461106537771008?s=09
Which coins are expected to moon next?
>>1397414
Trumpcoin...no seriously, Trumpcoin is turning into a very unique coin that has serious potential to make history.
A shorter term moon...SYNX is already going bananas, i would grab some if you havent, I think its gonna be a hot coin
Whatever you do, dont touch EXP, ETH, LSK or any other coin that is linked to ETH, that martket is nuts and totally manipulated
>>1397704
Is SYNX on any exchanges other than CoinExchange.io? Because that exchange seems pretty dead. They've only had ~28 BTC worth of trades in all their markets combined for the past 24 hours.
>>1397734
unfortunately, no. this seems to be it for the moment. lot's of speculation in the thread that it will soon be added to bittrex.. that's the hope anyway. coin seems to be going pretty well.
I impulse bought some LBC (lbry shit coin) a few days ago when KAT got taken down. I mistook it for a blockchain that holds torrent addresses instead of whatever frankenstein it actually is. It's down like 20% in just a few days. Should I hold it or just let it go? Luckily I picked up some eth at the same time so think I'm only slightly in the red.
WHEN THE FUCK IS POLO GETTING WAVES GODDAMNIT HOLY FUCK THEY ACCEPT EVERY OTHER GARBAGE COIN
>>1397159
Such a weak coin tho.
I have dgb and Trump, two of the most shilled coins, for a good while now.
But I can never shill them with confidence because they look like some of the most underwhelming coins out there.
Every coin that has been shilled to the capacity of these coins has never done well apart from Eth.
>>1398643
>DGB
literally make money playing CSGO, one of the most popular games in the world right now. it's fantastic
>trumpcoin
self-explanatory
>>1398509
my guess is by September. Waves is testing some shit that is required by polo for testing the coin or some shit like that
>>1398643
DGB pumps once in a while. It's not really a coin that will make you rich, but if you will benefit of the pumps and earn some dosh every 3 months or so (assuming you short it and buy the dip)
Trumpcoin is a shitcoin, you fell for a meme
>>1397269
Not very high.
>>1398668
Fair explanation of dgb, it does indeed happen for other coins too so I can see why people have faith in it.
I do have faith in the tech, but I thought it misleading that people describe it as a get rich quick scheme.
>Trump
I have 2 Trump coins, lol.
>>1398655
And yet another non-explaination on why Trump coin is a good investment that isn't backed my a political one.
>>1398775
I was on board with trumpcoin early on, but the coin's marketing always seemed like shit and the dev was literally some chicken fucker from india. I haven't followed the drama with the dev team at all for weeks and I imagine it's probably mostly resolved.. but still. As soon as all that fucked up shit happened with the dev and the other people working on the coin I knew the coin was getting a giant setback.
If it was as simple as a well branded and competently marketed coin bearing Trumps name that would be one thing. But the project has been horribly mismanaged from the get go by literal indians with IQs <98. /Biz/ got so attached to the coin in the short period that it was going well that they've entered a permanent bag-holding phase and committed themselves to pushing whatever community means possible to get the coin headed in a healthy direction.
There may be money there at the end of the day, and I might check back in and buy some time, but the project has a lot of negative market sentiments it needs to break through.
>>1388796
Lmao shut up you fucking imbecile
>>1398884
It was the Indian talking to himself for months
I've done nothing crypto related for a couple years, so I'm just wondering, what is the current standard bitcoin mining device? Has the hashrate of new asics started to plateau yet?
>>1398884
>There may be money there at the end of the day, and I might check back in and buy some time, but the project has a lot of negative market sentiments it needs to break through.
I agree, but this is crypto. Negative market sentiment is quickly forgotten, and when I say quickly forgotten I mean within 3-4 weeks lmao.
And if you've been following the project it has not been indians, just more myths, the coin was created by a DJ in Scotland, the coin has not been heavily premined, and is .6.5 million coin POS.
>>1397132
More myths about Trumpcoin, it has not been heavily premined.
If you'd bothered to read the Trumpcoin announcement you'd realize that Trumpcoin is trying to do something brand new never before done in Crypto. You gotta give them credit
>>1399043
>If you'd bothered to read the Trumpcoin announcement you'd realize that Trumpcoin is trying to do something brand new never before done in Crypto.
there is literally nothing new in what they are doing
Anyone else get a meme coin gift from Kraken?
what's up with $NIRO?
that p&d twitter Fontas won't shut up about it, along with some others
worth buying into?
>>1399053
Okay, please point of a coin that is used as a Campaign donation superpac to a Presidential candidate. I'll be waiting. And no, Ron Paul coin didn't do that.
So I'm waiting.
I'm up 625% on ETH. At what point should I sell a portion to recover my original investment?
>>1399187
right about now
>>1399199
Convince me why it's not too early to sell. I think of people who bought BTC early and then sold at $25.
>>1398655
>literally make money playing CSGO, one of the most popular games in the world right now. it's fantastic
Decided to test this by linking up my steam account and playing a competitive match.
How long does it actually take to update the wallet?
>>1399244
actually it's very slow at the moment... like, days slow
i wrote to their not so good customer service some days ago and they told me they experienced an attack and this is what is slowing down payments.
play today and check your digibyte balance tomorrow in your CSGO section, and click on the Claim Digibytes there. i claimed some on the 19th and they still haven't arrived...
>>1399267
Cheers, anon.
Yeah, their share rewards aren't working either so I guess I'll check in tomorrow. It's still a neat idea.
>>1399244
>>1399267
I've gotten payments <24 hours. I've had cash outs immediately after playing. The only bad part is it's a manual claim now - used to be automatic.
>>1399300
That's like a few cents for hours of CSGO. I guess if you would be playing it anyways that's fine but not really an incentive to play if you otherwise wouldn't..
>>1399300
Neckbeard Basement dweller here. Just signed up for this. What am I in for? will I be able to afford vidya?
>>1399349
Calculating using the ATH of 185sats and BTC at $600, 1669 is the equivalent of $1.85. So no, probably not.
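The dollar figure above is plain unit arithmetic: amount × satoshi price / 10^8 × BTC price in USD. A small sketch, with an illustrative function name:

```python
SATS_PER_BTC = 100_000_000  # 1 BTC = 10^8 satoshis

def altcoin_to_usd(amount, price_in_sats, btc_usd):
    """Value of an altcoin balance in USD, via its satoshi price."""
    return amount * price_in_sats / SATS_PER_BTC * btc_usd

# The post's numbers: 1669 DGB at the 185-sat ATH, BTC at $600.
print(round(altcoin_to_usd(1669, 185, 600), 2))  # 1.85
```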
>>1399337
Agreed, I already play the game so might as well add to my bag for no extra effort.
>>1399359
get your not existing friends to add their LoL summoner to you DGB gaming account. With this you can also get some extra DGB.
I'm already in for >10$. trading this at polo with every up and down.
It is 1 click per day and if it really happens that DGB hit hard, you made a few hundret or thousand dollar.
>>1399359
Oh I forgot:
You can also use these DGB as tipping in the twitch chatbox, twitter or youtube.
You can also use it for tipping via the telegram chatbot, which is pretty cool.
Search the Bitcointalk thread for this or even join the telegram group with the devs an community.
>>1399374
Sell your DGB and move the BTC on to Yobit. $10 worth of DGB won't get you shit but you can easily make that worth a lot more day trading the hot and cold shitcoins over there.
>>1399377
The problem with yobit gambling is to guess the correct shitcoin that pumps 300-500% on that day.
Everyday another coin pumps this hard, but how do you know which one the next will be?
What's the deal with Onecoin? I talked to someone about crypto IRL for the first time today and they mentioned they were in it
He was some crazy /pol/ guy though talking about taxes are theft rothchilds yadda yadda
>>1399403
it's a scam
>>1399403
>crazy /pol/ guy
>crazy taxes are theft
>crazy anti-government talk
>crazy talk rothschilds are evil
>crazy rant going on about niggers
>crazy unironically going to go vote for trump
>crazy this guy i think he also actually owns guns
>this guy is really crazy, man he needs to get his head screwed on straight.
>>1399187
Never, because you should never invest money that you can't afford to lose.
If you can afford to lose the seed money, then it's always in your best interest to leave it all in. Only withdraw if you're cashing out.
What in god's name is going on at the Poloniex exchange. Ethereum Classic (ETC) added a couple of hours ago
Want a real gamble? Buy Ethereum classic.
I would not touch that market at all. What's going down right now is Ethereum civil war, basically.
Huge dips, Ethereum classic going from a low of .0004 to .014 in the span of an hour. Lol, Crypto trading markets make the Stock market look like child's play.
>>1399810
To .0014, that is
>>1399810
Already did, just look at the massive buy wall already developing, that things gonna go ballistic for a while yet.
Jesus I've already doubled my money
Got in ETC at .007
Shit has been literally mooning for the last 2 hours and I can't imagine most of the market is even awake.
This is one of the craziest nights I've ever seen in crypto
What the fuck is ETC?
Googling for it is futile because etc is so common.
got etc at .0012, when do we get rich?
etc tanking
>etc is added before waves
>>1399848
Ethereum Classic, basically the pre-fork version of Ethereum proper.
>>1399337
>.
don't even know why i listened to you, bought in at 1.2k as you suggested and its now down to 961 buy orders
>>1399899
>>1399865
>>1399853
>>1399848
OK am I right when I say:
> I have all my ETH in the new fork when I upgrade my client
+
> I have the same amount of ETC when I keep using the pre-fork client
which basically means all pre-fork ETH is free money?
>>1399906
Not that guy but sub 1k is a very unusual downswing.
Either way just sit on them, you will get more than you paid sooner or later.
This shit is making me look like a little boy, ETC is tanking now.
Will etc always persist?
If that's the case, then it will likely go back to it's prior value and right now there's ton of free Eth?
>>1399924
ETH is dead. Just sell your memeriums for Bitcoin already.
>>1400114
I understand it now.
I can see why ETC can do well, but it's too early to tell.
It's getting a lot of publicity and it would likely rise, I thin it will likely rise, but this isn't a bagholders coin.
Amid all this, Exp can take profit from all this shit once the smoke clears.
16bfmgtNuGkcZzzDnKaVn1N8ZNoa3Yo4DX
Pls gimme some 1/10000 of a Bitcoin, /biz/. I'm a lowly humble newb asking for a little help.
>>1400556
That's 66 cents, you stupid beggar. Don't fucking beg here and shit up this board.
I am the anon who is making that RPI BTC ATM.
>Why
Originally, so I can get a percentage of the volume donated to me, but I genuinely like the idea and I think it has a lot of potential.
>How can I get one
The only options here are to build it yourself or wait for me to sell a prototype (probably on OpenBazaar).
>How far along is development
Not too far along, I am going to test the rough transaction code right after the wallet syncs up, and hopefully everything works
>Presentation explaining everything
Will answer questions and comments for about an hour.
Opinions on GRC? Seems to be really cheap right now
>>1401115
Do you cry yourself to sleep every night?
What is your favorite trans-sexual porn actress?
>>1401175
>Do you cry yourself to sleep every night?
No
>What is your favorite trans-sexual porn actress?
I hate them all
Despite the SYNX crash I am still very bullish on SYNX and believe it to be a sleeper long term
I’ve had a look recently at Django’s templates. I know most of my readers are fans of TurboGears, so if you are one of them, hold yourself from throwing tomatoes at me – I was only playing with it. It was nothing serious.
Then, I came back to Kid, and started to miss Django’s filters. These filters are a lot like UNIX pipes. You can do things like:
{{ identity.is_anonymous() | yes_no }}. The output of this will be the value returned by
is_anonymous() after being processed by the
yes_no filter, which translates a boolean to either “yes” or “no”.
Some filters can do really cool stuff. For instance, consider this:
{{ file_size | filesizeformat }}: this would print the size in a human-readable format (i.e. 17 KB, 102.4MB and so on).
{{ story.text | escape | linebreaks }}: this will escape all ampersands from the text and then convert all newlines to <p> and <br/>. This also demonstrates that filters can be chained.
You have {{egg_count}} egg{{egg_count | pluralize}} left: will pluralize the egg noun with the suffix 's' if egg_count is not one.
“But you can achieve the same by using the existing functional notation!”, I see you thinking. “You can do
{{ linebreaks(escape(story.text)) }}. And it has the benefits of using a pure Python expression”. That’s right. But having pipe-notation:
- is 217% more fun, since it is more intuitive and readable
- makes it easy to build a standard library of filters which can be commonly used, to the benefit of the frameworks' users.
- makes it easy to add project-specific filters, without having to pass them as a template variable (they aren't).
- adds a new dimension to Kid or Genshi. Currently they work great on the XML level, which is their primary design goal, but are a bit lacking in text interpolation capabilities. The stuff surrounded by those XML tags also matters 🙂
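The mechanics behind the pipe notation are simple to sketch: a registry maps filter names to functions, and the chain is applied left to right. In the minimal Python sketch below, register_filter loosely mirrors the patch's API, while apply_filters and the registry layout are illustrative assumptions, not Kid's actual internals:

```python
# A minimal registry of named filters, applied left to right, mimicking
# the ${ value || f || g } pipe notation.
_filters = {}

def register_filter(name, func):
    """Make a filter available under a pipe-able name."""
    _filters[name] = func

def apply_filters(value, *names):
    """Fold the registered filters over the value, left to right."""
    for name in names:
        value = _filters[name](value)
    return value

# Two of the filters mentioned in the post, in simplified form:
register_filter('escape', lambda s: s.replace('&', '&amp;'))
register_filter('linebreaks',
                lambda s: '<p>' + s.replace('\n', '<br/>') + '</p>')

print(apply_filters('fish & chips\nfor two', 'escape', 'linebreaks'))
# -> <p>fish &amp; chips<br/>for two</p>
```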
Then, I thought that there should be no reason that this will not be easy to implement with Kid (or Genshi). Fast forward 30 minutes, and it is now possible to do all the above in Kid. To comply to Kid and Python syntax, it is done with double pipes:
${ story.text || escape || linebreaks }
or
${ story.link || urlize }
The last one converts URLs in the text to a clickable link.
To get a rich set of filters to start with, I just borrowed Django’s default filters module. It was fairly easy to disconnect the dependency of most on them on Django. I threw away the rest for the time being. Adding your own filter is also extremely easy. You can just add to controllers.py the following:
def handle_shorturl(s):
    """Returns the URL without the query string"""
    return s.split('?')[0]

from kid.filterslib import register_filter
register_filter('shorturl', handle_shorturl)
If you’d like to start playing with it as well, I’ve prepared two patches, against Kid 0.9.3 and Kid 0.9.4:
To apply the patch, just cd into the kid directory and type:
patch -p1 < djangokid-0.9.4.patch.
For a superset of what is currently implemented, see: Django built-in filter reference.
UPDATE: I've also added a patch for Genshi.
I'm looking forward to your feedback.
TOO COOL!!! I wanted something like that for a long time! Pipe with templates are really cool!!! Now, we need it with genshi heheh :P.
/me puts the tomatoes down…. this seems interesting
so the idea is to have a library of filters with common functions….
ok real world test now just one question why will it be usefull to have all this on output side? I mean if I’m always going to
${ story.text || escape || linebreaks }
why not process that when it gets input do it ones and store it like that on the db?
it could be usefull for something very dynamic but I don’t see much use (other then being cool) on output.
now implementing it as a decorator or even as some sort of validator could be a more “usefull” way of having pipes.
Jorge,
It is very often than you need to truncate a very long string or convert a boolean to ‘yes’ or ‘no’. It is a form of representation that shouldn’t be kept in the db.
There are forms of representations that are tightly coupled with the template. For example, when a design of a web page becomes narrower, your want to truncate the strings even more.
Maybe using pipes with widgets would be nice too. Nadav, what about using double dollar. Like $${}? Would that be very hard to implement, instead of using double pipes? Or ${“some_output | lower | prettier | hot_chik_pretty | turbogears_pretty”} using string. Double pipes seems so wrong to my eyes…(but i’ll use it anyway :P)
ps: Would patching Genshi the same way as patching kid?
Italo, The problem is that one | is a Python operator. It leads to ambiguous expressions. Unfortunately also > and >> are taken. But I’m open to suggestions.
Patching Genshi was basically dropping the same lines of code, but elsewhere. Genshi itself does things a lot differently than Kid.
Oh well, can’t think of something better either. Pipes are so perfect :p. Double pipe won’t kill me. ^^
Pingback: Filters 0.1 Released | Nadav Samet’s Blog
Provided by: manpages-dev_4.16-1_all
NAME
       mount - mount filesystem

DESCRIPTION
       Since Linux 2.6.16, MS_RDONLY can be set or cleared on a per-
       mountpoint basis as well as on the underlying filesystem.  The
       mounted filesystem will be writable only if neither the filesystem
       nor the mountpoint are flagged as read-only.

       Another exception is that MS_BIND has a different meaning for
       remount, and it should be included only if explicitly desired.

ERRORS
       EFAULT One of the pointer arguments points outside the user
              address space.

       EINVAL source had an invalid superblock.

       ELOOP  Too many links encountered during pathname resolution.

       ELOOP  A move operation was attempted, and target is a descendant
              of source.

SEE ALSO
       mountpoint(1), umount(2), mount_namespaces(7), path_resolution(7),
       findmnt(8), lsblk(8), mount(8), umount(8)
COLOPHON
This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
How Optional Breaks the Monad Laws and Why It Matters
Java 8 brought us lambdas and streams, both long-awaited features. With them came Optional, to avoid NullPointerExceptions at the end of stream pipelines that might not return an element. In other languages, may-or-may-not-contain-a-value types like Optional are well-behaved monads – but Java's isn't. And this matters to everyday developers like us!
Introducing java.util.Optional
Optional<T> is described as “a container object which may or may not contain a non-null value”. That summarizes pretty well what you should expect from it.
It has some useful methods like:
- of(x), that allows creating an Optional container wrapping a value x.
- isPresent(), that returns true if and only if the container object contains a non-null value.

Plus some slightly less useful (or slightly dangerous, if you will) ones like get(), which returns the contained value if it's present, and throws an exception when called on an empty Optional.

There are other methods that behave differently depending on the presence of a value:

- orElse(v) returns the contained value if there is one, or v by default if the container is empty.
- ifPresent executes a block of code if and only if there is a value.
Curiously enough, you can see that in its class description there is no mention of methods like map, flatMap, or even filter. All of them can be used to further process the value wrapped by the Optional (unless it is empty: then the functions aren't called and the Optional stays empty). Their omission might have to do with the fact that, in the intentions of the library creators, Optional should not have been a monad.
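The empty-propagation behaviour just described — map and flatMap silently skip the function when the container is empty — is easy to illustrate with a small Python stand-in. The class below sketches the semantics only; it is not Java's actual API:

```python
class Opt:
    """A tiny Python stand-in for Optional, for illustration only."""

    def __init__(self, value):
        self._value = value

    @classmethod
    def of_nullable(cls, value):
        return cls(value)

    def is_present(self):
        return self._value is not None

    def map(self, f):
        # On an empty container, f is never called.
        return Opt(f(self._value)) if self.is_present() else Opt(None)

    def flat_map(self, f):
        # f itself returns an Opt; empty containers short-circuit.
        return f(self._value) if self.is_present() else Opt(None)

    def or_else(self, default):
        return self._value if self.is_present() else default

print(Opt.of_nullable("hi").map(str.upper).or_else("empty"))   # HI
print(Opt.of_nullable(None).map(str.upper).or_else("empty"))   # empty
```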
A Step Back: Monads
Yikes! I can picture some of you sneering when you read that name: monad.
For those who didn’t have the pleasure yet, I’ll try to summarize an introduction to this elusive concept. Be advised and take the following lines with a grain of salt! To go with Douglas Crockford’s definition, monads are “something that once developers really manage to understand, instantly lose the ability to explain to anybody else”.
We can define a monad as:
- A parameterized type M<T>: in Java terms, public class M<T>.
- A unit function, which is a factory function to make a monad out of an element: public <T> M<T> unit(T element).
- A bind operation, a method that takes a monad as well as a function mapping an element to a monad, and returns the result of applying that function to the value wrapped in the monad:

public static <T, U> M<U> bind(M<T> monad, Function<T, M<U>> f) {
    return f.apply(monad.wrappedValue());
}
Is that all there is to know about monads? Not really, but that is enough for now. Feel free to check the suggested readings at the end of the article if you’d like to read more on the subject.
Is Optional a Monad?
Yes and no. Almost. Definitely maybe.
Optional per se qualifies as a monad, despite some resistance in the Java 8 library team. Let's see how it fits the 3 properties above:
- M<T> is Optional<T>.
- The unit function is
Optional.ofNullable.
- The bind operation is
Optional.flatMap.
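Spelled out against the standard API, the correspondence looks like this (the method names `unit` and `bind` are just local aliases for this sketch):

```java
import java.util.Optional;
import java.util.function.Function;

public class OptionalAsMonad {
    // unit: Optional.ofNullable wraps a plain (possibly null) value.
    static <T> Optional<T> unit(T element) {
        return Optional.ofNullable(element);
    }

    // bind: flatMap applies a function that itself returns an Optional.
    static <T, U> Optional<U> bind(Optional<T> monad, Function<T, Optional<U>> f) {
        return monad.flatMap(f);
    }

    public static void main(String[] args) {
        Optional<Integer> result = bind(unit(3), x -> unit(x + 1));
        System.out.println(result); // Optional[4]
    }
}
```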
So it would seem that
Optional is indeed a monad, right? Not so fast.
Monad Laws
Any class, to truly be a monad, is required to obey 3 laws:
- Left identity: applying the unit function to a value and then binding the resulting monad to function f is the same as calling f on the same value. Let f be a function returning a monad; then bind(unit(value), f) === f(value).
- Right identity: binding the unit function to a monad doesn't change the monad. Let m be a monadic value (an instance of M<T>); then bind(m, unit) === m.
- Associativity: if we have a chain of monadic function applications, it doesn't matter how they are nested: bind(bind(m, f), g) === bind(m, x -> bind(f(x), g)).
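For functions that never receive or produce `null`, all three laws do hold for `Optional`; here is a quick check with two arbitrary null-safe functions:

```java
import java.util.Optional;
import java.util.function.Function;

public class MonadLawsCheck {
    // Two null-safe functions: they never return null and never see null here.
    static final Function<Integer, Optional<Integer>> F = x -> Optional.of(x + 1);
    static final Function<Integer, Optional<Integer>> G = x -> Optional.of(x * 2);

    static boolean leftIdentity(int v) {
        return Optional.ofNullable(v).flatMap(F).equals(F.apply(v));
    }

    static boolean rightIdentity(Optional<Integer> m) {
        return m.flatMap(Optional::ofNullable).equals(m);
    }

    static boolean associativity(Optional<Integer> m) {
        return m.flatMap(F).flatMap(G)
                .equals(m.flatMap(x -> F.apply(x).flatMap(G)));
    }

    public static void main(String[] args) {
        System.out.println(leftIdentity(1));               // true
        System.out.println(rightIdentity(Optional.of(5))); // true
        System.out.println(associativity(Optional.of(5))); // true
    }
}
```

The trouble starts, as we'll see, when `null` sneaks into such a chain.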
Both left and right identity guarantee that applying a monad to a value will just wrap it: the value won't change, nor will the monad be altered. The last law guarantees that monadic composition is associative. Together, these laws make code more resilient, preventing counter-intuitive program behaviour that depends on how and when you create a monad and on how and in which order you compose the functions you use to map it.
Optional and Monad Laws
Now, as you can imagine, the question is: Does
Optional<T> have these properties?
Let’s find out by checking property 1, Left Identity:
Function<Integer, Optional<Integer>> f = x -> {
    if (x == null) {
        x = -1;
    } else if (x == 2) {
        x = null;
    } else {
        x = x + 1;
    }
    return Optional.ofNullable(x);
};

// true, Optional[2] === Optional[2]
Optional.of(1).flatMap(f).equals(f.apply(1));

// true, Optional.empty === Optional.empty
Optional.of(2).flatMap(f).equals(f.apply(2));
This works both for empty and non-empty results. What about feeding both sides with
null?
// false
Optional.ofNullable((Integer) null).flatMap(f).equals(f.apply(null));
This is somehow unexpected. Let’s see what happens:
// prints "Optional.empty" System.out.println(Optional.ofNullable((Integer) null).flatMap(f)); // prints "Optional[-1]" System.out.println(f.apply(null));
So, all in all, is
Optional a monad or not? Strictly speaking it’s not a well-behaving monad, since it doesn’t abide by the monad laws. However, since it does satisfy the definition of a monad, it could be considered one, although one with some buggy methods.
Optional::map and Associativity Law
If you think we got unlucky with flatMap, wait until you see what happens with map.
When we are using
Optional.map,
null is also mapped into
Optional.empty. Suppose we map again the result of the first mapping into another function. Then that second function won’t be called at all when the first one returns
null. If, instead, we map the initial
Optional into the composition of the two functions, the result would be quite different. Check out this example to clarify:
Function<Integer, Integer> f = x -> (x % 2 == 0) ? null : x;
Function<Integer, String> g = y -> y == null ? "no value" : y.toString();

Optional<Integer> opt = Optional.of(2); // A value that f maps to null - this breaks .map

opt.map(f).map(g);      // Optional.empty
opt.map(f.andThen(g));  // "no value"
By composing the functions
f and
g (using the handy
Function::andThen) we get a different result than we got when applying them one by one. An even more obvious example is when the first function returns
null and the second throws a
NullPointerException if the argument is
null. Then, the repeated
map works fine because the second method is never called but the composition throws the exception.
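The `NullPointerException` scenario just described can be sketched like this (both functions are made up for illustration):

```java
import java.util.Optional;
import java.util.function.Function;

public class MapCompositionNpe {
    static final Function<Integer, Integer> F = x -> null;   // always produces null
    static final Function<Integer, Integer> G = y -> y + 1;  // unboxing null throws NPE

    static boolean composedThrows(Optional<Integer> opt) {
        try {
            opt.map(F.andThen(G)); // G is called with null inside the composition
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Optional<Integer> opt = Optional.of(1);
        // Mapping step by step is "fine": F's null becomes Optional.empty and G never runs.
        System.out.println(opt.map(F).map(G)); // Optional.empty
        // Mapping the composition makes G run with null and blow up.
        System.out.println(composedThrows(opt)); // true
    }
}
```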
So,
Optional::map breaks the associativity law. This is even worse than
flatMap breaking the left identity law (we’ll get back to it in the next section).
orElse to the Rescue?
You might think it could get better if we use
orElse. Except it doesn’t.
It is easy to create a chain with more than two functions, where getting
null at different stages can lead to different results. Unfortunately we don’t have a way, at the end of the chain, to tell where
null was first handled. And so no way to provide the right result when
orElse is applied. More abstractly, and even worse, by using orElse we would be relying on the fact that every developer maintaining our code, and every client using our library, will stick to our choices and keep using orElse.
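One way to see the problem: once any step produces `null`, the chain is already `Optional.empty` by the time `orElse` runs, so the fallback can't depend on where the `null` came from (the functions here are invented for this sketch):

```java
import java.util.Optional;
import java.util.function.Function;

public class OrElseAmbiguity {
    static final Function<Integer, Integer> HALVE_EVEN = x -> (x % 2 == 0) ? x / 2 : null;
    static final Function<Integer, Integer> NEGATE = x -> -x;

    static int process(int input) {
        // By the time orElse runs, all it sees is an empty Optional;
        // it cannot tell which stage first produced null.
        return Optional.of(input)
                .map(HALVE_EVEN)
                .map(NEGATE)
                .orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(process(4)); // -2: both steps ran
        System.out.println(process(3)); // 0: HALVE_EVEN produced null, NEGATE never ran
    }
}
```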
What’s The Catch with
Optional?
The problem is that by design non-empty
Optionals can’t hold
null. You might legitimately object it is designed to get rid of
null, after all: And in fact
Optional.of(null) will throw a
NullPointerException. Of course
null values are still common, so
ofNullable was introduced to keep us from repeating the same if-null-then-
empty-else-
of check all over our code. However – and here is the essence of all evil –
Optional.ofNullable(null) is translated to
Optional.empty.
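In code, the two factory methods behave like this:

```java
import java.util.Optional;

public class OptionalFactories {
    static boolean ofThrowsOnNull() {
        try {
            Optional.of(null); // by design, a non-empty Optional can't hold null
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(ofThrowsOnNull()); // true
        // The essence of all evil: null is silently turned into empty.
        System.out.println(Optional.ofNullable(null));             // Optional.empty
        System.out.println(Optional.ofNullable(null).isPresent()); // false
    }
}
```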
The net result is that, as shown above, the following two situations can lead to different results:
- Applying a function before wrapping a value into
Optional;
- Wrapping the value into an
Optionalfirst and then mapping it into the same function.
This is as bad as it sounds: it means that the order in which we apply functions matters. When we use
map, as we saw, it gets even worse, because we lose associativity invariance as well and even the way functions are composed matters.
In turn, these issues make adding bugs during refactoring not just possible, but even frighteningly easy.
A Real World Example
Now, this might look like an example built ad hoc to cause trouble. It's not. Just replace
f with
Map::get (which returns
null when the specified key is not contained in the map or if it is mapped to the value
null) and
g with any function that is supposed to handle and transform
null.
Here is an example closer to real world applications. First, let’s define a few utility classes:
Account, modeling a bank account with an ID and a balance;
Balance, modeling an (amount, currency) pair;
Currency, an enum gathering a few constants for the most common currencies.
You can find the full code for this example on GitHub. To be clear, this is not, by any means, intended as a proper design for these classes: we are greatly simplifying, just to make the example cleaner and easier to present.
Now let’s say that a bank is represented as a collection of accounts, stored in a
Map, linking account IDs to instances. Let’s also define a few utility functions to retrieve an account’s balance in USD starting from the account’s ID.
Map<Long, Account> bank = new HashMap<>();

Function<Long, Account> findAccount = id -> bank.get(id);

Function<Account, Balance> extractBalance = account ->
    account != null ? account.getBalance() : new Balance(0., Currency.DOLLAR);

Function<Balance, Double> toDollars = balance -> {
    if (balance == null) {
        return 0.;
    }
    switch (balance.getCurrency()) {
        case DOLLAR: return balance.getAmount();
        case POUND:  return balance.getAmount() * 1.3;
        case EURO:   return balance.getAmount() * 1.1;
        default:     return 0.;
    }
};
We are now ready to see where individual mapping of our three functions works differently from their composition. Let’s consider a few different cases, where we start from an account’s id wrapped in an
Optional, and we map it to the dollar amount for that account.
Optional<Long> accountId3 = Optional.of(3L);
Optional<Long> accountId4 = Optional.of(4L);
Optional<Long> accountId5 = Optional.of(5L);

bank.put(4L, null);
bank.put(5L, new Account(5L, null));
Both for an empty
Optional and for a non-empty one wrapping the ID of a non-
null account with proper balance, it doesn’t matter whether we map our functions one by one or whether we use their composition, the output is the same. We omit these cases here for brevity, but feel free to check out the repo.
Let’s instead try out the case where an account’s ID is not present in the map:
accountId3.map(findAccount)
          .map(extractBalance)
          .map(toDollars)
          .ifPresent(System.out::println); // Optional.empty

accountId3.map(findAccount
              .andThen(extractBalance)
              .andThen(toDollars))
          .ifPresent(System.out::println); // 0.0
In this case,
findAccount returns
null, which is mapped to
Optional.empty. This means that when we map our functions individually,
extractBalance will never be even called, so the final result will be
Optional.empty.
If, on the other hand, we compose
findAccount and
extractBalance, the latter is called with
null as its argument. Since the function “knows” how to handle
null values, it produces a non-null output that will be correctly processed by
toDollars down the chain.
So here we have two different results depending only on the way in which the same functions, taken in the same order, are applied to our input. Wow!
The same thing happens if we store
null in the map for an ID or if the account’s balance is
null, since
toDollars is similarly crafted to handle
null. Check out the repo for further details.
Practical Implications
Besides theoretical disputes on the nature of
Optional, there are plenty of practical consequences of the fact that
Optional::map and
Optional::flatMap break the monad laws. This in turn prevents us from freely applying function composition, as we were supposed to have the same result if we apply two functions one after the other, or their composition directly.
It means that we can no longer refactor our code freely and be sure the result won’t change: Dire consequences might pop up not just in your code base, but – even worse – in your clients’ code. Before restructuring your code, you would need to know if the functions used anywhere in everybody’s code handle
null or not, otherwise you might introduce bugs.
Possible Fixes
We have two main alternatives to try to make this right:
- Don’t use
Optional.
- Live with it.
Let’s see each one of them in details.
Don’t Use
Optional
Well, this looks like the ostrich algorithm mentioned by Tanenbaum to solve deadlock. Or ignoring JavaScript because of its flaws.
Some Java developers explicitly argue against Optional, in favour of sticking with null. You can certainly do so, if you don't care about moving to a cleaner, less error-prone style of programming. TL;DR: Optional allows you to handle workflows where some input might or might not be present in a monadic way, which means in a more modular and cleaner fashion.
Live with It
Optional breaks the monad laws under a couple of circumstances and there's nothing to be done about it. But we can learn a couple of things from the path we took to reach this conclusion:
- When starting a chain with a possibly
nullvalue and a function that returns an
Optional, be aware that applying the function to the value can lead to a different result from first creating the
Optional and then flat-mapping the function. This, in turn, can lead to the function not being called, which is a problem if you depend on its side effects.
- When composing
mapchains, be aware that, while the individual functions were never called with
null, the merged function might produce
null as an intermediate result and pass it into non-null-safe parts of the function, leading to exceptions.
- When decomposing a single
mapinto several, be aware that while the original function was executed as a whole, parts of it might now end up in functions that are never called if a previous part produced
null. This is a problem if you depend on those parts' side effects.
As a rule of thumb, prefer
flatMap over
map, as the former abides by the associativity law, while
map doesn’t, and breaking associativity is far more error prone than breaking left identity. All in all it is best not to view
Optional as a monad that promises easy composability but as a means to avoid having
null pop up as the (flat-)mapped functions’ arguments.
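One defensive pattern along these lines (a sketch of my own, not code from the article's repository; `lift` is a helper name introduced here) is to lift every plain, possibly-null-returning function into an `Optional`-returning one and chain with `flatMap` only:

```java
import java.util.Optional;
import java.util.function.Function;

public class NullSafeChain {
    // Lift an ordinary function that may return null into an Optional-returning one.
    static <T, U> Function<T, Optional<U>> lift(Function<T, U> f) {
        return t -> Optional.ofNullable(f.apply(t));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> mayReturnNull = x -> (x % 2 == 0) ? null : x * 10;

        // null becomes empty at exactly one well-defined point, and since flatMap
        // respects associativity, re-nesting the chain during refactoring is safe.
        Optional<Integer> odd  = Optional.of(3).flatMap(lift(mayReturnNull));
        Optional<Integer> even = Optional.of(4).flatMap(lift(mayReturnNull));

        System.out.println(odd);  // Optional[30]
        System.out.println(even); // Optional.empty
    }
}
```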
I have worked out a few possible approaches to make code more robust; feel free to check them out on GitHub.
Conclusions and Further Reading
We’ve seen that certain refactorings across
flatMap and
map chains can cause the code to change its behavior. The root cause is that
Optional was designed to avoid stumbling into NPEs and in order to achieve that it transforms
null to
empty. This, in turn, means that such chains are not fully executed but end once an intermediary operation produces
empty.
It is important to keep this in mind when relying on the side effects of functions that may end up not being executed.
If you’d like to read more about some of the topics in this article:.
WebView on OS X
I have been trying to use QtWebView 1.0 in qml on os x 10.11. I have qt 5.5.1.
import QtQuick 2.5
import QtQuick.Window 2.2
import QtWebView 1.0

Window {
    visible: true
    height: 1024
    width: 768

    WebView {
        id: web
        anchors.fill: parent
        url: ""
    }
}
My screen shows up as a blank white page. I am wondering if this bug, QTBUG-46792, is my actual problem. I am really interested in the WebView, as my app runs on most desktop platforms, Android and iOS (WebView would allow me to not have my app rejected from the Apple App Store). My question is: have other people experienced this problem, and if so, what have you done as a workaround? I think I have 2 options: wait for it to get fixed (and I can't tell when that will happen), or use WebView for all platforms but OS X and try another web framework Qt provides. Any ideas would be most helpful. Thanks!
Hi,
as far as I know, in Qt 5.5.x WebView is implemented as a wrapper around QtWebEngine, so you need to initialize WebEngine in main.cpp:
QtWebEngine::initialize()
I suggest to have a look to the Minibrowser Example code
Thanks for the response. I thought WebView didn't use Chromium underneath, but instead used native views where it could and WebKit everywhere else? I know I can't use QtWebEngine, as I do need to submit to the Apple App Store. I did try the initialize call, but it didn't change anything.
There was a plan to implement WebView in native code for OS X (as is now for Android and iOS) but I don't know about the current implementation.
I suggest to contact the qt developers (mailing list or IRC)
- SGaist Lifetime Qt Champion
Hi,
It's the QtWebView module that offers a wrapper around OS X, iOS and Android web views.
In my last post, I started describing an RSS feed generator for Team Foundation Source Control. Note that this is an experimental endeavor and not supported in any way by Microsoft as it is not part of the Team Foundation product.
I described, in brief, the features that it supports and touched on security. I won't repeat that information here.
The RSS feed provides information about recent checkins -- there is one RSS item for each checkin that has occurred. Each item contains the following
The contents of the item include meta information about the checkin but not the checkin details (note the security information in my last post) and a link to the changeset webview. I'll talk about what those are in a minute.
By default, i.e., if a file path is not supplied, information about all checkins is returned. When a file path is part of the URL, e.g., url?filename=filepath, information about only the filepath is returned. If it refers to a directory, information about checkins at or below the directory is returned; otherwise, information about checkins for that file only is returned.
Webviews are a core Team Foundation feature that provide web-based access to Team Foundation artifacts. Each Team Foundation component has a set of artifacts. E.g., Workitem Tracking has an artifact for a work item (no surprise) and Source Control has artifacts for changesets and files.
The changeset webview shows the changeset, e.g., who created the changeset, when it was created, what changes are included, etc. In Beta2, the changeset webview appearance is crude but contains all of the information from a changeset. We've polished the appearance and included extra information in the bits after Beta2.
The RSS feed provides information about checkins; for each checkin, a changeset webview link is provided. I've mentioned what a changeset webview contains so its now time to discuss the RSS feed. Once installed, you can subscribe to the RSS feed by specifying the .aspx page that generates the feed, e.g.,
The RSS feed generator uses the following .NET and Team Foundation assemblies:
Keep in mind that assembly names and namespaces will be changing for RTM. For Beta2, project names (e.g., Hatteras, Currituck) and temporary names (Bis) are still used. Assembly names, namespaces, classes, methods and other places where internal project names are in use will be different.
In my next post, I'll start posting snippets of the feed.
In the meantime, let me know if there are specific things you'd like to see.
This is the mail archive of the java-patches@gcc.gnu.org mailing list for the Java project.
+jlong
+_Jv_platform_nanotime ()
+{
+#ifdef HAVE_CLOCK_GETTIME
+  struct timespec now;
+  if (clock_gettime (CLOCK_REALTIME, &now) == 0)
+    {
+      jlong result = (jlong) now.tv_sec;
+      result = result * 1000 * 1000 + now.tv_nsec;
+      return result;
+    }
+  // clock_gettime failed, but we can fall through.
+#endif // HAVE_CLOCK_GETTIME
+  return _Jv_platform_gettimeofday () * 1000LL;
+}
Why do you use CLOCK_REALTIME? It represents the time since the epoch, but the nanoTime() Javadoc states that the nanoTime() value should be used for measuring elapsed time and is unrelated to wall-clock time.
The problem is when you use CLOCK_REALTIME to measure elapsed time and the user changes the time... I had to write the same code for another project, and I used the following snippet for Linux/BSD/Solaris:
int id;
#ifdef CLOCK_MONOTONIC
id = CLOCK_MONOTONIC;
#else
#ifdef CLOCK_HIGHRES
id = CLOCK_HIGHRES;
#else
#error bad platform
#endif
#endif
if (clock_gettime(id, &tv) == 0)
IDEA INTRODUCTION

My final project for Reading and Writing Electronic Text basically followed my mid-term project idea, which focused on the relationship between individual and community: personal digital heritage and the internet phenomenon (which, as a whole, is composed of individuals' digital creative activities).
I am really obsessed by the question of what is happening while I am typing this post. I think every one of you should ask yourself this question, given that you are "connected (to the internet)" as well as "disconnected (from reality)" simultaneously.
In the digital era, especially after the advent of the Internet, individualization has been highlighted ever-increasingly. That is one of the reasons why the amount of digital information created on the internet has already far surpassed all the information recorded in human history before the digital era. However, every coin has two sides. A huge amount of "trash" information has been created, and is being created all the time, which was hardly the case earlier in human history. In my opinion, the "trash" is not trash at all; it is just, somehow, too personalized.
HOW IT WORKS
In my final project, I was trying to explore a way to remind people the fact that the world keeps changing and making meanings unceasingly while we are changing and publishing our own meanings onto the Internet.
Basically, what I did in my final project was play around with the Twitter API and the New York Times API. The first part of my project is to input the twitter account to get tweets from, and input how many tweets you want. The program will get tweets from the Twitter API, then extract the noun phrases from the tweets using TextBlob's noun_phrases. The second part uses each extracted noun phrase as a keyword to search for news through the New York Times API, and the output is the headlines of the articles that contain that noun (which might be a little bit confusing sometimes, because usually there is no key noun in the headline itself, even though the tweet does have the noun phrase; I will talk about this later).
PROBLEMS & DIFFICULTIES
The first one is facing the limitations of the New York Times API. In the first place, I was kind of hesitating between using "The Article Search API" and "The Most Popular API". Both of them have the headline extraction function. In the Article Search API, you could get articles from any time period (from Sept. 18, 1851 to today); many of them might not be that popular. In the Most Popular API, you could get the most-hit news titles, however, only from the past 30 days. After weighing a lot, I decided to use the Article Search API, because I think it will be fun if you search some old news which might have talked about what you are talking about right now...
The second one I ran into was how to remove a portion of the string from each tweet. Each tweet in one of the tweet accounts I am following has the same format: [xxx joke] RT .xxx(author): "tweet". The "tweet" part is the only part I need. So what I needed to do was delete the strings before ":". However, what was waiting for me ahead was a new problem: unicode. The fact is each tweet_text in the Twitter API is unicode instead of a string, which means I have to convert it into a string first, which I thought was fine, because our instructor had introduced us to a way to do that. But it did not work. So I just googled for more methods to deal with unicode-to-string conversion. You can see the different methods I used at different stages for the unicode-string converting problem in my code later.
In addition, after testing PrfJocular, I found out that it could not work with other tweet accounts, because tweets in other accounts usually do not have that format, so I just changed the logic in def newtweet. Until now, the program has not worked for every tweet account, actually, which is the part I am still working on. The reason could be anything, like tweets starting with strange symbols like # * ^, or capitalization problems...
Others include indentation problem (impressive one)... for loops in for loops (I admit that I am not reasonable enough and do not have strong logical thinking)... class stuff (now I am clearer about this).
Last but not least, I would like to make a kind of chatting atmosphere in this project, however, it turned out that it is almost impossible. In my opinion, there are couple reasons. First of all, conversation could not exist without making sense, which is not for randomly generated poetry (more for making fun I think); secondly, the limitation of API limits how accurate content I can get.
FOR PRESENTATION & PERFORMANCE
Because there are multiple headlines as output for one keyword (noun phrase), in order to keep the gap between tweet and headline in a reasonable and understandable range (the ideal condition is that the gap is not too huge to bridge with prior experience, while still leaving space for the audience's imagination), after the program generated them I manually picked one of the headlines to pair with the tweet. Then, as you see, I did some graphic design and output it as a PDF.
CONTINUE FROM THE END OF "HOW IT WORKS"
After the performance, I asked some of my friends about the two different kinds of content from Twitter and the NY Times. Most of them thought Twitter is more fun, which makes sense because the NY Times, as a public medium, is more serious. As for the "gap", some of them could bridge it while others couldn't.
import sys
import twython
import urllib
import urllib2
import json
import re
import unicodedata
import ast
from textblob import TextBlob
import pprint

print 'Arguments:', sys.argv[1]

def year(d):
    return d[0:4]

def month(d):
    return d[5:7]

def day(d):
    return d[8:10]

allNouns = list()
'''
nouns = list()
adjectives = list()
newNouns = list()
number = 0
'''
headlines = set()
dates = list()
results = dict()
nounssss = list()

name = sys.argv[1]
num = sys.argv[2]
tweets_to_be_printed = []

class tweet(object):
    def __init__(self, result, nounss, insideResponse):
        self.results = result
        # self.tweet = tweet
        self.allMyNouns = nounss
        self.insideResponse = insideResponse
        # print "hello"

    real_tweet = ""

    def getTweet(self, listOfNouns):
        for wd in self.allMyNouns:
            pass  # ... (the rest of the listing is truncated in the original)
Java Swing Card Layout
Java Swing Card Layout
Java technology uses Layout Managers to define...
SpringLayout
Here we are going to discuss CardLayout. It is a space saving... card can be
seen.
Here is the code:
import java.awt.*;
import
java swing - Swing AWT
java swing how to save data in sql 2005 while insert in textfield Hi Friend,
Try the following code:
import java.sql.*;
import...(text2);
panel.add(insertButton);
this.getContentPane().add(panel
Java Swing : JLabel Example
In this tutorial, you will learn how to create a label in java swing
java swing
java swing what is code for dislay image on java swinginternalframe form MYSQL DB pls send
Here is a code that displays an image... Image image) {
add(createFrame(image));
}
private static int
Adding time span example
Adding time span example
In this example java program we have added the two time
spans....
Here is the example code of AddTimeSpan.java as
follows:
AddTimeSpan.java
java programming - Java Beginners
links: programming i'm asking for the java code for adding , viewing
Java Swing
Java Swing Write an applet program to transfer the content of the text field into the listbox component on clicking a button code project
...++");
model.addElement("Java");
model.addElement("Perl");
model.addElement
saving form bean with Array of objects (collection) - Struts
saving form bean with Array of objects (collection) Hi all... an array of objects(Order.java)
Order.java (java bean)- value object implements....
Code for OrderListForm.java (form bean
java - Swing AWT
information,
Thanks...));
text2.setPreferredSize(new Dimension(100, 20));
getContentPane().add(label1);
getContentPane().add(text1);
getContentPane().add(label2
adding mouse listeners to drop target
; Here is a drag and drop application of Java Swing.
import java.awt.*;
import...adding mouse listeners to drop target import java.awt.*;
import... with adding mouse listeners to "table" which is drop target, to accept drop
java code - Java Beginners
java code how can we convert an RGB image into its grayscale representation? Hi Friend,
Please visit the following link:
The code, you
Need Help with Java-SWING progrmming - Swing AWT
://
Thanks...Need Help with Java-SWING progrmming Hi sir,
I am a beginner in java-swing programming. I want to know how we can use the print option
intranet in java swing
intranet in java swing i want source code of intranet establishment in java swing
java swing - Java Beginners
java swing utlility of super.paintComponent(Graphics g) in java?
what does it do actually i.e. which class it is extending or method overriding?
Please clarify this with example
Swing Applet Example in java
Java - Swing Applet Example in java
... swing in an applet. In this example,
you will see that how resources of swing... Applet Example. when the applet is loaded but again when you click
on the Add
Example Code - Java Beginners
Example Code I want simple Scanner Class Example in Java and WrapperClass Example.
What is the Purpose of Wrapper Class and Scanner Class .
when i compile the Scanner Class Example the error occur :
Can not Resolve symbol...:
JSlider:
This is the class which creates the slider for
the swing application
Java Swing : JButton Example
Java Swing : JButton Example
In this section we will discuss how to create button by using JButton in
swing framework.
JButton :
JButton Class extends... dialog box
that provoke users for a value or some information. add()
method
java swing - Java Beginners
java swing hello sir,
I want to create a file in java swing that holds all the html files of the system.
Thanks,
Akshat... the following code:
import java.awt.*;
import java.io.*;
import javax.swing.
Progress Bar in Java Swing
Progress Bar in Java Swing
... in java swing. This section shows you how the progress bar starts and stops
with the timer. Through the given example you can understand how the progress
bar
Add Edit And Delete Employee Information
Add Edit and Delete Employee Information Using Swing
..., edit and delete the Employee's
information from the database using java swing... information into the database. The second tab will edit the
Employee's information
Information - Java Beginners
Information on Java Technologies Need information on Programming technologies and specifically on Java
Java swing - Java Beginners
Java swing how to set the background picture for a panel in java swing .i m using Netbeans IDE. Hi Friend,
Try the following code...);
frame.getContentPane().add(new BackgroundImage().getPanel());
frame.setSize
code in java
code in java In NASA, two researchers, Mathew and John, started... invented another number which is built by adding the squares of its digits. Doing... with 1, then it is called John number.
Example of John numbers is:
13 = 1^2 + 3^2
Swing - Java Beginners
:
Hope that it will be helpful for you.
Thanks... links:
swing
swing Write a java swing program to delete a selected record from a table
Java Swing Drag drop - Swing AWT
Java Swing Drag drop Hi i have following code that can help to drag abd drop , The code is also from Roseindia,net
I want to keep orignal... country = new Choice();
// adding possible choices
country.add
Java swing
Java swing what are the root classes of all classes in swing
java swing
java swing view the book details using swing
Java swing
Java swing Does Swing contains any heavy weight component
java swing - Java Beginners
(){
JFrame f = new JFrame("Frame in Java Swing");
f.getContentPane().setLayout(null...java swing How to set the rang validation on textfield, compare... .........
thanx a lot Hi Friend,
Try the following code:
import
java swing
java swing add two integer variables and display the sum of them using java swing
Java swing
Java swing Write a java swing program to calculate the age from given date of birth
java swing
java swing how to connect database with in grid view in java swing
Hi Friend,
Please visit the following link:
Grid view in java swing
Thanks
Dyanmically Adding Rows
Dyanmically Adding Rows Hi sir Am doing project in that i need to add date picker in dynamically adding rows but the dates are storing in first test...;/HTML>
Here is a code that allow the user to add and remove row from
I did solve it using my own way, as follows:
def censor(text, word):
    newLst = []
    for i in text:
        newLst.append(i)
    print newLst
    for i in range(len(text)):
        if text[i:i+len(word)] == word:
            for n in range(len(word)):
                newLst[i+n] = "*"
    return "".join(newLst)
but when I checked the hint to see what codecademy’s method was, I couldn’t figure out all the steps…
(was the .split() function taught in the course?)
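For reference, the hint's approach is usually a split-based one. The exact code from the hint isn't shown here, so the version below is a reconstruction of that idea (written in Python 3 syntax):

```python
def censor(text, word):
    # Split on spaces, replace each exact word match with
    # asterisks of the same length, then rejoin with spaces.
    words = text.split(" ")
    censored = ["*" * len(w) if w == word else w for w in words]
    return " ".join(censored)

print(censor("hey hey hey", "hey"))  # *** *** ***
```

Note that unlike the character-by-character version, this only censors whole words, not occurrences inside longer words.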
Can we distinguish between 2 functions that give correct results to the same problem in different ways, to figure out which is better? (maybe something like time taken, etc.?)
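On comparing two correct solutions: the standard library's timeit module is the usual way to measure which one runs faster, while correctness can be checked by comparing their outputs. A rough sketch (the sample text and repetition counts are arbitrary choices for illustration):

```python
import timeit

def censor_chars(text, word):
    # Character-by-character version, like the one above.
    chars = list(text)
    for i in range(len(text)):
        if text[i:i+len(word)] == word:
            for n in range(len(word)):
                chars[i+n] = "*"
    return "".join(chars)

def censor_split(text, word):
    # Split-based version.
    return " ".join("*" * len(w) if w == word else w
                    for w in text.split(" "))

sample = "spam and eggs and spam " * 100

# Same input, same output: the two solutions agree here.
assert censor_chars(sample, "spam") == censor_split(sample, "spam")

# timeit runs each callable many times and reports total seconds.
t1 = timeit.timeit(lambda: censor_chars(sample, "spam"), number=200)
t2 = timeit.timeit(lambda: censor_split(sample, "spam"), number=200)
print(t1, t2)
```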
P.S: How does codecademy make those cool default display pictures that are different for every user?
Did you know that ASP.NET will HTML-encode your server control attributes as of .NET 4.0? This is done for security reasons, to prevent cross-site scripting attacks. Going forward, you will need to work around this if the need arises, as a security enhancement is more likely not to be backwards compatible.
I came across this "problem" when I was trying to build a degradable web app. When javascript is turned off, everything would work as it's supposed to (through postbacks), but if JS is enabled, then JQuery will kick in. For this to work, I needed to provide an OnClientClick function to the asp.net buttons. It works fine until you try to insert parameters (arguments) in the javascript function. Since javascript parameters are enclosed with apostrophes, you will find the rendered html quite different from what you entered in the code behind. Take for example the following:
btnSubmit.OnClientClick = String.Format("PostContent('{0}', '{1}', '{2}');", "6", _question.Id, "0");
This will be rendered as:
<input type="submit" name="btnSubmit" value="Submit" onclick="PostContent('6', '10112', '0');" id="btnSubmit" />
The attribute value has been html encoded. If you view the html source code, you will see that each apostrophe has been converted to &#39;. Now that's not really a problem as your javascript functions will still work as normal. However it just looks a bit weird! If you want to change that behaviour so that asp.net does not html encode server control properties, then you will need to create a class as follows:
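This kind of attribute encoding isn't unique to ASP.NET. As an aside, Python's standard library does the same sort of escaping (it happens to use the hex form &#x27; rather than &#39;), which makes the transformation easy to see:

```python
from html import escape

# escape() with quote=True (the default) also encodes the quote
# characters that are unsafe inside HTML attribute values.
encoded = escape("PostContent('6', '10112', '0');")
print(encoded)  # apostrophes come out as &#x27;
```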
public class HtmlAttributeNoEncoding : System.Web.Util.HttpEncoder
{
    protected override void HtmlAttributeEncode(string value, System.IO.TextWriter output)
    {
        output.Write(value);
    }
}
And in your web.config file:
<httpRuntime encoderType="HtmlAttributeNoEncoding"/>
It’s probably worth doing that if you need backwards compatibility with the previous .net frameworks (prior to .net 4.0).
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
Good Monday morning all. I have received a lot of good comments on this series so far that I thought I would speak to a bit.
First and foremost, a number of people have asked questions which could be summed up as "isn't this just an academic exercise? I have a job to do here!"
No, I do not believe that it is just an academic exercise, not at all. When I compare the practical, solving-real-problems code I write using immutable (or "mostly" immutable) objects to that which uses highly mutable objects, I find that the solutions which use immutable objects are easier to reason about. Knowing that an object isn't going to be "edited" by some other bit of code later on means that anything I deduce about it now is going to continue to be true forever.
I also find that programming in a more "functional" style using immutable objects somehow encourages me to write very small, clear methods that each do one thing very well. This is certainly possible when writing code with mutable objects, but I find that something about writing code for immutable objects encourages that style more. I haven't managed to quite figure out why that is yet. I like that style of programming a lot. Take a look at the short little methods below, for, say, tree rotation. Each of them is about four lines long and obviously correct if you read it carefully. That gives you confidence that those rotation helpers can then be composed to make a correct balancer, which gives you confidence that the tree is balanced.
Another reason why immutable objects tend to be easier to reason about is that this style encourages non-nullable programming. Take a look at the code from the previous entries in this series. The keyword "null" does not appear in any of them! And why should it? Why should we represent an empty stack the same way we represent an empty queue, the same way we represent an empty tree? By ensuring that everything which points to something else is never null, all that boring, slow, hard to read, hard to reason about code which checks to see if things are null just goes away. You need never worry whether that reference is null again if you design your objects so that there's never a null reference in the first place.
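One way to see the non-nullable point concretely is the empty-object pattern: the empty stack is a real object that can answer questions, so no reference is ever null (or None). A minimal sketch of the idea, in Python rather than the series' C#:

```python
class EmptyStack:
    """The empty stack is an object, not a null reference."""
    is_empty = True
    def push(self, x):
        return Stack(x, self)

class Stack:
    """A non-empty immutable stack node."""
    is_empty = False
    def __init__(self, top, rest):
        self.top, self.rest = top, rest
    def push(self, x):
        return Stack(x, self)

EMPTY = EmptyStack()

s = EMPTY.push(1).push(2)
# No None checks anywhere: walking the stack just asks is_empty.
items = []
while not s.is_empty:
    items.append(s.top)
    s = s.rest
print(items)  # [2, 1]
```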
And as we'll see in later episodes of FAIC, immutable objects also make it easier to implement performance improvements such as memoization of complex calculations. If an object never changes then you can always cache it for reuse later. An object which someone can edit is less useful that way. Immutable objects stored in a stack make undo-redo operations trivial. And so on.
Learning how to use this style in your own code is therefore potentially quite handy. I believe it is also the case that more and more objects produced by other people will be immutable, so knowing how to deal with them will be increasingly important. For example, the objects which represent expression trees in C# 3.0 are all immutable. If you want to take an expression tree and transform it, you're simply not going to be able to transform it "in place". You're going to have to build a new expression tree out of the old one; knowing how to do so efficiently will help.
The second question I've gotten a lot is "are immutable objects really more thread safe?" Well, it depends on what you mean by "thread safe". Immutable objects are not necessarily "more" thread safe, but they're a different kind of thread safe.
Let's explore this a bit more. What exactly do we mean by "thread safe"? We can talk about lock order and deadlocks and all of those aspects of thread safety, but let's put those aside for a moment and just consider basic race conditions. Suppose you have two threads each enqueuing jobs onto a queue object, and a third thread dequeuing jobs. Normally we think of the queue as being "thread safe" if no matter what the timing of those three threads happens to be, there is no way that a job is ever "lost". Two enqueues which happen "at the same time" result in a queue with those two things on it, and at some point, both will be dequeued.
If the queue is immutable then implementing this scenario just moves the problem around; now it becomes about serializing access to the mutable variable which is being shared by all three threads. Immutable objects make this kind of scenario harder to reason about and implement; better to just write a threadsafe mutable queue in the first place if that's the problem you're trying to solve.
But let's think about this a different way. Having a threadsafe mutable queue means that you never have any accurate information whatsoever about the state of the queue! You ask the queue "are you empty?" and it says "no", and you dequeue it and hey, you get an "empty queue" exception. Why? Because some other thread dequeued it in the time between when you asked and when you issued your dequeue request. You end up living in a world where enqueuing a queue can possibly make it shorter and dequeuing can make it longer depending on what is going on with other threads. It becomes impossible to reason locally about operations on an object; your consciousness has to expand to encompass all possible operations on the object. It's this necessity for global understanding that makes multithreaded programming so error-prone and difficult.
Immutable queues, on the other hand, give you complete thread safety in this regard. You enqueue an element onto a particular immutable queue and the result you get back is always the same no matter what is happening to that queue on any other thread. You ask a queue if it is empty, if it says no, then you have a 100% iron-clad guarantee that you can safely dequeue it. You can share data around threads as much as you like without worrying that some other thread is going to make an edit which messes up your logic. You can take a chunk of data and run ten different analyzers over it on ten different threads, each one making "changes" to the data during its analysis, and never interfering with each other.
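Earlier parts of Lippert's series build these structures in C#. As an illustration of why an immutable queue gives that guarantee, here is a rough Python sketch of the classic two-stack immutable queue (my own sketch, not his code):

```python
class Stack:
    """A minimal immutable stack: push/pop return new stacks."""
    def __init__(self, items=()):
        self._items = tuple(items)
    @property
    def is_empty(self):
        return not self._items
    def push(self, x):
        return Stack(self._items + (x,))
    def peek(self):
        return self._items[-1]
    def pop(self):
        return Stack(self._items[:-1])

class Queue:
    """Immutable queue built from two stacks: enqueue and dequeue
    never mutate an existing queue, so 'are you empty?' stays true."""
    def __init__(self, front=Stack(), back=Stack()):
        self._front, self._back = front, back
    @property
    def is_empty(self):
        return self._front.is_empty and self._back.is_empty
    def enqueue(self, x):
        return Queue(self._front, self._back.push(x))
    def dequeue(self):
        front, back = self._front, self._back
        if front.is_empty:
            # Reverse the back stack into the front stack.
            while not back.is_empty:
                front, back = front.push(back.peek()), back.pop()
        return front.peek(), Queue(front.pop(), back)

q = Queue().enqueue(1).enqueue(2).enqueue(3)
x, q2 = q.dequeue()
print(x)           # 1
print(q.is_empty)  # False: the original queue is unchanged
```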
As promised last time, my supremely unexciting implementation of a self-balancing height-balanced immutable AVL tree in C#. Note that I have made the choice to cache the height in every node rather than recalculating it. That trades memory usage for a bit of extra speed, which is probably worth it in this case. (Though of course to truly answer the question we'd want to set goals, try it both ways and measure the results.)
This is all pretty straightforward. Next time I want to start to wrap up this series by looking at a tricky data structure that solves the problem of building an immutable double-ended queue.
public sealed class AVLTree<K, V> : IBinarySearchTree<K, V> where K : IComparable<K>
{
    private sealed class EmptyAVLTree : IBinarySearchTree<K, V>
    {
        // IBinaryTree
        public bool IsEmpty { get { return true; } }
        public V Value { get { throw new Exception("empty tree"); } }
        IBinaryTree<V> IBinaryTree<V>.Left { get { throw new Exception("empty tree"); } }
        IBinaryTree<V> IBinaryTree<V>.Right { get { throw new Exception("empty tree"); } }

        // IBinarySearchTree
        public IBinarySearchTree<K, V> Left { get { throw new Exception("empty tree"); } }
        public IBinarySearchTree<K, V> Right { get { throw new Exception("empty tree"); } }
        public IBinarySearchTree<K, V> Search(K key) { return this; }
        public K Key { get { throw new Exception("empty tree"); } }
        public IBinarySearchTree<K, V> Add(K key, V value) { return new AVLTree<K, V>(key, value, this, this); }
        public IBinarySearchTree<K, V> Remove(K key) { throw new Exception("Cannot remove item that is not in tree."); }

        // IMap
        public bool Contains(K key) { return false; }
        public V Lookup(K key) { throw new Exception("not found"); }
        IMap<K, V> IMap<K, V>.Add(K key, V value) { return this.Add(key, value); }
        IMap<K, V> IMap<K, V>.Remove(K key) { return this.Remove(key); }
        public IEnumerable<K> Keys { get { yield break; } }
        public IEnumerable<V> Values { get { yield break; } }
        public IEnumerable<KeyValuePair<K, V>> Pairs { get { yield break; } }
    }

    private static readonly EmptyAVLTree empty = new EmptyAVLTree();
    public static IBinarySearchTree<K, V> Empty { get { return empty; } }

    private readonly K key;
    private readonly V value;
    private readonly IBinarySearchTree<K, V> left;
    private readonly IBinarySearchTree<K, V> right;
    private readonly int height;

    private AVLTree(K key, V value, IBinarySearchTree<K, V> left, IBinarySearchTree<K, V> right)
    {
        this.key = key;
        this.value = value;
        this.left = left;
        this.right = right;
        this.height = 1 + Math.Max(Height(left), Height(right));
    }

    // IBinaryTree
    public bool IsEmpty { get { return false; } }
    public V Value { get { return value; } }
    IBinaryTree<V> IBinaryTree<V>.Left { get { return left; } }
    IBinaryTree<V> IBinaryTree<V>.Right { get { return right; } }

    // IBinarySearchTree
    public IBinarySearchTree<K, V> Left { get { return left; } }
    public IBinarySearchTree<K, V> Right { get { return right; } }
    public IBinarySearchTree<K, V> Search(K key)
    {
        int compare = key.CompareTo(Key);
        if (compare == 0)
            return this;
        else if (compare > 0)
            return Right.Search(key);
        else
            return Left.Search(key);
    }
    public K Key { get { return key; } }
    public IBinarySearchTree<K, V> Add(K key, V value)
    {
        AVLTree<K, V> result;
        if (key.CompareTo(Key) > 0)
            result = new AVLTree<K, V>(Key, Value, Left, Right.Add(key, value));
        else
            result = new AVLTree<K, V>(Key, Value, Left.Add(key, value), Right);
        return MakeBalanced(result);
    }
    public IBinarySearchTree<K, V> Remove(K key)
    {
        IBinarySearchTree<K, V> result;
        int compare = key.CompareTo(Key);
        if (compare == 0)
        {
            // We have a match. If this is a leaf, just remove it
            // by returning Empty. If we have only one child,
            // replace the node with the child.
            if (Right.IsEmpty && Left.IsEmpty)
                result = Empty;
            else if (Right.IsEmpty && !Left.IsEmpty)
                result = Left;
            else if (!Right.IsEmpty && Left.IsEmpty)
                result = Right;
            else
            {
                // We have two children. Remove the next-highest node and replace
                // this node with it.
                IBinarySearchTree<K, V> successor = Right;
                while (!successor.Left.IsEmpty)
                    successor = successor.Left;
                result = new AVLTree<K, V>(successor.Key, successor.Value, Left, Right.Remove(successor.Key));
            }
        }
        else if (compare < 0)
            result = new AVLTree<K, V>(Key, Value, Left.Remove(key), Right);
        else
            result = new AVLTree<K, V>(Key, Value, Left, Right.Remove(key));
        return MakeBalanced(result);
    }

    // IMap
    public bool Contains(K key) { return !Search(key).IsEmpty; }
    IMap<K, V> IMap<K, V>.Add(K key, V value) { return this.Add(key, value); }
    IMap<K, V> IMap<K, V>.Remove(K key) { return this.Remove(key); }
    public V Lookup(K key)
    {
        IBinarySearchTree<K, V> tree = Search(key);
        if (tree.IsEmpty)
            throw new Exception("not found");
        return tree.Value;
    }
    public IEnumerable<K> Keys { get { return from t in Enumerate() select t.Key; } }
    public IEnumerable<V> Values { get { return from t in Enumerate() select t.Value; } }
    public IEnumerable<KeyValuePair<K, V>> Pairs
    {
        get { return from t in Enumerate() select new KeyValuePair<K, V>(t.Key, t.Value); }
    }
    private IEnumerable<IBinarySearchTree<K, V>> Enumerate()
    {
        var stack = Stack<IBinarySearchTree<K, V>>.Empty;
        for (IBinarySearchTree<K, V> current = this; !current.IsEmpty || !stack.IsEmpty; current = current.Right)
        {
            while (!current.IsEmpty)
            {
                stack = stack.Push(current);
                current = current.Left;
            }
            current = stack.Peek();
            stack = stack.Pop();
            yield return current;
        }
    }

    // Static helpers for tree balancing
    private static int Height(IBinarySearchTree<K, V> tree)
    {
        if (tree.IsEmpty)
            return 0;
        return ((AVLTree<K, V>)tree).height;
    }
    private static IBinarySearchTree<K, V> RotateLeft(IBinarySearchTree<K, V> tree)
    {
        if (tree.Right.IsEmpty)
            return tree;
        return new AVLTree<K, V>(
            tree.Right.Key,
            tree.Right.Value,
            new AVLTree<K, V>(tree.Key, tree.Value, tree.Left, tree.Right.Left),
            tree.Right.Right);
    }
    private static IBinarySearchTree<K, V> RotateRight(IBinarySearchTree<K, V> tree)
    {
        if (tree.Left.IsEmpty)
            return tree;
        return new AVLTree<K, V>(
            tree.Left.Key,
            tree.Left.Value,
            tree.Left.Left,
            new AVLTree<K, V>(tree.Key, tree.Value, tree.Left.Right, tree.Right));
    }
    private static IBinarySearchTree<K, V> DoubleLeft(IBinarySearchTree<K, V> tree)
    {
        if (tree.Right.IsEmpty)
            return tree;
        AVLTree<K, V> rotatedRightChild = new AVLTree<K, V>(tree.Key, tree.Value, tree.Left, RotateRight(tree.Right));
        return RotateLeft(rotatedRightChild);
    }
    private static IBinarySearchTree<K, V> DoubleRight(IBinarySearchTree<K, V> tree)
    {
        if (tree.Left.IsEmpty)
            return tree;
        AVLTree<K, V> rotatedLeftChild = new AVLTree<K, V>(tree.Key, tree.Value, RotateLeft(tree.Left), tree.Right);
        return RotateRight(rotatedLeftChild);
    }
    private static int Balance(IBinarySearchTree<K, V> tree)
    {
        if (tree.IsEmpty)
            return 0;
        return Height(tree.Right) - Height(tree.Left);
    }
    private static bool IsRightHeavy(IBinarySearchTree<K, V> tree) { return Balance(tree) >= 2; }
    private static bool IsLeftHeavy(IBinarySearchTree<K, V> tree) { return Balance(tree) <= -2; }
    private static IBinarySearchTree<K, V> MakeBalanced(IBinarySearchTree<K, V> tree)
    {
        IBinarySearchTree<K, V> result;
        if (IsRightHeavy(tree))
        {
            if (IsLeftHeavy(tree.Right))
                result = DoubleLeft(tree);
            else
                result = RotateLeft(tree);
        }
        else if (IsLeftHeavy(tree))
        {
            if (IsRightHeavy(tree.Left))
                result = DoubleRight(tree);
            else
                result = RotateRight(tree);
        }
        else
            result = tree;
        return result;
    }
}
You missed an important optimization:
By putting the balancing logic in the constructor, you save allocations and you also do not need to think about balancing again in any of the operations that you perform. You also can reuse the same code for a different balancing algorithm like RB trees for everything but the constructor.
See Chris Okasaki and his paper containing implementation of red-black trees.
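The suggested optimization (balance once, at node-construction time) can be sketched with a "smart constructor". The following is a simplified, keys-only Python illustration of that idea; it is my own sketch, not Lippert's C# or Okasaki's code:

```python
class Empty:
    height = 0
    is_empty = True

EMPTY = Empty()

class Node:
    is_empty = False
    def __init__(self, key, left, right):
        self.key, self.left, self.right = key, left, right
        self.height = 1 + max(left.height, right.height)

def make(key, left, right):
    # Smart constructor: every node is rebalanced here, so the
    # insert logic below never thinks about balancing at all.
    if right.height - left.height >= 2:              # right-heavy
        if right.right.height < right.left.height:   # right-left case
            right = make(right.left.key, right.left.left,
                         make(right.key, right.left.right, right.right))
        return Node(right.key, Node(key, left, right.left), right.right)
    if left.height - right.height >= 2:              # left-heavy
        if left.left.height < left.right.height:     # left-right case
            left = make(left.right.key,
                        make(left.key, left.left, left.right.left),
                        left.right.right)
        return Node(left.key, left.left, Node(key, left.right, right))
    return Node(key, left, right)

def insert(t, key):
    if t.is_empty:
        return Node(key, EMPTY, EMPTY)
    if key < t.key:
        return make(t.key, insert(t.left, key), t.right)
    if key > t.key:
        return make(t.key, t.left, insert(t.right, key))
    return t

def keys(t):
    # In-order traversal.
    return [] if t.is_empty else keys(t.left) + [t.key] + keys(t.right)

t = EMPTY
for k in range(64):
    t = insert(t, k)
print(t.height)  # stays logarithmic despite sorted insertions
```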
Hi Eric,
Great posts about immutable objects! Go on...
About the Academic part I think that the code you're writing in this series has its source in academic knowledge and reasoning. It is proved if you see the basic data structures already implemented in the mainstream programming frameworks such as .NET. It's the base of everything. To do the job you inevitably have to use such data structures being rewritten here but this time in a more powerful approach. The benefits of using such immutable implementations are incredible. We want better approaches to already implemented solutions. That's what moves us forward.
Best,
Leniel Macaferi
Eric,
I completely agree on the Immutability front - it has made my life easier in many respects. Its a shame some serializers want to break it so readily (XmlSerializer being a big culprit) - could you please post your thoughts on these issues?
Also, I think the ternary/tertiary/conditional operator (take your pick) really helps when I am writing more functional code. I end up using less local variables, can refactor out commonly used expressions. I would love to hear opinions on this as well. For consideration I rewrote the MakeBalanced function:
private static IBinarySearchTree<K, V> MakeBalanced(IBinarySearchTree<K, V> tree)
{
return IsRightHeavy(tree) ?
(IsLeftHeavy(tree.Right) ? DoubleLeft(tree) : RotateLeft(tree)) :
IsLeftHeavy(tree) ?
(IsRightHeavy(tree.Left) ? DoubleRight(tree) : RotateRight(tree)) :
tree;
}
> I also find that programming in a more "functional" style using
> immutable objects somehow encourages me to write very small,
> clear methods that each do one thing very well. [...] I haven't
> managed to quite figure out why that is yet.
I think it's because of separability. A functional program (as any program with only immutable types is going to be) is inherently seperable, in that each expression or set of expressions can exist on its own, and only depends on its obvious arguments. If it has some semantic meaning, it can easily and safely be extracted into a function. The same isn't true of programming with mutable objects -- a set of expressions there may only make sense if the object is in some unusual state (if its invariant is violated in some way), and may therefore require other expressions to come before and yet others to come after. Pulling such a set of expressions out into a function (even if they have some associated semantics) is therefore dangerous, since there's then strong coupling between the implementation of that function and its caller(s).
> Another reason why immutable objects tend to be easier to reason
> about is that this style encourages non-nullable programming.
This one I really can't explain, but I'd certainly agree with your observation. I suspect this is as much an education thing as anything else -- it's not really any harder to use mutable types in a non-nullable fashion, but perhaps it's just much more obvious for immutable types?
I think a lot of the push back against this style comes from folks with mortgages to pay and established (ossified?) skill bases. I'm sure OO and before that HLL's came in for the same flack when they first appeared on the programming scene (of course in reality FP has been around *forever* in computing terms, but never so 'in your face' [in a good way] until now).
Re: where to balance -- good point, thanks Wesner.
Re: ternary operator vs if statement
I'm of two minds on that.
On the one hand, I cleave to the principle "statements should always have side effects, expressions should avoid side effects". Therefore, you are right, in this case I should have used a ternary operator rather than an if statement.
On the other hand, I find nested ternary operators difficult to read. It's just not a very well chosen syntax for legibility.
Since the actual code they generate is for all practical purposes identical it becomes solely a stylistic question, and in this case I deliberately chose to go for the more readable solution over the more "expression-like" solution.
What would be really awesome is if we had more operators that acted like statements. Query operators already act much like an expression version of "foreach", the ternary operator acts like an expression "if". A switch expression would be pretty neat. I'm not sure how exactly a "try" expression would work but it might be something like the null coalescing operator.
Hmm.
I wouldn't hold my breath waiting for this in any hypothetical future version of C#. But we can dream.
Eric, do you have books, articles, whitepapers you can recommend for better understanding how to adjust ones mindset for functional coding? I find that most code I deal with is in some fashion a statemachine, where every object is the keeper of its state and by using references state is kept. It also means that the moment you venture into multi-threading, there is a lot of lock and waiting on reset events, etc. I've been following your Immutable series with great interest, and love the style of coding, but call it operational debt, most problems I solve on a daily basis, I wouldn't know how to solve if my objects weren't keepers of state.
Okasaki's book Purely Functional Data Structures is great and has been a good resource for me in writing this series, but it is VERY academic. I will ask around and see if anyone can recommend to me a good book on functional programming.
You might consider learning F#. You get to keep using the .NET framework, so all your knowledge of the framework can still be put to good use.
re: questions of "Academic" exercise
My guess for the wariness is that immutable data structures incur an obvious runtime penalty (modification = copy = allocation), and people are reluctant to pay it. They want to be assured that there is a benefit to this style of coding. There may indeed be benefits (I've used this before, and I count myself as convinced enough to try more), but they are very hard to quantify.
Tom Kirby-Green (above) is wrong and condescending, I would argue. Many in-the-trenches developers are willing to try something new (if it doesn't cost too much), but (a) they don't have time to chase fads and (b) this has not been sold very well [not to imply that this is a fad - I know how old the idea is]. If the sales job improves, I think that this idea actually has a better chance of filtering down than most, because of the platform effect.
p.s. Love the series
Arne Claassen: I have heard that The Little Schemer is an excellent place to start - granted, it's not F#, but it is an effective teacher of FP techniques and theory.
I have been fascinated by this discussion on immutable data structures and have been doing some of my own research and investigation. It seems that this example of an immutable AVL tree is missing a couple of important optimizations that immutable data structures lend themselves to.
First, memory can be reduced by creating more classes that represent different shapes of a tree. There is no reason to store empty trees in the left and right variables. SingleLeaf, LeftLeaf and RightLeaf classes can be created, similar to the Empty class, that do not allocate left and right variables.
Second, even more memory can be saved if additional classes are created that represent the different states of tree balance. Three classes can be created that represent non-leaf trees that are Balanced, LeftHeavy, and RightHeavy. This should totally get rid of the need to store Height, because the class the tree is built from holds information about the balance of the tree, similar to the way the IsEmpty property holds information about the shape of the tree. New properties such as IsLeaf, IsBalanced, IsLeftHeavy, and IsRightHeavy can be created for each class. This can also lead to simpler and faster code because each class of tree only needs to deal with its special shape.
Does this make sense or am I all wet?
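For what it's worth, the shape-specialization idea described above could look like this Python toy (my own illustration, not the blog's C# AVL code): leaves carry no child slots at all, and balance/shape information lives in the class rather than in a stored field.

```python
# Hypothetical shape-specialized immutable tree nodes.
# A Leaf stores no left/right slots, so it is smaller than a full Node,
# and "is this a leaf?" is answered by the class itself.

class Empty:
    __slots__ = ()
    is_empty = True
    height = 0

EMPTY = Empty()  # one shared instance is enough for an immutable tree

class Leaf:
    __slots__ = ("key",)          # no child pointers allocated at all
    is_empty = False
    height = 1

    def __init__(self, key):
        self.key = key

class Balanced:
    __slots__ = ("key", "left", "right")  # no stored height needed...
    is_empty = False

    def __init__(self, key, left, right):
        self.key = key
        self.left = left
        self.right = right

    @property
    def height(self):
        # ...because for a Balanced node both subtrees have equal height;
        # LeftHeavy/RightHeavy variants would adjust this accordingly.
        return self.left.height + 1

tree = Balanced(5, Leaf(2), Leaf(8))
assert tree.height == 2
assert not hasattr(Leaf(2), "left")   # leaves really carry no child slots
```

The same trick translates directly to C#: one sealed class per shape, with the balance encoded in the type instead of a per-node Height field.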
|
http://blogs.msdn.com/b/ericlippert/archive/2008/01/21/immutability-in-c-part-nine-academic-plus-my-avl-tree-implementation.aspx?Redirected=true
|
CC-MAIN-2015-22
|
refinedweb
| 3,671
| 54.93
|
I wish to have an internal (non window) dialog to ask for member input. I would like the dialog to be placed centrally on an existing JPanel.
I have looked at layered panes and these seem unusable due to only having a single layout manager (or no layout manager) across all the panes. I guess I could try to override JLayeredPane and provide a custom layout, but this seems extreme.
Glass panes don't seem to be appropriate either.
How can this be done? Is there no usable concept of z-indexes in Swing?
EDIT
The reason Layered Panes weren't appropriate was due to the lack of a layout manager per layer. The panel is resizeable, Panel A should stay at 100% of area and Panel B should stay centralized.
I think LayeredPane is your best bet here. You would need a third panel, though, to contain A and B. This third panel would be the layeredPane, and then panels A and B could still have nice LayoutManagers. All you would have to do is center B over A, and there are quite a lot of examples in the Swing trail on how to do this. Tutorial for positioning without a LayoutManager.
public class Main {

    private JFrame frame = new JFrame();
    private JLayeredPane lpane = new JLayeredPane();
    private JPanel panelBlue = new JPanel();
    private JPanel panelGreen = new JPanel();

    public Main() {
        frame.setPreferredSize(new Dimension(600, 400));
        frame.setLayout(new BorderLayout());
        frame.add(lpane, BorderLayout.CENTER);
        lpane.setBounds(0, 0, 600, 400);
        panelBlue.setBackground(Color.BLUE);
        panelBlue.setBounds(0, 0, 600, 400);
        panelBlue.setOpaque(true);
        panelGreen.setBackground(Color.GREEN);
        panelGreen.setBounds(200, 100, 100, 100);
        panelGreen.setOpaque(true);
        lpane.add(panelBlue, new Integer(0), 0);
        lpane.add(panelGreen, new Integer(1), 0);
        frame.pack();
        frame.setVisible(true);
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        new Main();
    }
}
You use setBounds to position the panels inside the layered pane and also to set their sizes.
Edit to reflect changes to original post You will need to add component listeners that detect when the parent container is being resized and then dynamically change the bounds of panel A and B.
|
https://codedump.io/share/pEjs06Ch5lk9/1/java-swing---how-to-show-a-panel-on-top-of-another-panel
|
CC-MAIN-2016-50
|
refinedweb
| 359
| 57.87
|
As I think about it more, I like the fluent idea better. This way I would
get to choose if I wanted my parents to add to or replace the existing
parents. So make both setParents and add methods return "this".
Actually, there is one thing about the current marker implementation that
thoroughly confused me. slf4j used to call the relationships "children",
but they have since deprecated that name and now call them "references".
log4j calls them "parents". And instead of "isInstanceOf", slf4j calls that
method "contains". Because of this, I originally thought that the
log4j-slf4j-impl was written backwards. I had to write some test code to
prove to myself that the names don't in fact matter one bit. In that
regard, I would prefer that instead of having "getParents" and
"isInstanceOf", we have "getReferences" (like slf4j) and "references". It
would fit better to what I think this is actually modelling, a directed
graph, rather than a parent child relationship. But my guess is that will
be seen as too big of a change.
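To make the get/define distinction from the thread concrete, here is a toy Python model of the proposed registry semantics (names and behavior are my reading of the proposal, not the real log4j API): getMarker only looks up or creates, while define also (re)sets the parent references, so the end state no longer depends on which class loads first.

```python
# Toy model of the proposed MarkerManager semantics (not the real log4j API).

_markers = {}

class Marker:
    def __init__(self, name):
        self.name = name
        self.parents = ()

def get_marker(name):
    """Look the marker up, creating it if needed; never touches parents."""
    if name not in _markers:
        _markers[name] = Marker(name)
    return _markers[name]

def define(name, *parent_names):
    """Create-or-get the marker AND set its parents to exactly this list."""
    marker = get_marker(name)
    marker.parents = tuple(get_marker(p) for p in parent_names)
    return marker

# Load order no longer matters: either way, "m" ends up with parent "p".
get_marker("m")            # class B happens to load first...
define("m", "p")           # ...then class A defines the relationship
assert [p.name for p in get_marker("m").parents] == ["p"]
```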
On Mon, Apr 7, 2014 at 7:46 AM, Bruce Brouwer <bruce.brouwer@gmail.com>wrote:
> I would be happy with a solution that looked like this:
>
> private static final Marker m =
> MarkerManager.getMarker("m").setParents("p");
>
> Where setParents is fluent by simply returning "this". Other names would
> be possible: .parents(...), .withParents(...). I really dislike the slf4j
> method where I have to setup parent relationships somewhere else, such as
> in a static initializer block.
>
> I'm not keen at all on changing the way marker hierarchies work to be
> something like com.foo.MyMarker1. In that regard, I like the current
> solution best.
>
>
> On Mon, Apr 7, 2014 at 7:19 AM, Gary Gregory <garydgregory@gmail.com>wrote:
>
>> On Mon, Apr 7, 2014 at 3:04 AM, Ralph Goers <ralph.goers@dslextreme.com>wrote:
>>
>>> Gary, Markers are not Loggers. People don't use them to represent Java
>>> classes but to represent states, categories or extensions to logging
>>> levels. Would you prefer that the logging levels be "
>>> org.apache.logging.log4j.INFO" instead of just INFO? In addition,
>>> having to specify the full hierarchy would be quite painful and would make
>>> it awkward to add a new Markers to existing Markers. For example, if
>>> something added "root" as the parent of "com" would you suddenly need to do
>>> "root.com.foo.bar.MyMarker"?
>>>
>>
>> This last example is great because it shows the confusing aspect of the
>> whole deal.
>>
>> In one case, with loggers, we have parent-child hierarchies that work one
>> way: root.com.MyLogger and com.MyLogger have nothing in common (except the
>> root logger I suppose). But in the case of Markers, hierarchies work
>> differently, where, if I understand I can change all com.foo.MyMarker1,
>> com.foo.MyMarker2 into root.com.foo.MyMarker1, root.com.foo.MyMarker2? How
>> am I supposed to get/find my maker if someone adds a parent to it? Can I
>> always get Marker1, Marker2 by calling getMarker("Marker1")? If so, then
>> the parents are really attributes or behave in a new way hierarchies don't
>> usually do not behave. It is more like interfaces (implements) and classes
>> (extends). So we need a different name for this relationship perhaps?
>>
>> Gary
>>
>>>
>>> Ralph
>>>
>>>
>>> On Apr 6, 2014, at 8:39 PM, Gary Gregory <garydgregory@gmail.com> wrote:
>>>
>>> > Before dealing with the complications of more than one parent, I think
>>> one concept mere mortals should be able to deal with is being able to
>>> define a marker hierarchy the same way we can define logger hierarchies
>>> with dot names. So I can say MarkerManger.getMarker("com.foo.bar.MyMarker").
>>> >
>>> > Gary
>>> >
>>> > Gary
>>> >
>>> >
>>> > -------- Original message --------
>>> > From: Bruce Brouwer
>>> > Date:04/06/2014 22:35 (GMT-05:00)
>>> > To: Log4J Developers List
>>> > Subject: Proposed change to MarkerManager API
>>> >
>>> > I hate changing API as much as the next guy, but there is an API in
>>> MarkerManager that I think could be improved.
>>> >
>>> > MarkerManager.getMarker(name) gets a marker, optionally creating it if
>>> it doesn't exist. I support this with no change.
>>> >
>>> > MarkerManager.getMarker(name, parents...) I have an issue with. It
>>> does not in all cases return me a marker that has the parents specified. If
>>> the marker already existed, it is simply returned with no changes made to
>>> the parents. I propose removing this method and replacing it with...
>>> >
>>> > MarkerManager.define(name, parents...) This method will create the
>>> marker with the specified parents if it does not exist. If it does exist,
>>> it will change the parent list to be the list specified.
>>> >
>>> > Here's another reason I want to change this. Consider these two
>>> classes:
>>> >
>>> > public class A {
>>> > private final static Marker m = MarkerManager.getMarker("m", "p");
>>> > }
>>> >
>>> > public class B {
>>> > private final static Marker m = MarkerManager.getMarker("m");
>>> > }
>>> >
>>> > If class A gets loaded first, marker "m" will have parent "p". But if
>>> class B gets loaded first, then marker "m" will have no parents. I
>>> generally don't like relying on the exact order that classes are loaded
>>> when the two classes aren't related.
>>> >
>>> > But if we consider my proposed change:
>>> >
>>> > public class A {
>>> > private final static Marker m = MarkerManager.define("m", "p");
>>> > }
>>> >
>>> > public class B {
>>> > private final static Marker m = MarkerManager.getMarker("m");
>>> > }
>>> >
>>> > Now the behavior is the same, no matter which class is loaded first.
>>> It looks clearer to me as the definition of the marker happens in one
>>> place, while class B is only interested in a reference to a potentially
>>> already defined marker, but makes no statement about what parents it has.
>>> >
>>> > I also don't like the idea of the define method throwing an exception
>>> if the marker already exists or getMarker returning null if the marker
>>> doesn't exist; again for the reasons that I don't want class loading order
>>> to impact behavior.
>>> >
>>> > I understand that if there are two statements defining a marker with
>>> different parents, whichever one runs last is going to be the winner. It's
>>> not great if that happens, but I shouldn't be defining a marker in two
>>> places. In reality, the best thing would be to make class B reference the
>>> marker field in class A. If I do that, then the current implementation
>>> would work fine; I'm just thinking of the case where somebody doesn't (or
>>> can't) do the best thing. Then, a distinction between get and define could
>>> be helpful.
>>> >
>>> > Does this sound like an acceptable change that we could get into
>>> log4j-api before GA?
>>> >
>>> > --
>>> >
>>> > Bruce Brouwer
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: log4j-dev-unsubscribe@logging.apache.org
>>> For additional commands, e-mail: log4j-dev-help@logging.apache.org
>>>
>>>
>>
>>
>> --
>> Java Persistence with Hibernate, Second Edition<>
>> JUnit in Action, Second Edition <>
>> Spring Batch in Action <>
>> Tweet!
>>
>
>
>
> --
>
> Bruce Brouwer
>
--
Bruce Brouwer
|
http://mail-archives.apache.org/mod_mbox/logging-log4j-dev/201404.mbox/%3CCACSMMYShbvsXzsP-=N+Z3EZ3euM21yFicAKO39E_gBE_io7L5A@mail.gmail.com%3E
|
CC-MAIN-2018-22
|
refinedweb
| 1,128
| 56.86
|
Escalation Engineer for SharePoint (WSS, SPS, MOSS) and MCMS All posts are provided "AS IS" with no warranties, and confers no rights.
Some of our field controls shipped with MOSS show a quite ugly behaviour as they add an extra entry after the field. Means an extra space.
This is especially ugly for the RichImageField control as it prevents two images to show up right beside each other. This article shows that this is actually a long known issue. But this article only shows a workaround which can be compared with the approach we used in CMS 2001 before we had custom placeholder controls: to hide the control in published mode, to read the content and then to modify it in the way we want to have it.
An approach in CMS 2002 would have been to create a custom placeholder for this. So only one single change and you can benefit from this anywhere on your site.
A similar approach can easily be implemented using a custom field control in MOSS.
A field control that would fix the problem discussed in this article would look like this:
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using System.Web;
using System.Web.UI;
using Microsoft.SharePoint.Publishing.WebControls;

namespace StefanG.SharePoint.WebControls
{
    public class CorrectedRichImageField : RichImageField
    {
        protected override void RenderFieldForDisplay(HtmlTextWriter output)
        {
            // create a new tempWriter to consume the output from the base class
            TextWriter tempWriter = new StringWriter();
            base.RenderFieldForDisplay(new HtmlTextWriter(tempWriter));

            // capture the output of the base class and do the required adjustments
            string newHtml = tempWriter.ToString().Replace("</span> ", "</span>");

            // write out the corrected html
            output.Write(newHtml);
        }
    }
}
If you need information about how to implement such a field control please have a look at the relevant topic in the SDK:
Thanks for the tip. My team have started evaluating MOSS as a WCM platform and we've run into issues re accessibility and poor markup (tables being used instead of CSS, spacers, etc) when using the standard controls so this technique will come in very handy. Do you know if there are plans to provide web standards-friendly controls in future (e.g. via service pack or separate download)?
Hi Mark,
the same question often came up with CMS 2002 and the anwer was usually that this should be addressed with custom placeholder controls.
So I would not expect it. But if enough requirements from customers arrive through service requests it might be that this question will be evaluated again for a service pack in the future.
Cheers,
Stefan
Not the answer I wanted to hear but thanks for replying all the same :-)
One question on RichImageFields: is it possible to have a custom page layout with multiple PageImages on it? I've tried many times today, but couldn't get it to work.
FieldName was set to PublishingPageImage; ID was set to something different for each instance.
The page 'renders' correctly in SharePoint Designer, and when editing the page in SharePoint I get to see the placeholders and I can set an image for each, but when I save, all image fields show the exact same image, regardless of how I set them...
Any thoughts on this?
Hi Iskander,
first you need to create a content type to be used by your page layout. The page layout is just the rendering window for the content type. For each field control you add you need to have an individual column in the content type.
Then bind the different field controls to the different columns.
Hi Stefan,
My biggest issue with controls are the web parts that come out of the box or even custom built ones. The web part is wrapped with table tags. Has anyone tried css adapters for web parts?
thanks,
anabhra
I have a question.
I want to replace the original SharePoint DateTime control with my own modified DateTime control. Is that possible?
Regards,
Sed Taha
Hi Sed,
if this is on a page layout you can do this.
Best would be to derive your control from the original one.
Can you show an example of this being used in an aspx page?
Hi Ed,
you can only use this in a page layout - like the original field controls comming with MOSS.
You have to open the page layout using SharePoint Designer and then you have to replace the original RichImageField control with the CorrectedRichImageField in Html View.
Also ensure to register the namespace of your DLL and class at the top.
Have a look in the SDK for details.
Does it matter what namespace you use? Does it have to end with .SharePoint.WebControls?
The namespace can be different.
Just ensure that you add the correct namespace as a reference in your page layout.
We came across a random issue lately where a publishing image (RichImageField) does not render correctly
Thanks.
And I use RenderFieldForDisplay for my custom render.
DisplayPattern is hard to use for me.
|
http://blogs.technet.com/stefan_gossner/archive/2007/03/29/how-to-overcome-glitches-with-the-standard-field-controls-shipped-with-moss-2007.aspx
|
crawl-002
|
refinedweb
| 845
| 64.81
|
Sorry for the tongue twister 😉
I’ve been somewhat busy lately and it’s been a long time since my last post. I have a few projects on the go but not much time to sit down and write about them… Let’s see if this one goes through…
I've lately been looking for a reliable UPS system for the Raspberry Pi 3. I moved my home server to an RPi a few months ago, and even though it's behind an ACS UPS, a couple of other projects involving RPis required mobility (one of them) and unassisted power backup (the other). So I started browsing several marketplaces looking for a solution.
Some “available” solutions
There are some (not many) solutions out there. I had a few requirements and some good-to-haves. Initially price was not an issue, and that was good since it allowed me to focus on the features:
- Hat form factor (I wasn’t looking for a USB power bank)
- Capable of delivering power to the Raspberry Pi while charging
- Capable of delivering enough power for a Raspberry Pi 3 running several services (but headless)
- Battery monitoring from the OS
- RTC (nice to have)
- Compatible with LiPo batteries, other chemistries as a bonus
I don't want to favor one solution over another. But the UPS PIco HV 3.0 might be the best option "available", and I quote "available" because it's out of stock everywhere I have checked. It sells for about USD 34, supports LiPo and LiFePO4 batteries, delivers up to 3A, and has battery monitoring, an RTC and a zillion other goodies. There is also the LiFePO4wered/Pi3. It is not an RPi hat, but it's small enough. It supports only LiFePO4 batteries and lacks an RTC. It sells for USD 42.
Of course these two products are not backed by big companies but by good professionals that have probably designed them to solve a need they had. And at some point they thought there was a market niche here. Actually both of them are on Tindie.
Perfect is the enemy of good
The UPS PIco was almost perfect but I just couldn’t find a way to buy it. So when I found the UPS Power Pack by GeekWorm that was less than USD 16 (without battery) I thought it was not too risky to give it a try.
It supports LiPo batteries, you can use the Raspberry Pi while charging and it can deliver up to 2A (1.4A while charging). It provides an I2C interface to check the battery charge status from the OS and support hot unplug. There is little information provided by the manufacturer but the people at raspberrypiwiki.com have gathered some info about it.
You can buy a GeekWorm Power Pack Pro v1.1 without battery [Ebay] for less than 15€ or with a 2500mA LiPo battery [Ebay] for 22.5€.
With a little help from my friends
The Geekworm Power Pack Pro has one main issue. It has two power output modes: via the secondary micro USB connector or via the GPIO and, incomprehensibly, the first one is the default output. That means that even though the board "boots" when power is applied (for instance after a power blackout), it won't power the Raspberry Pi until you long-press the side button.
What were they thinking about? This ruins the whole purpose of the hat.
Fine. This required some help. I needed a way to set the hat into GPIO output power mode automatically and, since we are “improving” the hat, why not adding an RTC and maybe a way to power a secondary device.
This last point was a requirement in one of the projects I was working on: I needed a way to power cycle a Texas Instruments SensorTag because I needed it in “discoverable” mode so the Raspberry Pi could connect to it. The SensorTag enters “discoverable” mode on boot or when you click one of the side buttons.
Let me introduce you the GeekWorm monitor. It sports an ATTiny85 that monitors the hat power supply and the RPi 3V3 pin and changes the power output mode if the Raspberry Pi is not being powered.
But that's not everything. It also monitors a specific GPIO of the RPi and expects a HIGH there. If it is not HIGH, it allows 3 minutes, in case the Raspberry Pi is booting, before power cycling it. This is necessary because in the event that a powerdown is executed by the OS, the Raspberry Pi won't boot by itself unless its power supply is reset.
Imagine you are running on batteries (no external power) and the OS detects the battery is running low and it executes a preventive power down to avoid data corruption. But then power comes back and the battery is charging again and we would like to resume operations. The ATTiny85 will notice the Raspberry Pi is not running and will want to reset it. The only way to reboot the RPi is to disconnect it and reconnect it. Well, changing the power output mode to USB and back to GPIO does just that.
Finally the board also features a DS3231 RTC backed by a coin cell and a secondary connector to power an external device via a MOSFET driven by the Raspberry Pi.
You might have notice the purple. Yes, you can order the GeekWorm Monitor from OSH Park:
Normal operation
Let me summarize the operation:
- Under normal circumstances, the USB cable will be connected to the hat power input at all times, powering and charging the LiPo unless there is a power outage.
- If that happens the battery will start powering the Raspberry Pi immediately so it won’t reset. And the battery will start discharging…
- At this point the Raspberry Pi is monitoring the battery charge level via I2C and might decide to shutdown to prevent data corruption
- Eventually power will be back and the battery will start charging but the Power Pack is not powering the RPi
- The ATTiny85 gets its power from the 5V input line of the hat, so it wakes and checks the Raspberry Pi 3V3 pin and it reads low.
- The ATTiny85 pushes the button line to ground and changes the power output mode
- It will wait up to 3 minutes for a certain GPIO to be asserted true before changing the power output mode (and go back to 5).
- If the pin is asserted HIGH then everything is OK. The ATTiny85 will keep on monitoring the pin in case it goes low for whatever reason.
The GeekWorm Monitor
The ATTiny85
The ATTiny85 is powered by the input 5V. This is very convenient since it sets the operation in a well known state: “I’m awake, so the battery is charging, so the Raspberry Pi should be running”.
The red wire powers the ATTiny85, the black one is used to tie the button to ground to change the power output mode
Changing the power output mode is done pushing down the button line via a transistor for 5s effectively emulating a button push. The ATTiny85 will do that if it detects the Raspberry Pi is not being powered or when it needs to power cycle it.
Flashing the ATTiny85 is done using a USBasp programmer like the one below (you can buy it here: USBASP Programmer [Ebay]). You might remember I used the same "edge connector" with my Solr project.
The code is very straight forward.
#include <Arduino.h>

#define MODE_GPIO 0             // Connected to the Geekworm mode button
#define POWER_GPIO 1            // Connected to the RPi 3V3
#define RUNNING_GPIO 2          // Connected to the RPi GPIO set at rc.local
#define CHECK_DELAY 5000        // Run check every 5 seconds
#define CHECK_RUNNING 36        // Check RPi GPIO every 36 delays (3 minutes)
#define CHANGE_MODE_DELAY 5000  // Hold mode button down for 5 seconds

unsigned int count = 0;

void pressButton() {
    digitalWrite(MODE_GPIO, HIGH);
    delay(CHANGE_MODE_DELAY);
    digitalWrite(MODE_GPIO, LOW);
    count = 0;
}

void setup() {
    pinMode(MODE_GPIO, OUTPUT);
    pinMode(POWER_GPIO, INPUT);
    pinMode(RUNNING_GPIO, INPUT);
    digitalWrite(MODE_GPIO, LOW);
}

void loop() {

    // Every X seconds
    delay(CHECK_DELAY);

    // Check if RPi3 is being powered
    bool powered = digitalRead(POWER_GPIO) == HIGH;

    // Change mode if RPi3 is not being powered
    if (!powered) pressButton();

    // Every X delays
    count = (count + 1) % CHECK_RUNNING;
    if (0 == count) {

        // Check GPIO
        bool idle = digitalRead(RUNNING_GPIO) == LOW;

        // Press button if GPIO is not HIGH.
        // If switching off, loop will switch it on again after
        // CHECK_DELAY seconds, effectively resetting the RPi
        if (idle) pressButton();
    }
}
Saying I’m alive
I said the ATTiny85 is monitoring the 3V3 pin to know if the Raspberry Pi is being powered but it’s also monitoring a GPIO to know if it’s actually running. It expects a HIGH value there and will wait 3 minutes to see it before rebooting the RPi.
Forcing the Raspberry Pi to output a HIGH in a certain GPIO is very easy and does not require any special language or library, you just have to use the /sys/class/gpio path. Edit the /etc/rc.local file and add this contents before the exit line:
echo 14 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio14/direction
echo 1 > /sys/class/gpio/gpio14/value
Monitoring the battery
I used a Python script, supervised with supervisor, to monitor the battery level and power down the Raspberry Pi if it goes below a certain threshold for a certain time. Since the code is too long (and includes several files) to copy here, I have created a Gist with it: Geekworm battery monitor script. The only dependency is ruamel.yaml, to read the YAML configuration file.
Note that the code is conservative and requires two different conditions to perform a power down:
- The battery level must be below a certain threshold
- The battery must have been continuously discharging for a certain amount of time
If either of these two conditions becomes false, the power down is not executed or is cancelled.
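The linked gist is too long to inline here, but the core of that two-condition rule can be sketched as follows (a simplified stand-in of my own, not the actual gist code; thresholds are invented):

```python
import time

# Simplified stand-in for the battery monitor's shutdown decision.
# A shutdown is allowed only when the level is below the threshold AND
# the battery has been continuously discharging for the grace period.

LEVEL_THRESHOLD = 15          # percent (made-up value)
DISCHARGE_GRACE = 300         # seconds of continuous discharge required

class ShutdownGuard:
    def __init__(self, threshold=LEVEL_THRESHOLD, grace=DISCHARGE_GRACE):
        self.threshold = threshold
        self.grace = grace
        self.discharging_since = None

    def update(self, level, discharging, now=None):
        """Return True only when BOTH conditions hold; reset otherwise."""
        now = time.monotonic() if now is None else now
        if not discharging:
            self.discharging_since = None     # power is back: cancel
            return False
        if self.discharging_since is None:
            self.discharging_since = now      # discharge period starts
        if level >= self.threshold:
            return False                      # still enough charge
        return (now - self.discharging_since) >= self.grace

guard = ShutdownGuard()
assert guard.update(10, True, now=0) is False      # just started discharging
assert guard.update(10, True, now=400) is True     # low AND long enough
assert guard.update(10, False, now=500) is False   # power back: cancelled
```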
Prototyping zones?
Just a side note so you don't make the same mistake I did. Do you see those two prototyping zones on the board?
Well, they are not. They are all connected to ground… maybe a cooling mechanism?
GeekWorm Power Pack Hat Hack by Tinkerman is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Thanks for this post !
I’m looking into solving similar problem and have similar requirements as you.
Have you considered UPS HAT “S.USV pi basic” ? It seems to be working correctly and has decent reviews ?
I have not tested it. I’m sure it’s a valid option if people is reviewing it correctly. Only “problem” is that it’s 4 times as expensive as the Geekworm board….
Hey, have you checked the new LiFePO4wered/Pi+ ?
Yes, it’s great. It solves all the issues of the Geekworm but it’s 2-3 times more expensive…
Hi, what type of transistors are Q1 and Q2?
They are 7002.
|
https://tinkerman.cat/geekworm-power-pack-hat-hack/
|
CC-MAIN-2019-04
|
refinedweb
| 1,828
| 60.04
|
On 2019-01-16 14:29, Cédric Le Goater wrote:
> On 1/16/19 12:47 PM, Thomas Huth wrote:
>> On 2019-01-16 12:43, Cédric Le Goater wrote:
>>> On 1/11/19 9:17 AM, Thomas Huth wrote:
>>>> When compiling the ppc code with clang and -std=gnu99, there are a
>>>> couple of warnings/errors like this one:
>>>>
>>>>   CC      ppc64-softmmu/hw/intc/xics.o
>>>> In file included from hw/intc/xics.c:35:
>>>> include/hw/ppc/xics.h:43:25: error: redefinition of typedef 'ICPState' is
>>>> a C11 feature [-Werror,-Wtypedef-redefinition]
>>>> typedef struct ICPState ICPState;
>>>>                         ^
>>>> target/ppc/cpu.h:1181:25: note: previous definition is here
>>>> typedef struct ICPState ICPState;
>>>>                         ^
>>>> Work around the problems by including the proper headers instead.
>>>
>>> Thomas,
>>>
>>> After a closer look, I think we should use 'void *' under PowerPCCPU
>>> as it was the case before I introduced the second interrupt presenter.
>>
>> If you don't like the #includes, why not simply do anonymous struct
>> forward declarations here? I think that would be better than "void *".
>
> yes.
>
>>> That's a bigger change reverting bits of already merged patches. I can
>>> take care of it if you prefer.
>>
>> Could I keep the current patch in my series so that I can get the
>> patches finally merged? You could then do any clean up that you like on
>> top of it, ok?
>
> OK.
>
> See below the patch I would propose. Compiled tested with clang -std=gnu99.

[...]

> @@ -1204,8 +1199,8 @@ struct PowerPCCPU {
>      int32_t node_id; /* NUMA node this CPU belongs to */
>      PPCHash64Options *hash64_opts;
>  #ifndef CONFIG_USER_ONLY
> -    ICPState *icp;
> -    XiveTCTX *tctx;
> +    struct ICPState *icp;
> +    struct XiveTCTX *tctx;
>  #endif

That's pretty much what I had in an earlier version of my patch:

But Greg did not like it:

 Thomas
|
http://lists.gnu.org/archive/html/qemu-devel/2019-01/msg03966.html
|
CC-MAIN-2019-22
|
refinedweb
| 291
| 61.87
|
#include "ltocr.h"
L_LTOCR_API L_INT EXT_FUNCTION L_OcrPage_SaveZonesFile(page, fileName, pageNumber, xmlOptions)
Saves the zones of this L_OcrPage to a multi-page zones disk file with XML options.
The L_OcrPage_SaveZonesFile method saves the zones of a particular OCR page to a multi-page disk file. If the file existed previously, this method will replace the zones specified in 'pageNumber' with the zones of the L_OcrPage. If the file does not contain zones for the specified page number, the zones will be appended to the end of the file and can be loaded later using L_OcrPage_LoadZonesFile.
If you wish to save the zones of all L_OcrDocument pages to a file, you have to loop through the OCR document pages, saving each page's zones to the same file; the save method will append the zones of each page to the file, giving you a multi-page zones file. The saved data will contain the page number of the zones. To load these zones, you also have to loop through all your OCR document pages, loading each page's zones separately and passing the L_OcrPage_LoadZonesFile method the page number whose zones you wish to load.
Note on loading zones from a multi-page zone file: If the file does not contain zones data with the correct page number, the engine will not load any zones for this page. After the method returns, any OCR page that did not have zones data will contain zero zones. You can then use L_OcrPage_AutoZone if required to re-zone this page.
The zones of this page will first be cleared prior to loading the new items.
Saving zones to an external file can be useful when you are processing forms. For example, you can load one of the forms and automatically find the zones inside it using L_OcrPage_AutoZone; if the automatic zone detection was not 100 percent satisfactory, you can update the page zones manually and then save the result with L_OcrPage_SaveZonesFile. Once the zones are saved, you can process all similar forms in the following manner:
Required DLLs and Libraries
For an example, refer to L_OcrPage_AutoZone.
|
https://www.leadtools.com/help/leadtools/v19/ocr/api/l-ocrpage-savezonesfile.html
|
CC-MAIN-2017-47
|
refinedweb
| 347
| 63.73
|
Creating Triggers in Silverlight
In this article we are going to look at how we can create Triggers in Silverlight and what they offer us.
First step: install the Blend 4 SDK from the Microsoft site. The DLL we require to implement the triggers is Microsoft.Expression.Interactions.dll. Please find the screenshot below:
I created a new Silverlight project and named it SilverlightTriggers. Add a reference to the DLL Microsoft.Expression.Interactions.dll.
Let's create a simple trigger to animate the border of the Button. No code-behind is required; all the work has to be done in the XAML. Let's go one step at a time. Add a Button to the Grid.
Add the namespaces to the Button tag.
Add the StoryBoard in the Resources tag of the Button.
Finally, add the Trigger. I have added a simple Trigger which performs the play operation on the storyboard.
Output: The Button is displayed as shown below. Click on it and you can see it animate. You can animate it differently by changing the storyboard; this is just an illustration. Happy coding!
|
http://www.dotnetspark.com/kb/4669-create-triggers-silverlight.aspx
|
CC-MAIN-2017-13
|
refinedweb
| 198
| 69.07
|
Server admin log/Archive 5
October 31
- 21:09 brion: added some tor ips from [1] manually to mwblocker.log
- 12:10 mark: Started squid on will
- 10:53 Tim: set up hourly apache-restart on yaseo apaches
- 10:05 sleeeepy-brion: started tarball-of-doom from khaldun->albert for enwiki images (non-thumbs) trickling
- 09:49 zzzz-brion: restarted last of search servers in tampa with data updated from snapshots
- 07:25 tired-brion: restarting tarball-of-doom on bacon under trickle so it doesn't slow things down
- 07:05 wacky-brion: creating giant tarball-of-doom on bacon to snapshot commons files for archive/copy
- 06:24 scary-brion: lifted restriction on reuploads. commons main files and archives now updated, thumbs seem to work (and copying in updates to maybe save some render time). may need to re-touch permissions at end
- 05:30 fiendish-brion: mounting bacon's /var/upload2 on /mnt/upload2 on zwinger, apaches
- 02:42 evil-brion: disabled reload priv for all users, all wikis, to try to get this image crap over and done with soon. going to migrate live commons files to bacon to try to reduce albert load
- 02:25 goblin-brion: humboldt set up and running as an apache, in lvs
- 01:44 ghoul-brion: srv6 also refusing connections, squid stuck, had to kill and restart
- 01:15 pirate-brion: srv7 refusing connections on port 80, but squid seemed to be stuck (restart complained squid was already running). killed and restarted squid, seems ok now
- 00:55 ghost-brion: bugs:3838 set localtime to UTC on ixia, lomaria, thistle
- 00:45 daemon-brion: added 'bugs'/'bugzilla' interwiki prefix on wikitech
- 00:39 zombie-brion: bugs:3839 installed ntp on humboldt to sync time
October 30
- 21:40-22:10 Tim: deployed LVS in front of squid at yaseo.
- 21:59 hashar: created dsh group apaches_yaseo (synced from amaryllis), moved apaches to apaches_pmtpa and set symlink for backward compatibility.
- 16:00 mark: I moved anthony from internal to external VLAN, gave it ip .233, and wanted to make it a temporary Squid. However, it's giving disk errors, so that might not be such a good idea. Added to datacentre tasks for Kyle to look at.
- 15:00 mark: Upgraded all yaseo squids to the new squid RPM.
- 14:40 mark: Upgraded all pmtpa squids (except will, which is running FC2) to the new squid RPM.
- 14:10 mark: Upgraded all knams squids (except clematis, for comparison) to a new squid RPM, squid-2.5.STABLE12-1wm. This is a somewhat newer upstream Squid version, and also has a cron job added that checks whether Squid is still (supposed to be) running, and restarts it if it's not.
- 14:10 hashar: fixed a bug with server inventory, was putting larousse data every time.
- 13:50 mark: Installed NTP on vandale
- 13:40 mark: Moved LVS back from iris to pascal
- 13:25 hashar: started the Server inventory bot on wikitech site. Need feedback.
- 10:29 brion: running lucene wikipedia index rebuilds in pmtpa
- 09:23 brion: restarted yaseo apaches; extreme slowness in HTTP connection and response time, and some segfaults in logs. Seems better after restart.
- 06:50 brion: activated lucene search daemon in yaseo. running non-wikipedia index rebuilds in pmtpa
- 06:00 brion: restarted search daemons in pmtpa with wikisource. building ms/th/ko/ja indexes in yaseo, going to start more rebuilds in pmtpa...
- 04:10 Tim: installed PHP on yf1007, for some reason it wasn't there
- 04:05 Tim: Took yf1006, yf1008, yf1010, yf1017 out of rotation, they were segfaulting on form submission, e.g. save and move.
- 02:45 Tim: Set up mxircecho at yaseo
- 02:30 brion: setting up yf1017 as search server for yaseo
- 02:14 Tim: moved jawiki to yaseo
- 02:00 brion: running lucene index builds from last dumps for *wikisource
--Hashar 06:10, 30 October 2005 (PST)
October 29
- 23:53 hashar: renamed squids dsh file to squids_pmtpa (and put a symlink)
- 22:35 Tim: moved thwiki and mswiki to yaseo
- 21:19 hashar: BUG srv24 & srv27 out of apache group (see October 19) but are still in ganglia Apache group.
- 20:29 Tim: unmounted dead NFS mount srv2:/usr/local/fcache everywhere
- 20:00 Tim: took webster (and srv9) out of dsh ALL until someone can work out how to set up LDAP
- 15:41 hashar: made squid error message validate (size is invalid for hr element)
- 15:25 hashar: some people on irc told me that search on sources wikis doesn't work. Looking at MWDaemon.log , the search indexes do not exist and need to be created. Need some documentation on LuceneSearch.
- 07:40 brion: starting dump jobs on benet, srv35, srv36
- 07:06 Tim: Moved benet's increasingly large collection of NFS mount points to /var/backup/public/mnt, with symlinks left behind. They were previously scattered all over the place. There's a bug in lighttpd which requires them to be mounted in the document root. Mounted a directory from bacon, with some image dumps in it.
- 06:22 brion: fixing up more broken site_stats tables; fixed addwiki.php to use correct row id
- 04:04 Tim: unmounted dead NFS share /home/wikipedia/backup on bacon
- 00:51 Tim: Started copy of jawiki's external storage to yaseo.
- 00:47 Tim: Copied mswiki to yaseo
- 00:30 Tim: Copied thwiki to yaseo, now replicating. Images still need to be copied.
October 28
- 22:35 hashar: following samuel trouble 2 days ago, there are still some ghost articles on at least frwiki. Will manually fix that Oct. 29 if I get time.
- 21:49 Tim: LVS now in service in front of yaseo apaches. yf1010-1017 are now in service as apaches, they were previously idle but with apache installed. yf1018 is LVS, yf1019 is the experimental wikimaps installation and could be considered a spare load balancer.
- 21:42 Tim: noticed that perlbal was still taking a fair bit of load on bacon. Killed icpagent on bacon, increased icpagent delay time from 1 to 5ms on holbach.
- 21:03 Tim: Running sync-to-seoul, about to set up LVS on yf1018
- 18:35 mark: Installed a squid RPM without epoll (otherwise identical) on clematis, to compare memory leak behaviour
- 17:02 ævar: Turned off automatic capitalization on fowiktionary.
- 14:40 mark: srv8 Squid had crashed, restarted it. Please pay attention to this until we have a better solution!
- 06:52 Tim: copying dump of jawiki to henbane:/a/backup .
- 02:20 Tim: changed the password for wikiuser. DB load glitch experienced due to a migration bug.
October 27
- 17:16 ævar: disabled uploads on ndswiki [2]
- 05:43 various: killed evil special pages job that broke wiki
October 26
- 20:18 midom: rolled forward changes made to samuel on other db nodes.
- 19:18 mark: Routing problems fixed, switched DNS back.
- 19:00 mark: Routing problems in florida, but knams can reach out. Sent pmtpa traffic via knams.
- 04:41 kate: put lomaria and thistle into db service
- 02:40 kate: took suda out of rotation to turn it into fileserver. ixia is in rotation, thistle & lomaria are waiting for mysql to be set up
- 11:16 brion: installed easytimeline on wikitech ;)
- 11:00 brion: paused file copy onto bacon for peak hours
October 25
- 20:15 mark: restarted srv7's squid. Was crashed at 13:00
- 10:28 kate: ran post-installation setup on ixia, thistle & lomaria
- 7:00 brion: paused image copy from albert to bacon for the next few hours
- 6:00 Solar: Ixia, Thistle, and Lomaria, the three new db's are racked and ready!
- 5:00 Tim: fixed srv11
- 4:30 Tim: started HTML dump of all Wikipedias, running in 4 threads on srv31
October 24
- 20:12 various: fixed pascal
- 18:34 hashar: updated with the new wikidev URL.
- 12:10 mark: Got knams back up with iris as LVM load balancer, with no NFS mounts that can block it
- 11:30 Pascal went down
- 07:02 ævar: created bug, pih, vec, lmo and udm wiki
- 03:37 kate: uploaded the old images from wikidev
- 02:58 brion: Moved this site from wp.wikidev.net to
October 23
- 22:57 brion: starting trickle copy from khaldun to bacon
- 21:58 brion: shutting off bacon's broken mysql; clearing out its disk space and making an upload data copy
October 22
- 23:51 brion: hacked BoardVote to read from master; the boardvote2005 database is missing from suda and there was a lot of whinging in the database error log about it
- 23:22 brion: fixed group ownership on php-1.5 on yaseo servers (GlobalFunctions.php was unwritable)
- 22:25 hashar: zwitter fixed the issue (adding a live hack), merged back my changes.
- 22:00 hashar: broke site search by cvs updating and syncing extensions/LuceneSearch.php
- 05:37 brion: samuel caught up, back in rotation
- 05:18 brion: replication broke on samuel due to "Last_error: Error 'Lock wait timeout exceeded; Try restarting transaction'". Took out of rotation to fix
October 21
- 23:52 Tim: moved wikimedia button (the one in the footer) to the /images directory, to support offline browsing of the HTML dump
- 19:01 Tim: Increased tcp_rmem on henbane and yf1010, for faster copying.
October 20
- 06:13 brion: did '/sbin/ip addr add 145.97.39.155 dev eth0' on pascal; got one port 80 connection to go through to vandale, but others still refused
- routing problems in general; some level3 issue. pmtpa having connection problems, and freenode is splitting
- 05:56 brion: rr.knams.wikimedia.org (145.97.39.155) does not respond on port 80
- 05:04 brion: reopened ssh tunnel from yaseo to pmtpa master db and restarted replication on henbane. commons copy is catching up; hopefully kowiki will remain working
- 01:53 brion: investigating reports that kowiki is COMPLETELY BROKEN DUE TO DATABASE LOCK since 10 hours ago
- 23:59 mark: Florida squids all seem to run out of memory after a few days... memleak. Will have to investigate.
- 14:52 kate: setup LVS at knams, on pascal.. no failover yet
- 11:20 mark: mkfs'd /dev/sdb1 on srv10. Tried to rm -rf /var/spool/squid but this failed, probably due to a corrupted filesystem. Probably needs a reinstall/thorough hw check, but in the meantime, is running with 1 squid service ip and 1 cache_dir.
- 09:40 future-brion: data dumps scheduled to start on benet, srv35, srv36 pulling from samuel
- (This is the db-friendly dump with (most?) bugs fixed. May have some extra newlines in old text with \r\n in the raw database, will continue tracking down libxml2 problems later so future dumps will be clean.)
October 19
- 16:00 mark: Installed the squid RPM on all yaseo squids. Updated hosts-lists.php...
- 14:15 mark: Squid on will had crashed; restarted it.
- 06:44 brion: running dump tests on srv35 to confirm that bugs are fixed
- 03:09 ævar: Translated /home/wikipedia/conf/squid/errors/English/error_utf8.htm into Icelandic; someone with root access might need to run /home/wikipedia/conf/squid/deploy.
- No, they need to be put in the next version of the squid RPM, which can then be deployed... -- mark
- 01:18 brion: turned apache back off on srv24, srv27 and took them out of apache nodegroup, as avar claims tim said they shouldn't be apaches. [they seem to be memcached and external storage]
- 01:13 brion: turned srv24, srv27 back on; several gigs have appeared and recopy of settings file succeeded
- 01:00 brion: turned off apache on srv24, srv27: out of disk space
- 00:44 brion: added Wikiportal, Wikiproyecto namespaces on eswiki
October 18
- 11:40 Tim: dewiki and enwiki have been defragmented, unused columns and indexes have been removed. Now starting compression of jawiki.
- 08:04 Tim: compressOld.php is finished with en, I've now taken adler out of rotation to defragment tables. Running null alter table on dewiki.text first.
October 17
- 23:56 brion: added recently set up machines to mediawiki-installation nodegroup. (THEY WERE ALREADY IN APACHES GROUP. NOTHING SHOULD BE IN APACHES THAT'S NOT IN MEDIAWIKI-INSTALLATION, EVAR)
- 22:28 brion: set up gmond on srv49, appears in ganglia now
- 21:00 mark: Danny says that the 3 new DB servers have been delivered, and he dispatched Kyle for their physical installation tomorrow.
- 18:15 brion: stopped dumps due to confirmed bug[3]. srv35 and srv36 available for apache for now (added back to node group)
- 16:20 mark: Remounted /a on srv6 and clematis with the reiserfs nolog option, to disable journaling. Saves disk writes, just mkfs it when it crashes...
- 13:50 mark: Deployed the new squid RPM on all knams squids
- 11:40 mark: Deployed the new squid RPM on all Florida squids, except will, which is still running FC1. Can we please reinstall will?
- 08:47 brion: rebuilt apaches now online: alrazi avicenna friedrich goeje harris hypatia kluge diderot srv49
- 08:40 brion: enwiki full dump is being run semimanually with the prefetch on an older version. will want to manually fix up links when it's done
- 07:00ish brion: trying apache setup on alrazi; will mass-run on other machines soonish
- 01:21 brion: current state of dump runs:
- srv36: enwiki
- srv35: all other wikipedias
- benet: non-wikipedias
- They're set to pull table dumps and page+rev from adler, but should only touch occasional bits of text.
- 00:55 brion: took srv35 and srv36 out of apaches node_group so nobody starts Apache on them by accident. :D
- 00:30 brion: setting up for database dumps using srv35 and srv36. Domas, these should hit the dbs a lot less so please try not to kill them too hard! Thanks.
October 16
- 21:30 midom: added ariel into enwiki service, made from fresh dump with fresh ibdata!!!!!
- 17:20 Tim and Domas: deployed LVS-DR to load balance between the squids and the apaches
- Halved median miss service time, 1100->500ms!
- Please don't try to add apaches into the LVS realserver pool until I've documented the procedure. A simple mistake, like running the commands in the wrong order, could crash the site.
- 14:00 mark: The new squid is serving peak load at roughly 20% of the cpu usage of the old squid...
- 12:00 mark: Put clematis with squid+epoll rpm in production. In case of severe problems, just kill it and start the old squid.
- 02:22 ævar: Took srv26 out of /usr/local/dsh/node_groups/mediawiki-installation and /usr/local/dsh/node_groups/apaches, wasn't working.
October 15
- 16:20 mark: Clematis is now running my experimental Squid RPM, with epoll support and my HTCP patch. It's not pooled yet, because I want to discuss and test inclusion of other patches first...
- 14:45 mark: Depooled clematis as squid, because I want to use it for testing my new Squid RPM.
- 09:59 Tim: set up srv31 as an NFS server, exporting its /var/static directory; mounted it on benet
- 07:50 brion: noticed minor bug in PHP on AMD64
- 03:30 Tim: Set up ntpd on amaryllis, dryas, henbane, yf1000-1004, by copying configuration from yf1007, and running chkconfig ntpd on;/etc/init.d/ntpd start
October 13
- 23:54 ævar: Kate fixed the tingxi issue, woo.
- 23:52 ævar: I can't ssh to tingxi (10.0.0.12) which means language/LanguagePt.php is out of sync, nothing severe, but it will cause interface mismatches for pt*.wiki*
- 23:22 kate: reinstalled vandale and added it as a squid since it's not doing anything else
- 22:46 kate: reinstalled clematis and put it in squid pool
- 21:30 brion: fiddling with albert's MaxClients again
- 17:40 mark: Turned off log_mime_hdrs on all squids, as we're not using it anymore
- 17:00 mark: Added fuchsia as a squid in knams
- 15:00 mark: Clematis's disk has been replaced and should be fixed. Needs a reinstall...
- 14:00 mark: Implemented and deployed a new udpmcast daemon with forwarding rules on amaryllis and larousse. This should solve our purging problems with wikis on separate clusters.
- 13:00 midom: srv34,srv33,srv32 joined ExternalStore service as cluster3
- 07:00ish brion: holbach back in business.
- 06:28 brion: disabled Special:Makesysop steward bits until I get the database problem resolved. Still poking at holbach, skipping the enwiki bits.
- 06:10 brion: took holbach out of rotation; replication broke with what looks like a steward bot application
October 12
- 23:00ish brion: trying to restart dump process, because some idiot canceled them
- 22:30ish brion: updated IRC channels in squid error page
- 21:43 ævar: the code had some discrepancy due to some of it being cvs up'ed recently and some of it not, ran scap and resolved a live hack in includes/SkinTemplate.php; whoever wrote it might want to take a look.
- 21:15 jamesday: started gzip of first 50/50GB/25 days of samuel binary logs, had 41GB free. About 2GB/day used.
- 08:40 brion: updated internalwiki on lucene search index, restarted lucene daemons. had to clear out old logs from maurus, out of disk space.
- 07:22 brion: cleaning up dupe and missing site_stats rows; removed dupe 'warwiki' entry in all.dblist
- 05:37 ævar: Installing the CrossNamespaceLinks specialpage extension.
- 04:05 Tim: henbane and dryas were reporting high lag times, probably due to the low replication rate (it's only replicating commons and ko). The database was locked automatically when the lag was more than 30 seconds, which was usually. I increased the maximum lag to 6 hours.
- 02:20 Solar: Initialized raid and reinstalled OS on webster. Only eth0 is plugged in and is private.
October 11
- 22:00ish brion: changed project ns for plwikisource
- 19:55 kate: changed root password of zedler, someone who wants it should ask me or elian
- 17:45 Tim: When they got more load, all three pound instances started using large amounts of memory. Enough excitement for one night, switched back to perlbal.
- 17:30 Tim: we were having oscillating load between the three pound hosts, so I switched squid to round-robin, and cut the perlbals out of the list at the same time.
- 16:56 Tim: pound was reaching its fd limit during high concurrency ab testing, raising it to 100000 seems to have fixed it
- 16:08 Tim: brought dalembert, friedrich and harris into service as pound servers
- 09:44 brion: briefly stopped mailman to edit vereinde-l archives to remove improperly forwarded email
- 02:30 Tim: moved static.wikipedia.org to srv31, proxied via the squids.
- 01:55 Tim: did apache-restart-all to fix high memory usage
October 10
- 18:14 kate: fs corruption on srv10 again, moved its IPs elsewhere
- 14:41 Tim: wrote some new tools to allow srv31 to restrict its copying from albert to times of low NFS server load (<1300 req/s)
- 08:45 Tim: Installed ganglia on srv32-35 (did srv31 earlier)
- 06:29 Tim: Copying HTML dumps to srv31
- 03:41 Tim: Added new ganglia metric "nfs_server_calls" to albert and zwinger. It's a perl script, /usr/local/bin/nfs-gmetric
- 01:32 ævar: Reverted my changes in CVS; cvs up-ed, and synced the affected files, just in case.
- 01:07 ævar: Checked all the apaches for Language::linkPrefix() and it turns out they all had it (see /home/avar/report (1 = has the function; 0 = does not have the function))
- 00:46 ævar: Tried syncing again, same error, spooky, off to manually check the apaches.
- 00:30 ævar: cvs up and scap breaking the wiki, which should not have happened but did for some reason; the error was: Call to undefined function: linkprefix() in /usr/local/apache/common-local/php-1.5/includes/Parser.php on line 1232, but function linkPrefix was defined in the Language class. No problems were reported with syncing; applied a live hack to Parser.php to fix the issue, investigating.
October 9
- 21:51 midom: killed backups. haha. unkilled site. though adler is good boy, lots of RAM does not help with backups. serial reads do.
- 19:02 kate: disk failed on clematis. added mint as squid.
- 11:09 Tim: changed master for ko
- 08:46 Tim: copied ko upload directory to amaryllis. Set up dryas, with chained replication from henbane.
- 07:40 ævar: Installed extensions/Renameuser/Renameuserlog.php
- 05:10 Tim: restored 245 and 248 to srv10
- 04:28 Tim: removed bogus entries from zwinger's /etc/exports, with an RCS backup
- 03:13 brion: starting weekly data dumps on benet, srv35, srv36; pulling from adler for primary data. (live; so table dumps will be slightly inconsistent. xml dumps are self-consistent internally.)
- 02:50 Tim: srv10 down, moved virtual IPs: 245 to srv5, 248 to will and 210 to srv7
- 02:30 jeronim: added missing mount points /mnt/upload and /mnt/wikipedia on humboldt and some machines in the apaches and mediawiki-installation groups
- 02:23 Tim: Changed squid configuration to have no-query for albert. This might reduce the latency some people were experiencing when requesting images.
October 8
- 22:52 ævar: Ran a script (/home/avar/3631.sh) to confirm that bug 3631 wasn't exploited on any wiki besides enwiki, it wasn't.
- 22:29 ævar: de-sysopped myself on enwiki
- 22:27 ævar: sysopped myself on enwiki and banned the users using exploiting bug 3631
- 08:22 brion: removed 'srv9' dupe entry in zwinger exports; for some reason srv9 couldn't mount with that in place (the ip is also in)
October 7
- 15:28 jeronim: turned off and disabled swap on knams squids (clematis hawthorn iris lily mayflower ragweed sage)
- 13:34 ævar: created ilowiki
- 11:49 jeronim: chmod/chowned zwinger:/usr/local/etc/powerdns/langlist-cnames to 664 root:wikidev on avar's request
- 06:41 ævar: / on zwinger filled up (reported by jeronim) I deleted an old log I didn't need anymore freeing 2GB, more stuff needs to be cleaned out still.
- 02:49 ævar: Added a live hack to Special:Export, a notice explaining that exporting of full histories is disabled, it can't be translated, boo hoo;)
- 00:05 brion: changed pawiki sitename/meta namespace to 'ਵਿਕਿਪੀਡਿਆ'
October 6
- 07:20: midom: re-enabled steward interface
- yesterday kate: installed solaris on vandale because mysql wanted to test something. finished with it now, should have linux put back.
October 5
- 18:57 brion: image server has been very slow lately. fixed a broken thumb file or two which had a subdirectory in the way (one on the wikipedia portal was being requested *very* often, producing extra redirect load)
- 05:25 Solar: Rebooted srv26, bumped temp. threshold to 80C. Will investigate further.
- 05:20 Solar: webster is back up for now, but will fail again. Will call SM to get replacement drives.
- 02:56 Tim: srv24 in rotation as part of cluster2. Restarted compressOld.
- 01:00 Tim: Setting up srv24 as an external storage server, to replace srv26 which is down again. Stopped compressOld and stopped slave on srv25 for data directory copy.
October 4
- 23:50 Tim: Started compressOld.php, started mysqld on srv26.
- 23:30 Tim: restarted evil resource-eating program (with kate's permission)
- 20:02 kate: stopped evil resource-eating tim program on albert started ~ 06:20.
- 14:40 mark: Increased DB load on samuel in an attempt to solve DB availability problems
- 13:13 Webster broke.
- 06:22 Tim: HTML dump post-process running on albert. It'll spend most of its time in sed, with a perl controlling script.
- 05:13 Tim: static HTML dump of English Wikipedia is pretty much finished. I'm currently running a huge find command on albert, to get a list of files to post-process.
- 00:20 Solar: Uploaded pictures. Take a look at User:Solar
October 3
- 23:07 brion: srv28 shutdown broke dewiki and enwiki dumps, have to restart them. non-wikipedias finished before this.
- 19:45 Solar: srv11 and srv28 moved to new racks for power distribution requirements.
- 03:30 jeronim: pmtpa squids were mostly running with max FDs of 1024 and starving, so rebuilt them with limit of 8192 and restarted
October 2
- 21:41 brion: taking srv35 out of apache loop to run additional dump processing
- 13:10 brion: running wikipedia backups from bacon via srv36, nonwikipedia backups from bacon via benet
- 12:57 brion: replication halted on bacon due to missing tables on the new wikis (napwiki, warwiki etc) -- this will need to get fixed. in the meantime doing dumps from other wikis ...
- 09:30 brion: srv31-35 in apache service (in perlbal list)
- 08:45 jeronim: srv31-35 ready for apache deployment
- 07:40 Tim: fixed exif bug () and deployed the updated tree on all florida apaches
- 06:30 brion: running cleanupTitles.php on various wikis
- 00:10 Tim: Running fixSlaveDesync.php on en.
October 1
- 21:07 Tim: Told dalembert to stop echoing its syslog spam to zwinger and larousse. Apparently temperature warnings were appearing in terminals on larousse.
- 19:40 Tim: Added Internode proxies to the trusted XFF list
- 11:00 brion: bacon and adler catching up last couple hours' data
- 08:30 brion: stopping bacon, adler to copy current data over to bacon
- 08:15 brion: continued replication catchup on bacon
- 08:10 brion: stopped backups; benet's out of space (going to do cleanup) and I'm testing an improved backup dump script that eliminates the overhead of mwdumper on the initial dump-split-compress job.
- 08:07 Tim: re-enabled Special:Makesysop, minus steward interface
September 30
- 19:00-20:00 mark: Deployed the fixed HTCP-CLR patch to all squids, and restarted them
- 19:18 ævar: disabled PageCSS because of potential XSS issues.
- 16:06 ævar: Installed the PageCSS extension on the cluster for per-page CSS.
- 13:12 Tim: installed apache, php etc. on dalembert, by modifying /home/wikipedia/deployment/apache/prepare-host until it kind of worked. Not sure if it's all set up right, but it's probably good enough for dumpHTML, which is what I'm using it for.
- 12:30 Tim: installed gmond on various reinstalled machines
- 07:22 midom: adler in service
- 01:25 brion: did some scripted despamming crosswiki (some deleted pages by '127.0.0.1'...)
- Solar: Replaced ram in srv42
September 29
- 19:50 mark: Fixed a memleak in my HTCP CLR squid patch, and testing it on clematis. If it works well, I will deploy it to all other squids...
- 17:52 Tim: made some more tweaks to . Now it displays properly in IE, and it works with small screens
- 17:12 Tim: Returned text on to a comfortably readable size. Apologies to optometrists everywhere for the reduced pay cheque.
- 07:25 brion: Ran initStats on warwiki, napwiki, ladwiki.
- 05:15ish brion: ntp setup on ariel
- 05:00 jeronim: clean fc3 on ariel; it has had a drive swapped and is hopefully not faulty now
- 04:30 Solar: srv33, srv34, and srv35 have ip's and are ready for service. srv32 and srv31 are pending a bomis server move
September 28
- - jeronim: srv49, alrazi, diderot, hypatia, avicenna, goeje, harris, dalembert, humboldt, kluge, friedrich all freshly set up with fc3 - but no ntp setup, and no apache. alrazi's old host keys lost.
- 19:00 jeronim: on zwinger, moved squid errors directory and sync-errors back into /h/w/conf/squid from /h/w/conf/old-squid, and updated sync-errors to also sync to lopar, yaseo, and knams. Updated all squids to use shiny new error page from mark_ryan.
- 10:00 mark: Added ragweed back to the knams squid pool because of overload on the other squids
- 09:10 brion: dewiki backup running on benet while others continue (from bacon 20050921)
- 07:20 brion: backups switched to use bzip2 for xml dumps; 'articles' instead of 'public' name change; image dumps disabled
- 06:52 brion: starting bzip2 filter/output of 20050924 enwiki dump on srv36
- 01:00 Solar: alrazi avicenna diderot friedrich goeje harris hypatia humboldt kluge srv42 srv49 are back on the netgear switch
September 27
- - jeronim: dhcp still not working so I've asked Kyle to put most fc2 boxes on a different switch
- 23:53 jeronim: commented out icpagent in /etc/rc.local on dalembert in case it's rebooted
- 22:10 mark: The new switch appears to be Fast Ethernet only! It's accessible on 10.0.1.1. I configured some parts of it to make it somewhat usable: all ports in access mode, vlan 2.
- 20:15 midom: disabled steward interface, needs rewriting to select databases instead of specifying their names directly in queries -- breaks replication
- 18:00 midom: ariel gone down:
  LSI MegaRAID SCSI BIOS Version G112   May 20, 2003
  Copyright(c) LSI Logic Corp.
  HA -0 (Bus 3 Dev 1) MegaRAID SCSI 320-2 Standard
  FW 1L26  DRAM=64MB (SDRAM)
  Battery module is present on adapter
  Following SCSI ID's are not responding
  Channel-2: 0, 1, 2
  1 Logical Drives found on the host adapter.
  1 Logical Drive(s) Failed
  1 Logical Drive(s) handled by BIOS
  Press <Ctrl><M> or <Enter> to Run Configuration Utility
- 08:53 Tim: Stopped icpagent on dalembert for now, pending examination of pound's problems
- 07:11 brion: added wikipedia.nl alias in powerdns in prep for changing master servers for that domain (jason has that info)
- 02:54 brion: new8 machines (except srv50) don't have ntp working, still. punching at it again (were up to about 15 seconds slow)
- copied /etc/ntp.conf and /etc/ntp/step-tickers from srv50 to the others in the group, ran /etc/init.d/ntpd start
- 02:40 brion: starting test cur-only dewiki dump to double-check dump processing bugs while other backups continue
September 26
- 21:59 brion: killed commons image dump again; too slow, too big. need to rework that...
- 19:30 jeronim: turned off swap and commented it out in /etc/fstab on all pmtpa squids after kate noticed srv7 was swapping and restarted its squid
- 17:30 jeluf: skipped some insert statements to enwiki on the slaves not replicating enwiki. Steward tool running on metawiki tries to write to enwiki and mysql replicates these transactions.
- 09:30 brion: stopped bacon again, running backup of everything but enwiki/dewiki (backdated to 20050921) from bacon
- 09:22 brion: added refresh-dblist script to update the split .dblist files in /h/w/c
- 08:46 brion: started replication catchup on bacon (about 5 days behind)
- 08:40 brion: restarted mwdumper on the enwiki dump, which had broken with a funky file locking problem
- 07:37 Tim: deleted srv27 binlogs 020-026, the rest are needed for srv26 when it starts working again.
- 07:37 brion: locking fiwiktionary for case conversion
- 04:49 brion: turned off srv41, srv26 apaches due to segfaults; turned off exif log for commons due to giant >2gb log file
- 01:04 brion: created wikiro-l, wikimediaro-l lists, iulianu as list admin
September 25
- 13:45 hashar: made nap language inherit from italian language instead of english (rebuildMessages.php nap --update).
- 13:10 hashar: created nap, war & lad wikipedia using the updated howto Add a language. Thanks Tim for the technical assistance.
- 06:17 kate: copying bacon's mysql data to zedler
September 24
- 23:44 brion: changed squid config to use bacon and holbach's .wikimedia.org names instead of .pmtpa.wmnet on kate's advice
- 23:31 brion: pound didn't seem to be working; 503 errors, other problems, was unkillable without -9. unable to run site on holbach's perlbal; squids couldn't find it? restarted pound and icpagent on dalembert, working now
- 23:23 brion: tried restarting pound. (there's also weird cyclic load between rose and anthony every few minutes)
- 23:15 brion: slow site performance reported; ganglia showed unusually high load on srv50, srv37, dalembert. Stopped dumpHTML on dalembert (pound machine), restarted apache on 50 & 37
- 22:12 brion: srv8 failed on squid restart due to broken symlink to config file. added srv8 to pmtpa squid lists for new config list and relinked its config file
- 20:40 midom: webster is up with non-enwiki dbset, ariel is up with enwiki only.
- 19:36 kate: zedler is up with mysql installed; waiting for replication to be sorted out somehow
- 18:20 Tim: deployed new squid configuration generator
- 16:11 jeronim: diderot, harris, alrazi, avicenna out
- 13:30 jeronim: kluge & friedrich out too, for reinstall
- 12:12 jeronim: took goeje out of mediawiki-installation dsh group; putting fc3 on it
- Tim: stopped icpagent on bacon. Load balancers are now holbach (perlbal) and dalembert (pound)
- 07:55 brion: started enwiki xml dump with five parallel readers; experimental (on srv36 pulling from samuel)
- 07:04 brion: trying to fix ntp again on humboldt and new8 machines
- 02:14 brion: disabled Special:Undelete toplevel list; code needs rewriting or just dump it for Special:Log (added link as temp hack)
September 23
- 21:17 brion: added /^Lynx/ to unicode browser blacklist
- 15:37 Tim: Deployed pound/icpagent on dalembert. It is currently running alongside perlbal instances on bacon and holbach.
September 22
- 23:18 brion: turning off capitallinks on tawiktionary
- 18:48 brion: updated pmtpa squid error messages to remove obsolete openfacts and wikisearch references. master copies now in /h/w/conf/squid/errors
- 18:21 brion: wikinews backup done. enwiki backup halted due to some nfs/large file problem. investigating
- 11:58 Tim: brought srv26 back into service
- 11:28 Tim: started deleting thumbnails still in their obsolete locations, 180,000 to delete.
- 09:10 brion: starting *wikinews backups on srv36 pulling from bacon. [installed mwdumper]
- 08:25 brion: running enwiki backup on srv36 pulling from a halted bacon, saving on benet
- 08:08 brion: taking srv36 off perlbal nodelist to try running backups with it
- 07:26 brion: adding new machines to perlbal; ready for service... hopefully
- 07:00 Tim: restarted dumpHTML.php, I had stopped it for a while due to high DB load. I'll stop it again when we get closer to peak time.
- 06:45 brion: running setup-apache script on remaining new8 machines (srv36-41, srv42-48)
September 21
- 23:54 brion: recompiling librsvg with correction to security fix; it had accidentally disabled data: urls as well
- 20:35 brion: set european tz for nlwikimedia
- 18:46 Solar: webster and ariel have rebuilt raid and FC3 installed although they do not have IP's. They are accessible via console.
- 17:00 midom: disabled all bloat in albert's http configuration (mod_perl, php, jk, ssl, ...), that freed lots of memory and allows more effective caching of directory trees and file metadata. And yes, it solved a bit performance issues (uh oh, yet another image server overload).
- 08:59 brion: disabled wikidiff PHP extension sitewide; there are numerous reports of bad diff output in some cases, and dammit alleges it may be crashy or futex-y. InitialiseSettings.php is set to enable it in the wiki if it's on in php.ini and ignore it if not.
- 07:50 brion: tim is doing ongoing debugging on srv50 trying to identify source of segfaults
- 07:00 brion: installed patch for apache rewrite bug on amd64, but still getting segfaults on srv50
- 06:08 brion: clocks are wrong on new8 boxen; working on correcting
- 06:00 brion: setting up APC instead of Turck on srv50 experimentally
- 00:35 brion: srv50 back out; some apache child process segfaults, which don't look too good
- 00:34 brion: srv50 back in
- 00:11 brion: srv50 out for further adjustments (tidy, proctitle)
- 00:09 brion: putting srv50 into apache rotation to test it out before installing all others
September 20
- 23:20 ævar: changed wgSitename to Vichipedie on furwiki
- 23:14 ævar: ran php namespaceDupes.php --fix --suffix=/broken furwiki to fix namespaces on furry wiki
- 22:39 ævar: changed $wgMetaNamespace on furwiki from Wikipedia to Vichipedie.
- 20:14 brion: reverted Parser.php change temporarily due to reports of massive template breakage
- 19:45 brion: fixed internal wiki (whoops, typo in config change last night)
- 19:13 brion: removed bogus entries from master robots.txt ("/?", "/wiki?", "/wiki/?")
- 14:16 Tim: Disabled context display for full text search results as an emergency optimisation measure. It was taking more than its fair share of our precious DB time. $wgDisableSearchContext in CommonSettings.php.
- Note: This caused a large reduction in CPU usage on the master DB server, from 100% down to 70%. In the future, it might be worthwhile to ensure text for context display is loaded from the slaves.
- 10:30 brion: doing experimental software installs on srv50 [amd64]
- 10:08 brion: Added sync-apache script to rsync the apache config files from zwinger to pmtpa apaches. Don't forget to use it after making changes and before restarting apaches!
- 09:30 brion: moving apache configs a) into /h/w/conf/httpd subdir, and b) into local copies on each server which will be rsync'd
- 08:05 brion: new apache configs on all
- 07:23 brion: fixed up apache configs on *.wikimedia.org
- 07:00 jeronim: added acpi=off panic=5 to adler's kernel params and rebooted, because apparently there are some ACPI problems, and so that it reboots on kernel panic instead of freezing
- 06:53 brion: cleaning up apache config files; replacing ampescape rewrite usage with aliases to remove our patch dependency (tested on wikimediafoundation.org)
- 06:40 jeronim: installed same kernel on adler as is on samuel and set it as default; also samuel's default kernel was changed to a newer one (by yum?) in /etc/grub.conf, so changed it back to match the current kernel
- 05:30 brion: put suda back in rotation; toned down its share of enwiki hits a bit
- 05:02 brion: adler crashed again at some point
- 02:36 brion: adler was rebooted by colo; running innodb recovery
- 01:58 brion: adler is down, seems to have crashed (panic bits on scs output). taking out of rotation too
- 01:45 brion: lots of delays trying to open suda from wiki; taking out of db rotation
- 01:11 brion: halted backup; benet ran out of space. en_text_table.gz is much larger than expected (49gb), perhaps external storage has not been used correctly as expected? will remove file and continue.
September 19
- 22:10 ævar: uninstalled nogomatch on enwiki, who's going to sort through all that gibberish data? Not me!
- 21:07 brion: rebooting new8 machines to make sure they're running current kernel
- 21:02 brion: new8 group status: srv47 online but borked; 31-35 and 49 offline. others to be set up as apaches
- 20:46 brion: running special pages update on frwiki by request... will update others on cronjob if there's not already one?
- 19:40 mark: Replaced udpmcast.py by a properly daemonized version. Set it up at knams to forward to a multicast group instead of all unicast IPs forwarded by larousse...
- 18:45 mark: Removed miss_access line from knams squids to solve the cache peer errors. Repeat at yaseo if it works...
- 13:49 ævar: Installed the nogomatch extension experimentally on enwiki.
- 08:00 Tim: Removed all NFS mounts from srv1's fstab. Set up a simple /home directory on its local hard drive.
- 06:06 kate: reverted root prompt on zwinger so it's not invisible on a white background
- 04:47 James: stop slave on bacon while dumper is running. Slave will restart when done.
- 02:45 Tim: changed root prompt on zwinger. Started sync-to-seoul, with -u option this time so we don't accidentally overwrite stuff
- 01:50 brion: seems to be mostly back up at this point. boot seemed to be aided by disabling named and letting it lookup from albert
- 01:36 brion: zwinger boot still going on; nfs init is *very* slow doing the exportfs -r; seems to be slow dns lookups
- 00:38 brion: jeronim did this: [root@zwinger srv38]# reboot - unfortunately it was not srv38, but zwinger.
- 00:05 brion: mounted /home on srv1; couldn't login, caused sync-file failures
- 00:05 brion: enabled Nuke extension on meta & mediawiki.org
September 18
- 14:00 jeronim: rebooted zwinger by mistake and it needed a manual reset by colo staff to come back up. Site was offline for about an hour.
- 04:34 brion: vandale kernel panic, frozen
- 04:30 Solar: srv36-srv50 are racked, have ip's, and are ready for production
- 03:10 Tim: moved compressOld.php to dalembert (where dumpHTML.php has been running), on complaints that it was causing problems on zwinger.
September 17
- 22:17 brion: running unique-ip counter on fuchsia with saved logs (into uniqueip table on vandale)
- 22:02 brion: disabled disused info-de-l list by request of list admins
- 11:05 brion: ran initStats on all wikisources to initialise those not already set
- 07:06 brion: canceled upload dump for commons backup due to size and slowness; too big to fit
- 06:30 jeronim: on larousse, removed fedora netcat and installed from source into /usr/local
- 04:30 Tim: used ntpdate -u pool.ntp.org to set the times on all the yaseo machines, some were a long way out. Then set all their timezones to UTC. This apparently caused ganglia to think yf1000 and yf1002 were down, fixed by restarting the local gmond.
- 04:10 Tim: Started replication on henbane
- 01:10 brion: enabled wikidiff on all wikis. (can be disabled selectively w/ wgUseExternalDiffEngine in InitialiseSettings)
- Tim: Set up mysql on henbane, made a consistent dump of kowiki and commonswiki using bacon, copied dump to henbane ready to start replication
September 16
- 22:20 Tim: started mysqld on srv26, it had been off for 12 hours or so. The compression script had been running all that time, srv26 caught up to the master without incident.
- Colo (Solar):
- supposedly bart is brought back up
- borrowed HP switch connected to gi0/4 on the cisco
- moreri was moved, and is trying to netboot (fails)
- 10 of the 20 new servers have been racked and wired to the borrowed HP switch, but don't have IPs yet
- 11:37 brion: updating sitenames on he, el, ru wikisource
- 11:30 brion: started backup run
- 03:17 brion: frwiki reimport done
- 02:47 brion: frwiki reimport started
- 02:35 brion: jawiki reimport done
- 01:49 brion: started jawiki reimport
- 01:33 brion: bacon catching up; suda is fine as it is partial mirror
- 01:29 brion: took bacon, suda out of rotation for further investigation
- 01:23 brion: nlwiki open for editing
- 01:03 brion: reimporting nlwiki on samuel
- 00:41 brion: nl/fr/ja dumps done (in /var/backup/private/recovery). going to try reimporting soon
- 00:16 brion: running attachLatest on *wikisource
September 15
- 23:14 brion: 3 dumps from adler done; doing extra backups from samuel too. setting adler to read-only
- 22:37 dumping nlwiki, frwiki, jawiki databases from adler onto sql files on benet
- 22:18 put load back on samuel for enwiki with adler disabled. fr, nl, ja wikipedias are locked while we work this out
- 22:09 commented out adler from db.php; adler appears to be misconfigured and all kinds of breakage is going on. it's not read-only, and has some revisions that others don't have
- 21:56 brion: took load off bacon (was 100 load on fr, nl, ja; nl and fr reporting weird editing problems possibly freak lag problems, and it was consistently lagging a few seconds at least)
- 17:25 mark: Setup IPsec between bacon and vandale. Who wants to setup replication?
- 16:50 mark: Altered geodns: pointed Malaysia at yaseo, and Israel, Turkey, Cyprus at knams
- 13:04 Tim: Shutting down apache on dalembert temporarily so that I can use it for HTML dump testing and generation
- 12:35 Tim: Restarted compressOld.php, it stopped when I shut down bacon to do the copy to adler.
- 11:30 mark: Restarted some knams squids to increase FDs, changed /etc/rc.local startup script
- 11:15 mark: Deployed squid on yf1003 and yf1004, and added them to the DNS pool
- 11:10 mark: Recompiled squid on yaseo to increase filedescriptors to 8192 and restarted all squids with 4096
- 07:37 brion: running importDumpFixPages.php on wikisources to fix bogus rev_page items
- 02:30 kate: ariel's down
- 02:29 brion: recompiling mono 1.1.9 on benet for xml bugfix
- 00:15 brion: removed humboldt and hypatia from mediawiki-installation node group, neither has port 80 on:
- humboldt prompts for password, not configured correctly?
- hypatia shows host key changed; was reinstalled?
- 00:10 brion: disabled MWSearchUpdater plugin as the daemon is broken; briefly broke the wiki due to bad include_path; need to fix config for MWBlockerHook to make sure the path is right even w/o the lucene include
September 14
- 21:30 mark: Setup log rotation at yaseo to knams, routed japanese and chinese clients to yaseo squids.
- 20:30 midom: adler online, bacon catching up
- 20:15 mark: Deployed squid on yf1001, and routed Korean clients to the Florida squid cluster.
- 18:15 mark: Deployed squid on yf1000.
- 18:10 mark: Wrote a YASEO squid deploy script /home/wikipedia/deployment/yaseo-squid/prepare-host (yahoo cluster only, should I put it at florida?) after Tim's apache prepare-host script
- 17:48 ævar: de-opped myself on ruwiki and stopped my revert bot, the russians hate me even more now.
- 16:30 mark: Set up a squid on yf1001. Same setup as knams, except it's in /usr/local/squid as in florida. Adapted florida's squid and mediawiki configs accordingly.
- 13:19 ævar: ran INSERT INTO user_groups VALUES (1165, "sysop"); on ruwiki to make myself temp. sysop to fix the MediaWiki: fsckup.
- 11:15 brion: halted nlwiki partial temp backup as enough was run to test problem
- 10:41 brion: running another nlwiki backup to get raw dumpBackup.php output for testing
- 10:39 brion: halted old backup sequence (at nlwiki, with a mystery breakage in output that needs examining)
- 10:33 brion: hacking dumpBackup.php to load php_utfnormal.so extension (not yet enabled sitewide)
- 10:05 brion: running kowikisource and zhwikisource imports on formerly broken parts
- 08:55 brion: updated messages on jawikisource
- 08:30ish brion: updated messages on *wikisource
- 01:30 jeronim: access to yaseo console server should be back hopefully within a few hours - eam is dealing with it
September 14
- 13:32 Tim: Shut down mysql on bacon, started copying data directory to adler
September 13
- 23:23 brion: set logo on dewikiquote to commons version
- 23:ish brion: installing mono 1.1.9 with xml patch on benet to fix future dumps ([5])
- 17:23 ævar: Logging Exif debug information to /home/wikipedia/logs/exif.log using wgDebugLogGroups.
- 16:40 jeronim: yf1000 - yf1004 are all set up with reiserfs now. The only yaseo machine not working is yf1013 which is in an unknown state as the console server (konsoler04.krs.yahoo.com (10.11.1.186)) is unreachable.
- 16:18 Tim: Started moving some text to cluster2, starting with frwiki.
September 12
- 11:59 brion: killed search update daemon; going to replace this (again) with a more robust queuing system
- 15:00 or so kate: upgraded perlbal to 1.37
- 13:24 jeronim/kyle: lots of machines connected to SCS, port labels corrected. The APC has apparently vanished - Kyle couldn't find it.
- 09:40 brion: installed ICU 3.4 on zwinger and mediawiki-installation from RPMs built from the ICU-provided spec file. Source and binary rpms in /home/wikipedia/src/icu
- 09:34 brion: fixed misnamed krwikisource -> kowikisource db
- 8:50 Tim: rebuilt interwiki tables
- 02:15 brion: replaced old php.ini on zwinger with symlink to the common one. added /usr/local/lib/php back into the default include_path (for PEAR stuff sometimes used)
- 01:04 brion: blocked leech enciclopedia.ipg.com.br
September 11
- 22:05 brion: trying batch clears in parallel overloaded zwinger; canceled, running in serial again
- 21:35 brion: running batch operation to remove bad cached messages
- 21:00 brion: reconfigured blocker daemon to log to samuel. had to set up permission grant again on samuel
- 18:19 Tim: finally managed to fix the message problem, except for some erroneous values stored in cache
- ~18:00 ævar: To get interwiki links working on hrwikisource: sourced the output of maintenance/rebuildInterwiki.php and sourced maintenance/interwiki.sql on all wikis, some interwiki prefixes appear to have been lost in the process e.g. bugzilla: (only mediazilla: exists in interwiki.sql) looks like we need better interwiki update scripts...
- Don't run interwiki.sql, under any circumstances. Add new prefixes to m:Interwiki map. -- Tim 08:52, 12 Sep 2005 (UTC)
- 16:05 Tim: switched master to samuel. Adler asks for root pw after reboot due to failed fsck.
- 15:10 Adler crashed. Tim and JeLuF on the scene, wiki switched to read-only mode
- 14:59 Tim: Non-default language message caching completely f****d up. Blank messages everywhere
- 07:10 brion: now using blocker list
- 07:00 brion: installed limited librsvg on apache cluster, svg back on
- 15:40 Tim: Installed apache, php, turck and mediawiki on yf1005. Put all required commands in /home/wikipedia/deployment/yaseo-apache/prepare-host. Still needs database, memcached and mediawiki configuration.
- 05:05 brion: restarted MWUpdateDaemon, hung again at 1gb used memory
- 02:38 brion: disabled svg for further security work
- 01:20 brion: reconfiguring wikisource to allow en.wikisource.org to work (hr ja kr sv zh en now imported)
- 01:09 brion: installed librsvg 2.11.1 on the apaches; it's in /usr/local. (old librsvg versions seemed to muck up text pretty bad)
September 10
- 22:49 brion: importing wikisource nl ro ru
- 22:34 ævar: deinstalled the wgDebugLogFile on commonswiki, got enough debug output to see if anything was wrong.
- --:-- jeronim: yaseo stuff:
- reinstalled FC4 on yf1000, yf1001, yf1003, yf1004 with reiserfs
- reinstalled FC4 on dryas & henbane with 10GB ext3 root partition and the bulk of the disk as jfs on /a
- rsyncing /home, /tftpboot, /root, /var/www, /usr/local, and /etc from amaryllis to dryas in preparation for reinstalling amaryllis with reiserfs. It's a script, /root/amaryllis-rsync.sh, running in a screen on dryas.
- 14:14 ævar: installed a wgDebugLogFile for commonswiki in /home/wikipedia/logs/commonswiki.log to monitor Exif debug output.
- 13:26 ævar: ran maintenance/deleteImageMemcached.php on all wikis fixing bug 3410
- 10:44 brion: cleaning out old mysql data from benet to free up space for current backups (40 days+ out of date, not too useful)
- 10:00 brion: restored working frame-breakout code (pending cached wikibits.js)
- 07:58 Tim: moved some ancient rubbish from /home/wikipedia/htdocs to /var/backup/home/wikipedia/htdocs
- 07:10 brion: running data split for additional wikisource languages
- 02:40 Tim: Changed names of Seoul machines
- 02:15 brion: set edit rate limit for new accounts to same as ip rate limit
- 01:40 brion: installed rsvg (librsvg2) on mediawiki-installation machines, enabled SVG uploads
September 9
- 06:30 brion: restarted stalled de,en dumps
September 8
- 19:18 brion: checker daemon running
- 10:50 brion: setting up vandal checker daemon on larousse
- 10:42 hashar: enabled subpages for portal (100) and portal discussion (101) on dewiki.
- 7:45 hashar: added two namespaces for frwiki : 100=>Portail, 101=>Discussion_Portail .
September 7
- 22:00 jeronim: fixed avar's login problem on servers in the mediawiki-installation group -
- 21:30 jeronim: killed everyone's ssh sessions and sshd on zwinger (sorry)
- 10:25 midom: After Tim put the memcached patch live, the site's sessions were switched from NFS to memc.
- 06:54 brion: killed stalled backup -- memcached send hang for the last day or so. It's continuing w/ dkwiki; will rerun stalled dewiki and enwiki
September 6
- 19:55 brion: tgwiktionary to lowercase
- 05:30 brion: set up experimental upload verification hook
- 04:02 koko: removed firewall
September 5
- 12:40 brion: set up to shut down search builder daemon every hour (at 47 minutes) to protect against memory leaks in builder; search-update-daemon wrapper script set to auto-restart 5 seconds after shutdown/crash of the daemon
- 09:05 brion: rebuildMessages.php --update on all wikis to add various new messages
- 06:09 brion: starting mass lucene updates of pages edited in august
- 05:18 brion: lucene back-deletions done, reoptimizing build index
- 01:10 brion: search updater up; running queued deletions
- 00:45 brion: vincent back in active search rotation
September 4
- 23:55 brion: splitting lucene config to lucene.php. putting coronelli on search, with optimized index
- 19:30 jeronim: created helpdesk-l
- 17:20 jeronim: fuchsia does not boot on the latest kernel (see below), but it does boot on the 2.6.11-1.33_FC3smp kernel, so switched it to boot that kernel by default
- 16:27 mark: Because of cascading incidents in knams, we moved all traffic to florida and lopar via DNS.
- 14:30 jeronim: fuchsia was dead or very close, so power-cycled it using the IPMI. It is broken.
- 13:16 Tim: made /home/wikipedia/lib/install.sh ignore x86_64 machines, added a part to clean up rubbish left in /usr/lib, then ran it everywhere with dsh -a -f
- 04:20 Tim: reinstalling PHP 4.4.0 with exif support. Using php-upgrade-440, which calls the new script /home/wikipedia/lib/install.sh to set up shared libraries in /usr/local/lib.
September 3
- 18:40 jeronim: removed body of mailman archive messages here and here on yannf's request
- 06:40 brion: relaunch updated backup script with some of the broken bits fixed.
- 04:50 Tim: Finished benchmarking PHP 4.4.0, see GCC benchmarking. Now deploying the new binaries, from source tree /home/wikipedia/src/php/php-4.4.0-gcc4
- sometime brion: added .log to text/plain on benet's lighty
September 2
- 12:00 brion: ran backup test on aawiki using the new dump splitter and partial new backup script. (script is in ~brion/run-backup.sh if anyone wants to examine it)
- 07:19 Tim: compiling GCC 4.0.1 on zwinger. It will be installed with a program suffix, so gcc is still the old compiler, and gcc-4.0.1 is the new one. Source directory is /home/wikipedia/src/gcc/gcc-4.0.1, build directory is /home/wikipedia/src/gcc/gcc-4.0.1-build.
- 06:21 Tim: removing hypatia from perlbal nodelist for an hour or so, for some benchmarking
September 1
- 07:45 brion: set sitename/meta namespace on mtwiki
- 07:00 brion: running cleanupTitles.php to rename broken pages. Will be at Special:Prefixindex/Broken/ at each wiki.
August 30
- 17:30 jeronim: made a robots.txt on larousse (noc/kohl) to disallow some dynamic pages and a few others
- 16:40 jeronim: created wikimediapl-l
August 29
- 21:30 brion: blocked wissens-schatz.de for remote loading
- 17:30 jeluf: anonymized a name in the archive of wikide-l
- 11:30 brion: running a batch job checking for invalid titles on various wikis (cleanupTitles). shouldn't interfere with anything, making no changes.
August 28
- 22:15 brion: locking plwiktionary for capitalization change
- 15:18 hashar: created wikimk-l mailing list.
- 15:15 mark: Brought mayflower back up. Repaired the filesystems, and rebooted it. It was reporting lines like
Aug 28 04:22:34 mayflower kernel: swap_free: Bad swap file entry 7800007ffffff00f
- 14:30 mark: Another Kennisnet V-20 went down, this time it was mayflower dying somewhere this morning. Depooled it... As it's not critical and we still have SP access, I will have a look at it first.
August 27
- 00:45 brion: turned on wegge's experimental watchlist bot thingy on dawiki
August 26
- sometime: lots of data imported on wikisources
August 25
- 16:02 jeronim: added fc-mirror.wikimedia.org DNS entry for fedora mirror
- fc-mirror 1H IN CNAME albert
- 15:40 hashar: created wikials-l mailing list. TODO: delete /h/w/htdocs/mail/.index.html.sw(o|p) (swap files by fire).
- 19:00 mark: PowerDNS on pascal appeared corrupted. Most probably because of an overlapping zones problem in bindbackend (not bindbackend2). I integrated rev.wikimedia.org into the wikimedia.org to evade that.
- 16:09 hashar: blacklisted www . izynews . com on florida squids (using acl badbadip src 62.75.174.182/32). Need to be done on kennisnet and paris cluster too.
- 11:00 brion: set up https on kohl. (old ssl key files backed up; wasn't using the established password, nobody knew what it might have been)
- 07:05 brion: rebuilt interwiki tables; using correct interwikis for the new wikisources.
- 06:51 brion: added sr.wikisource.org
- 02:02 hashar: updated in HEAD LanguagePt.php from meta. Watch out when synchronising.
August 24
- 14:04 hashar: disabled lucene search. Daemon run on maurus but timeout / dont give any output.
- 04:00 Jamesday: started nice bzip2 for slow query log and first 72 binary logs on adler to free 40GB of disk. Can archive them on another box later.
- 00:43 brion: trying out an older version of MWDaemon on vincent to see if memory leak is a new code problem
August 23
- 16:17 jeluf: removed 10.0.0.17 (vincent) from MWDaemon pool. Was always reporting errors.
- 09:39 brion: added
- 05:23 Tim: Reports from users of frequent "connection refused" errors reported by the browser. Investigated, found squid was crashing once every 10 minutes or so, on 4 out of 6 squids. The two that weren't crashing were running a newer version of squid, I upgraded them all to that.
August 22
- 22:12 brion: upped max post size to 75mb on squids; were problems posting large videos to commons (or something)
- 21:50 brion: renamed presswiki to internalwiki
August 21
- 22:53 brion: bugzilla up; removed ssl-ticket.wikimedia.org from pascal's apache conf.d dir
- 22:48 brion: bugzilla.wikimedia.org appears to be offline.
- 13:30 Tim: reduced lucene load on vincent to 1/4, maybe that will stop it from locking up (which it did again)
- 13:00 Tim: restarted lucene on vincent, it was closing connections as soon as they were established
- 06:27 brion: otrs now accessible again on ; now with redirect for the index page! For reference: Apache is in /usr/local/otrs
- 06:00 brion: trying to start otrs on ragweed. apache configuration appears to be borked.
August 20
- 10:00 jeluf: finished OTRS transition to ragweed. SpamAssassin setup finished.
- 09:53 Tim: Switched site to 1.6alpha
- 08:16 Tim: Applying schema update for 1.6alpha, basically an ALTER TABLE watchlist
- 01:00 Tim: ran update-special-pages
August 19
- 23:30 brion: changed postfix 'myhostname' setting from zwinger.wikimedia.org to mail.wikimedia.org, should prevent the mail loop errors reported sending to the full addr
- 23:00 brion: ran namespace conflict checks for updates on tawiki and gawiki
- 21:40 brion: updated rebuildInterwiki
August 18
- 23:30 jeluf: OTRS status: Installed apache/php/perl/postfix/mysql client on ragweed. Using pascal as DB server. Problems with sessions, sessions seem to be mixed up, sometimes I get logged in as presroi, sometimes as JeLuF :-/ Stopped apache for now. Postfix still accepting new tickets.
- 22:30 mark: Changed DNS CNAME ticket.wikimedia.org to point to ragweed
- 22:17 brion: disabled account creation throttle on press wiki; this is closed wiki and all accounts are created by an admin
- 10:00 midom: suda is back again, with enwiki and commonswiki databases
- 05:00 jeluf: copied OTRS tables to pascal, copied otrs binaries to pascal, configured pascal to serve https. Can access old tickets again. Currently can't send new tickets to otrs. DNS change needs to be done.
- 00:55 brion: recreated wikimediasr-l list on zwinger
August 17
- 19:27 brion: fixed bug in db.php that set all database load factors to NULL
August 16
- 20:15 jeluf: renamed project namespace on cswikibooks to Wikiknihy.
- 15:30 midom: resumed idle bacon's mysql replication, we might need to do external store migration soon, and bring back suda with smaller dataset.
August 15
- 21:46 kate: always_bcc on zwinger was set to "quagga" and its mbox was full, so it generated lots of bounce messages. i removed the setting.
- 12:30 mark: Mint seems to have at least a bad disk, possibly other problems. Sun will look at it. In the meantime, we can *try* to network boot it and recover data.
- 10:30 jeronim: had a look at mint via the IPMI - tried to power cycle it but it wouldn't switch off. Mark will tell the kennisnet guys about it. There's a dump of the OTRS DB from before the transfer to mint in albert:/root. If mailman is to be put back to zwinger, chapter-l and the new Serbian list will need to be re-created (and maybe some other lists?).
- 09:00 mark: Mint apparently is fucked, RAID and SP settings were reverted to factory defaults. Trying to do data recovery now. Possibly a power problem?
August 14
- 19:51 brion: mail config on zwinger broken or funky or otherwise annoying; just leaving it off for now. moved dns for mail back to mint (which is still dead) sighhhh
- 19:26 brion: moved mail.wikimedia.org back to zwinger due to extended outage on mint. With our limited support contract on knams we can't afford to have this critical service there.
- 14:30 midom: srv27,srv26,srv25 joined external storage service, waiting for payload
- 09:30 brion: mint is offline, no ping
- 00:20 brion: stopped bacon to run backup dump
- 01:00 jeluf: enabled spamassassin for OTRS on mint (~otrs/.procmailrc)
August 13
- sometime kate: moved otrs to mint
- 23:25 brion: added wikimediasr-l aliases to mailman on mint
- sometime someone: Apparently mail.wikimedia.org has been moved to mint.
- 10:42 jeronim: set ticket.wikimedia.org to CNAME mint.knams.wikimedia.org. (move of OTRS to mint is in progress)
- 00:58 Tim: started update-special-pages
- 00:19 Tim: it happened again so I disabled otrs's crontab. Original crontab is in /opt/otrs/crontab
August 12
- 23:18-23:30 Tim: An OTRS process on albert (PostMaster.pl) developed a runaway memory leak, causing heavy swapping. This slowed down albert sufficiently to cause the entire apache cluster to lock up with high load. Killed the process at 23:30 and the site soon returned to normal.
- 09:30 brion: took srv1 out of 'apaches' node group and shut off apache on it. DON'T RUN APACHE ON SRV1
August 11
- 21:26 Tim: TICK TICK TICK, that's the sound of 58 servers with their clocks ticking in synchrony, maximum offset 80ms.
- 20:30 Tim: Added the missing restrict line for 10.0.0.200 to ntp.conf on (almost) all machines
- 19:30 Tim: Synchronised ntp.conf on hypatia, humboldt, rose, anthony, rabanus, diderot and srv1 with /home/config/others/etc/ntp.conf.vlan2 . This made them remotely queryable, for easier debugging in the future, and also switched their preferred server from zwinger to the cisco (in broadcastclient mode).
- 18:35 Tim: Fixed tingxi's resolv.conf
- 17:45 mark: Fixed inconsistent favicons on apaches. Older apaches had symlinks to a common (wikipedia) favicon, which got overwritten with the new wikinews favicon by brion. Removed the symlinks, and put the correct favicons in place.
- 12:20 brion: set up pl.wikimedia.org and press.wikimedia.org (press is locked, and currently has no user accounts. a sysop/bureaucrat will need to be added for it to be used)
- 07:28 brion: updated wikinews.org favicon
August 9
- 23:20 mark: Rerouted Europe back to knams, because all sorts of weird problems were occurring. Fixed a typo (pmpta) in DNS. Some nameservers report TTL 0 for some of our DNS records - need to investigate that.
- 22:20 mark: Moved Squid service IP 207.142.131.246 from overloaded srv10 to srv5. Cleared the ARP entry on the l3 switch.
- 22:00 mark: Reroute everything from knams to pmtpa directly, because of routing problems
- 13:35 mark: changed biruni's hostname from biruni.wikimedia.org to biruni
- 13:30 mark: added avicenna and biruni to node_groups/apaches
- 13:00 mark: Restarted apaches on avicenna, alrazi and biruni with -DSLOW, and changed startup scripts
- 08:52 jeronim: blocked 61.48.105.65 spammer IP from all wikis using block-ip-all - so ipblocklist message will speak of "vandalism" instead of "spam"
- 08:25 jeronim: created chapter-l for mailman on mint
August 8
- 09:22 kate: enabled greylisting on mail.wm.org
- 20:54 hashar: readded srv2 (with ip x.x.0.1 ) to the apache pool
- 18:25 hashar: avicenna & biruni readded. Monitoring error log, #wikipedia and memory.
- 17:43 brion: added /mnt/upload mounts on avicenna and biruni
- 17:32 hashar: forgot sync-common on avicenna and biruni :/ I thought scap would do the job ... They're both missing the upload directory.
- 15:45 brion: stopped apache on avicenna and biruni pending more information on reported errors
- 15:36 hashar: TODO: biruni hostname seems wrong /etc/sysconfig/network list HOSTNAME=biruni.wikimedia.org whereas other servers just get HOSTNAME=zwinger or HOSTNAME=srv30 ...
- 15:36 hashar: removed srv1 from mediawiki-installation dsh file (as apache is not meant to run on).
- 15:24 hashar: brought biruni back into the mediawiki-installation pool
- 15:12 hashar: brought avicenna back into the mediawiki-installation pool
- 14:30 hashar: started apache on srv11.
- 06:30 kate: moved mailing lists to mint. let's see if it starts sucking less.
August 7
- 20:50 brion: postfix hung zombified on zwinger, wouldn't restart automatically. had to remove master.pid and restart.
- 16:25 brion: installed DynamicPageList on wikiquote per [6]
- 15:50 brion: locked tlhwiki
- 07:47 brion: added application/ogg as mime type for ogg files on albert
- 00:59 brion: set localized logo for ptwiktionary
August 3
- 14:15 mark: Switched over upload.wikimedia.org to lighttpd instead of apache on albert
- 12:00 brion: added frankfurt city map to wikimania whitelist. whoops!
August 2
- 15:45 mark: Bound albert's apache to a single IP, instead of INADDR_ANY
- 09:40 brion: added wildcard subdomains for wiktionary.com redirection
August 1
- 22:30 all: samuel's disk filled up. Switched master to adler. Re-syncing samuel from suda.
- 14:50 mark: Put all kennisnet squids back into DNS, updated DNS on pascal -
https://wikitech.wikimedia.org/wiki/Server_admin_log/Archive_5
Consuming Web Services with kSOAP.
Introduction
The kSOAP library is an elegant, lightweight SOAP client library for constrained Java environments such as Android.
It is often risky to integrate open source software (OSS) in a project as it can cause unforeseen problems down the road. Always look for comments and reviews of other developers who have used the library. If the project isn't under active development, it may be better to look for an alternative solution.
The objective of this tutorial is to make you familiar with the kSOAP library. For demonstration purposes, we'll be using a simple web service from W3Schools. The web service is a Fahrenheit to Celsius converter. The web service accepts a value in degrees Fahrenheit and responds with the equivalent value in degrees Celsius. We'll go through the example step by step. At the end of this tutorial, we'll display the result on the user's device.
1. Getting Started
Step 1: Project Setup
Start a new Android project and configure it as you please. Feel free to use your favorite IDE, but for this tutorial I'll be using IntelliJ IDEA.
In the project's manifest file, you need to specify that the application is allowed to connect to the internet. We're also going to specify the target SDK version as well as the minimum SDK version. Take a look at the manifest file shown below.
<uses-sdk android:minSdkVersion="8" android:targetSdkVersion="19" />
<uses-permission android:name="android.permission.INTERNET" />
Step 2: Downloading kSOAP
Always try and use the latest stable release of a library or upgrade the library you're using in an application after a major update has been released. It's good practice to keep a project's dependencies up to date for various reasons, such as security and bug fixes. In this tutorial, we'll be using version 3.1.1 of the kSOAP library, which you can find on the project's download page. After downloading the kSOAP library, add it to your project's
libs folder.
Step 3: Adding kSOAP to Your Project
In order to use the kSOAP library in your project, you'll need to add it to your project. I'll show you how to add it using IntelliJ IDEA. The steps might be slightly different if you're using a different IDE, but the idea is the same. In IntelliJ IDEA, choose Project Structure... from the File menu, open the Modules pane, click the plus button at the bottom of the right pane, and select library. Navigate to the
libs folder and select the kSOAP library. Take a look at the two images below for clarification.
The kSOAP library should now be visible as a dependency of your project. Click the check box next to the kSOAP library to add it to your project. Now that we've added the library to our project, it is time to put it to use. If you're using IntelliJ IDEA, your project should look similar to the one shown below.
2.W3Schools Web Service
Using the kSOAP library to consume a web service involves a number of steps. However, before we dive head first into using the kSOAP library, it is useful to tell you a bit more about the web service we'll be using.
Visit the website of the W3Schools web service that we'll use in this tutorial. You'll see that there are two operations,
CelsiusToFahrenheit and
FahrenheitToCelsius. The name of each operation is self-explanatory. The web service's URL is the base URL that we'll use to connect to the web service.
If you select an operation on the W3Schools website, you're shown an example of the request that the web service expects as well as the response of the web service. Take a look at the code snippet below, which is an example request that the web service expects. Pay close attention to the
SOAPAction in the code snippet. We will use it a bit later in this tutorial.
POST /webservices/tempconvert.asmx HTTP/1.1
Host: www.w3schools.com
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: ""

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FahrenheitToCelsius>
      <Fahrenheit>string</Fahrenheit>
    </FahrenheitToCelsius>
  </soap:Body>
</soap:Envelope>
The next code snippet shows an example response of the web service.
HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FahrenheitToCelsiusResponse>
      <FahrenheitToCelsiusResult>string</FahrenheitToCelsiusResult>
    </FahrenheitToCelsiusResponse>
  </soap:Body>
</soap:Envelope>
3. Using kSOAP
Step 1: Creating an Envelope
The first thing we need to do is to create a SOAP envelope using the
SoapSerializationEnvelope class (
org.ksoap2.serialization.SoapSerializationEnvelope), which you need to import from the kSOAP library. Take a look at the code snippet below, in which I have initialized an instance of the
SoapSerializationEnvelope class.
SoapSerializationEnvelope envelope = getSoapSerializationEnvelope(request);
The
getSoapSerializationEnvelope method isn't defined in the kSOAP library. It's a helper method that I've created to make working with the kSOAP library a bit easier. The method returns the SOAP envelope that we need for the rest of the example. Take a look at the implementation of
getSoapSerializationEnvelope below.
private final SoapSerializationEnvelope getSoapSerializationEnvelope(SoapObject request) {
    SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
    envelope.dotNet = true;
    envelope.implicitTypes = true;
    envelope.setAddAdornments(false);
    envelope.setOutputSoapObject(request);

    return envelope;
}
The
getSoapSerializationEnvelope method accepts a
SoapObject instance, which is the request. We'll see how to create the request in just a few minutes. When creating an instance of the
SoapSerializationEnvelope class, the
SoapEnvelope version is set by passing in
SoapEnvelope.VER11, which tells the kSOAP library that we'll be using SOAP 1.1. We set the envelope's
dotNet property to
true as the web service we'll be consuming runs on Microsoft's .NET framework.
envelope.dotNet = true;
Step 2: Configuring the Envelope
It is now time to configure the SOAP envelope by setting the request information. Start by importing the
SoapObject class (
org.ksoap2.serialization.SoapObject) and take a look at the code snippet below to see how to configure the envelope. We start by creating an instance of the
SoapObject class, which requires two parameters, a namespace and a method name. You can add additional properties to the request using the
addProperty method as shown below. In our example, I use
addProperty to add the value in degrees Fahrenheit to the request.
String methodname = "FahrenheitToCelsius";
SoapObject request = new SoapObject(NAMESPACE, methodname);
request.addProperty("Fahrenheit", fValue);
You may be wondering where
NAMESPACE is coming from. It's a private static string that is defined elsewhere in the application as you can see below.
private static final String NAMESPACE = "";
Step 3: Creating the Request
To send the request to the web service, we need to create an HTTP transport request. We'll use the
HttpTransportSE class (
org.ksoap2.transport.HttpTransportSE) for this. Take a look at the example below.
HttpTransportSE ht = getHttpTransportSE();
As you may have guessed,
getHttpTransportSE is another helper method and allows you to quickly create an HTTP transport object. It makes it less tedious to create an HTTP transport object every time you make a web service call. Take a look at its implementation below. To create an
HttpTransportSE instance, we only need the base URL of the web service, which is another private static string as you can see below.
private final HttpTransportSE getHttpTransportSE() {
    HttpTransportSE ht = new HttpTransportSE(Proxy.NO_PROXY, MAIN_REQUEST_URL, 60000);
    ht.debug = true;
    ht.setXmlVersionTag("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
    return ht;
}
private static final String MAIN_REQUEST_URL = "";
In
getHttpTransportSE, we also configure the
HttpTransportSE object. By passing
Proxy.NO_PROXY as the first argument of the constructor, we specify that no proxy is used for the request. The third argument of the constructor sets the session timeout in milliseconds. To make debugging easier, we also set the object's
debug property to
true. Any issues that pop up are logged to LogCat.
Step 4: Sending the Request
It is time to send the SOAP request to the web service. We do this over HTTP using the transport and envelope objects we created earlier. The HTTP transport object has a
call method, which is used to add the SOAP action and envelope that we created earlier.
ht.call(SOAP_ACTION, envelope);
SOAP_ACTION is another private static string as you can see below.
private static final String SOAP_ACTION = "";
Step 5: Processing the Response
When the web service sends back a response, we'll need to process it and handle any errors that may have been thrown. We can then display the data to the user. Take a look at the code snippet below in which we extract the response from the response envelope using the
getResponse method.
SoapPrimitive resultsString = (SoapPrimitive)envelope.getResponse();
I'm using a
SoapPrimitive type, but you can also use a
SoapObject instance if the response from the web service is XML. You can then use the
SoapObject instance to get the response values and store them in an array. Call
toString on the
SoapPrimitive object to convert the response to a simple string to use it in your application.
Take a look at the next code snippet in which I've implemented a method
getCelsiusConversion. The method accepts a string variable as its only argument. The variable is added as a property to the SOAP request as we saw earlier in this tutorial. The variable the method consumes is the value in degrees Fahrenheit. This value is sent to and processed by the web service and we get back a response in degrees Celsius.
public String getCelsiusConversion(String fValue) {
    String data = null;
    String methodname = "FahrenheitToCelsius";
    SoapObject request = new SoapObject(NAMESPACE, methodname);
    request.addProperty("Fahrenheit", fValue);

    SoapSerializationEnvelope envelope = getSoapSerializationEnvelope(request);
    HttpTransportSE ht = getHttpTransportSE();

    try {
        ht.call(SOAP_ACTION, envelope);
        testHttpResponse(ht);

        SoapPrimitive resultsString = (SoapPrimitive) envelope.getResponse();

        // Store the session cookie (SESSION_ID is a String field) so that
        // subsequent calls can reuse the same session.
        List<HeaderProperty> COOKIE_HEADER =
            (List<HeaderProperty>) ht.getServiceConnection().getResponseProperties();
        for (int i = 0; i < COOKIE_HEADER.size(); i++) {
            String key = COOKIE_HEADER.get(i).getKey();
            if (key != null && key.equalsIgnoreCase("set-cookie")) {
                SESSION_ID = COOKIE_HEADER.get(i).getValue().trim();
                break;
            }
        }

        data = resultsString.toString();
    } catch (SocketTimeoutException t) {
        t.printStackTrace();
    } catch (IOException i) {
        i.printStackTrace();
    } catch (Exception q) {
        q.printStackTrace();
    }

    return data;
}
I use two strings in
getCelsiusConversion,
data and
methodname. The
data variable will be returned by the method after the web service sent back a response, while
methodname stores the name of the operation of the web service that we'll target and is used in the
SoapObject instance.
You may have noticed the
for loop in
getCelsiusConversion, which isn't part of the steps we've discussed earlier. When working with more complex web services, it is important to keep track of the current session. In the snippet below, I store the session cookie (in a String field named SESSION_ID) and reuse it each time I make a call to the web service.

List<HeaderProperty> COOKIE_HEADER = (List<HeaderProperty>) ht.getServiceConnection().getResponseProperties();
for (int i = 0; i < COOKIE_HEADER.size(); i++) {
    String key = COOKIE_HEADER.get(i).getKey();
    if (key != null && key.equalsIgnoreCase("set-cookie")) {
        SESSION_ID = COOKIE_HEADER.get(i).getValue().trim();
        break;
    }
}
4. Creating the user interface
Now that the hard work is behind us, it is time to make use of what we just created. To conclude this tutorial, I'll show you how to create a simple user interface for converting a value in degrees Fahrenheit to a value in degrees Celsius and display the result on the user's device.
Step 1: Create the layout
First, we need to create an XML file in the project's layout folder. Take a look at the code snippet below. It is a simple illustration of a user interface created in XML.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <EditText
        android:id="@+id/value_to_convert"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:inputType="numberDecimal" />

    <Button
        android:id="@+id/convert"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Convert" />

    <TextView
        android:id="@+id/answer"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</LinearLayout>
We create three components, an
EditText instance, a
Button instance, and a
TextView instance. The
EditText instance is used to enter and capture the value that we intend to send to the web service. The button is used to run the thread that invokes
getCelsiusConversion and the text view displays the response we get back from the web service.
Step 2: Create the Activity
The next step is creating an
Activity instance to display the layout we just created. Take a look at the following code snippet. This shouldn't be too surprising if you've developed Android applications before.
package com.example.KsoapExample;

import android.app.Activity;
import android.os.Bundle;

public class MyActivity extends Activity {
    /**
     * Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}
Now that we've taken care of the user interface, we can tie everything together. Take a look at the next code snippet to see how this is done.
package com.example.KsoapExample;

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;

public class MyActivity extends Activity {

    private TextView txt;
    private String celsius;

    /**
     * Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        final EditText edt = (EditText) findViewById(R.id.value_to_convert);
        Button btn = (Button) findViewById(R.id.convert);
        txt = (TextView) findViewById(R.id.answer);

        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (edt.length() > 0) {
                    getCelsius(edt.getText().toString());
                } else {
                    txt.setText("Fahrenheit value can not be empty.");
                }
            }
        });
    }
}
In
onCreate, we set a listener on the button,
btn. We also double-check that a value is entered in the input field before sending it to the web service. In the button click listener, the value passed to
getCelsius is cast to a string as the web service expects a string value. The implementation of
getCelsius isn't difficult as you can see below.
private final void getCelsius(final String toConvert) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            SoapRequests ex = new SoapRequests();
            celsius = ex.getCelsiusConversion(toConvert);
            handler.sendEmptyMessage(0);
        }
    }).start();
}
In
getCelsius, a new thread is created, which runs and creates an instance of the class that implements
getCelsiusConversion. When we receive a response from the web service, we send a message to a handler to update the user interface by displaying the value in degrees Celsius to the user.
public Handler handler = new Handler(new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        switch (msg.what) {
            case 0:
                txt.setText(celsius);
                break;
        }
        return false;
    }
});
In the handler, we update the
TextView instance with the value we received from the web service. Take a look at the final result below.
Conclusion
You should now be able to add the kSOAP library to a project and leverage it to make requests to a web service that uses the SOAP protocol. Working with the kSOAP library will become easier with a little practice and I therefore encourage you to try the Celsius to Fahrenheit conversion web service. Try out the example Android application that's part of the tutorial for a bit of extra help.
http://code.tutsplus.com/tutorials/consuming-web-services-with-ksoap--mobile-21242
py-coinspot-api 0.2.2
A python library for the Coinspot API
Python Coinspot API Library
A python library for the Coinspot API.
Source:
PyPi:
Please see the CoinSpot website for documentation on the CoinSpot API.
NOTE: All requests and responses will be JSON
Installation
pip install py-coinspot-api --user
or
sudo pip install py-coinspot-api
Configuration
You have two options for configuration, using os environment variables or a yaml file
Option 1
Windows:
set COINSPOT_API_KEY=XXXXXX set COINSPOT_API_SECRET=XXXXXXXXXX
Linux:
export COINSPOT_API_KEY=XXXXXX export COINSPOT_API_SECRET=XXXXXXXXXX
Option 2
The config.yml.sample needs to be copied to config.yml and your unique api key and secret values need to be inserted. Extra options like debug and logging file name can only be configured using the yaml file.
api:
  key: 'PUT_YOUR_KEY_HERE'
  secret: 'PUT_YOUR_SECRET_HERE'
  endpoint: ''
debug: True
logfile: 'coinspot.log'
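The fallback between the two options can be sketched in a few lines of Python. The `load_config` helper below is hypothetical (it is not part of the library); it only illustrates the precedence: an explicit `config.yml`-style dict wins over the `COINSPOT_API_KEY`/`COINSPOT_API_SECRET` environment variables.

```python
import os


def load_config(yaml_config=None):
    """Return (key, secret): prefer a parsed config.yml dict, else env vars."""
    if yaml_config and 'api' in yaml_config:
        return yaml_config['api']['key'], yaml_config['api']['secret']
    # Fall back to the environment variables from Option 1.
    return (os.environ['COINSPOT_API_KEY'],
            os.environ['COINSPOT_API_SECRET'])
```

A client constructor could call such a helper once and raise a clear error when neither source is present.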
Class Documentation
TODO
- Extend test cases and requirements.
Example Usage
After you have your config.yml in place, test it out
from coinspot import CoinSpot

# initialise the library
client = CoinSpot()

# get the spot prices
print client.spot()

# get your coin wallet balances
print client.balances()

# get the last 1000 orders for Dogecoins
print client.orderhistory('DOGE')

# get a list of all the current buy and sell orders
print client.orders('DOGE')

# put an order in to sell 20 Dogecoins at 0.000280 per coin
print client.sell('DOGE', '20', '0.000280')

# Get a quote on buying a billion Dogecoins, with estimation of timeframe
print client.quotebuy('DOGE', 1000000000)

# Donate a craptonne of Dogecoins
# to the author of this library! Much Appreciate!!!
print client.send('DOGE', 'DJrHRxurwQoBUe7r9RsMkMrTxj92wXd5gs', 1000)
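Behind calls like `client.sell(...)`, APIs of this kind typically POST a JSON body signed with HMAC-SHA512 using the API secret. The sketch below shows only that signing step in isolation; the body fields and the idea of sending the signature in a header are assumptions for illustration, so check the CoinSpot API docs before relying on them.

```python
import hashlib
import hmac
import json


def sign_request(secret, payload):
    """Serialize `payload` and sign it with HMAC-SHA512 of the API secret.

    Returns (body, signature); the signature would travel in a request
    header next to the API key (header names assumed, not from the docs).
    """
    body = json.dumps(payload, sort_keys=True)
    signature = hmac.new(secret.encode('utf-8'),
                         body.encode('utf-8'),
                         hashlib.sha512).hexdigest()
    return body, signature


body, sig = sign_request('PUT_YOUR_SECRET_HERE',
                         {'cointype': 'DOGE', 'amount': 20, 'rate': 0.000280})
```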
Send Dogecoins of appreciation
If you like this software, you can always send cold hard cryptocoin my way
You can do this using the library like this:
# Donate a craptonne of Dogecoins to the author of this library!
# Much Appreciate!!!
print client.send('DOGE', 'DJrHRxurwQoBUe7r9RsMkMrTxj92wXd5gs', 10000)
or send Bitcoins:
# Donate a craptonne of Bitcoins to the author of this library!
print client.send('BTC', '1LybpYphZJqSAxjNFqjfYHB8pWxKcBmFkf', ...)
Change Log
- 0.1.1 Initial Release
- 0.2.0 Logging Support, Initial Test Cases, Exception Handling, Travis Support, Configuration File
- Author: Peter Dyson
- Keywords: coinspot api development bitcoin dogecoin litecoin cryptocurrency
- License: GPLv3
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- License :: OSI Approved :: GNU General Public License v3 (GPLv3)
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: geekpete
- DOAP record: py-coinspot-api-0.2.2.xml
https://pypi.python.org/pypi/py-coinspot-api/
Page info sometimes resizes an image incorrectly when it uses scaling
VERIFIED FIXED in Firefox 3.7a1
Status
()
People
(Reporter: mozilla.bugs, Assigned: mozilla.bugs)
Tracking
(Depends on: 1 bug)
Firefox Tracking Flags
(status1.9.2 beta2-fixed)
Attachments
(1 attachment, 3 obsolete attachments)
1. Go to and open the Media Tab in Page Info. 2. Select "" Results: The image has its natural width and its scaled height. I looked through the code on the makePreview(row) function and experimented with it, but all of the values it has are correct, and I'm not sure what function controls the height and width after the image is passed to imageContainer.appendChild(newImage); I'm glad to work on this, but I don't know where to go from here.
Images in XUL tend to do this if you're not careful, because the image will stretch to fill it's box -- and if the box gets sizes/flex differently than the images aspect ration it's stretched. I'd poke around with DOM inspector to see what the actual sizes of things in the windows end up as. I'd randomly guess there's some unexpected interaction with a min-size and flex.
Perhaps instead of using |var newImage = new Image();| you could try using createElementNS() to force the image into the xhtml namespace.
I tried createElementNS() and various elements and it didn't change anything, but I think moving to using that would be good. I have found that if the following code block is disabled and the lower code block used instead: 933 newImage.width = ("width" in item && item.width) || newImage.naturalWidth; 934 newImage.height = ("height" in item && item.height) || newImage.naturalHeight; 939 newImage.width = newImage.naturalWidth; 940 newImage.height = newImage.naturalHeight; the problem goes away, but that prevents the browser from detecting if an image has been scaled. Also, if I create a text block like this, it doesn't fix the problem, which I would think would, because it would force both attributes to be present before using the scaled image: if (!isBG) { if (("width" in item && item.width) && ("height" in item && item.height)) { newImage.width = ("width" in item && item.width); newImage.height = ("height" in item && item.height); } else { newImage.width = newImage.naturalWidth; newImage.height = newImage.naturalHeight; } } else { newImage.width = newImage.naturalWidth; newImage.height = newImage.naturalHeight; }
Using the DOM inspector, the width value that is passed to "imageContainer.appendChild(newImage);" is the correct value, but for some reason when it is loaded all values for width (width, clientWidth, etc.) list the naturalWidth. Therefore, the problem is in whatever events the appendChild function calls. Where would that code be?
I'll go ahead and assign this to me.
Assignee: nobody → mozilla.bugs
Status: NEW → ASSIGNED
Created attachment 406852 [details] [diff] [review] Fix that uses HTML properties and XUL properties I have filed Bug 522850, which is for the XUL:image vs. HTML:img problem. Here is a potential workaround for the problem which uses the HTML elements to find the resize values (which XUL lacks) which providing a XUL image with the correct implementation of the width property. If Bug 522850 is fixed, it will probably fix this bug, so I don't know if we want this patch or not. Please provide any suggestions.
I'm curious, having poked at this too, why this is happening. Seems like it's either a bug with sticking an xhtml:img into XUL, or yet another detail of how XUL does sizing that I don't grok.
Since this is a minor glitch, I'd rather not add a front-end workaround, unless it's clear that the core bug can't be fixed in the foreseeable future. Is this a regression, by the way?
(In reply to comment #8) > Since this is a minor glitch, I'd rather not add a front-end workaround, unless > it's clear that the core bug can't be fixed in the foreseeable future. I understand. Though nobody has respond to my other bug, which surprises me; I'll see what I can find out. > Is this a regression, by the way? Not as far as I am aware: users in #developers say it also appears in 3.0 and 3.5.
Is there a reason not to just use a xul image element instead? The bug is likely some block to box issue which is probably hard to fix.
Does setting <html:img work?
(In reply to comment #10) > Is there a reason not to just use a xul image element instead? The bug is > likely some block to box issue which is probably hard to fix. Yes, page scaling breaks because XUL has no property that I know of that determines what an image's natural dimensions are, so the patch gets those elements from the image and then changes the image type to a XUL image. > Does setting <html:img work? Yes it works for testcases in Bug 522850, but I can't get it to apply to this context since it is a script; whenever I do what should apply it using js, it does not affect it. It is possible that I am using the wrong syntax for it though.
I also have tried applying min-width in something like the page-info stylesheet, but that has no affect either, unless I am again using the wrong syntax.
(In reply to comment #8) > Since this is a minor glitch, I'd rather not add a front-end workaround, unless > it's clear that the core bug can't be fixed in the foreseeable future. It is a workaround, using a xul:image seems like a perfectly natural solution anyway.
(In reply to comment #13) > I also have tried applying min-width in something like the page-info > stylesheet, but that has no affect either, unless I am again using the wrong > syntax. If you publish your code, we can help you figure that out.
Created attachment 407471 [details] [diff] [review] Alternate patch using an html:img I was able to get it to work using js/css. I think the question now is, do we want to use a xul:image or an html:image, and then we select the patch accordingly.
You can access #thepreviewimage from <>, can't you?
Created attachment 407473 [details] [diff] [review] pageInfo.css solution that uses html:img (In reply to comment #17) > You can access #thepreviewimage from > <>, > can't you? That would be smart ;). It works, and I prefer it to doing it image-by-image like in the last patch.
Attachment #407471 - Attachment is obsolete: true
Attachment #407473 - Flags: review?(dao)
Comment on attachment 407473 [details] [diff] [review] pageInfo.css solution that uses html:img please add a comment with a reference to bug 522850
Attachment #407473 - Flags: review?(dao) → review+
Attachment #406852 - Attachment is obsolete: true
Created attachment 407568 [details] [diff] [review] Final patch Made those changes, now the patch needs to be checked in. Dao: Thank you!
Attachment #407473 - Attachment is obsolete: true
Keywords: checkin-needed
Status: ASSIGNED → RESOLVED
Last Resolved: 9 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 3.7a1
Comment on attachment 407568 [details] [diff] [review] Final patch a192=beltzner
Attachment #407568 - Flags: approval1.9.2? → approval1.9.2+
Checkin needed on 1.9.2
Keywords: checkin-needed
status1.9.2: --- → final-fixed
Keywords: checkin-needed
Verified Fixed. Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.3a1pre) Gecko/20091031 Minefield/3.7a1pre (.NET CLR 3.5.30729) ID:20091031042521
Status: RESOLVED → VERIFIED
Comment on attachment 407568 [details] [diff] [review] Final patch a192=beltzner, thanks for filing the followup
Keywords: checkin-needed
Keywords: checkin-needed
https://bugzilla.mozilla.org/show_bug.cgi?id=519390
=== modified file 'src/fileio.c'
--- src/fileio.c	2011-12-05 08:55:25 +0000
+++ src/fileio.c	2011-12-15 02:17:01 +0000
@@ -2416,15 +2416,27 @@
   return (st.st_mode & S_IWRITE || S_ISDIR (st.st_mode));
 #else /* not MSDOS */
 #ifdef HAVE_EUIDACCESS
-  return (euidaccess (filename, 2) >= 0);
-#else
+  int res = (euidaccess (filename, 2) >= 0);
+#ifdef CYGWIN
+  /* euidaccess may have returned failure because Cygwin couldn't
+     determine the file's UID and GID; if so, we return success.  */
+  if (!res)
+    {
+      struct stat st;
+      if (stat (filename, &st) < 0)
+        return 0;
+      res = (st.st_uid == -1 && st.st_gid == -1);
+    }
+#endif /* CYGWIN */
+  return res;
+#else /* not HAVE_EUIDACCESS */
   /* ... */
   return (access (filename, 2) >= 0);
-#endif
+#endif /* not HAVE_EUIDACCESS */
 #endif /* not MSDOS */
 }

Jari, can you test it and see if it solves your problem?

Ken
http://lists.gnu.org/archive/html/bug-gnu-emacs/2011-12/msg00446.html
Events
AdonisJS ships with an event emitter module built on top of emittery. It differs from the Node.js native Events module in the following ways.
- It is asynchronous in nature, whereas the Node.js events module emits events synchronously. Make sure to also read the emittery explanation on this as well.
- Ability to make events type safe.
- Ability to trap events during tests, instead of triggering the real event.
Usage
We recommend you to define all the event listeners inside a dedicated file, just like the way you define routes in a single file.
For the purpose of this guide, lets define the event listeners inside the
start/events.ts file. You can create this file manually or run the following ace command.
node ace make:prldfile events

# SELECT ALL THE ENVIRONMENTS
Open the newly created file and write the following code inside it. The
Event.on method registers a listener that is invoked every time you emit the
new:user event from anywhere inside your application.
import Event from '@ioc:Adonis/Core/Event'

Event.on('new:user', (user) => {
  console.log(user)
})
Making events type safe
The events listeners and the code that emits the event are usually not in the same place/file. It is very easy for some part of your code to emit the event and send the wrong data to it. For example:
Event.on('new:user', (user) => {
  console.log(user.email)
})

// There is no email property defined here
Event.emit('new:user', { id: 1 })
You can prevent this behavior by defining the arguments type for a given event inside the
contracts/events.ts file.
declare module '@ioc:Adonis/Core/Event' {
  interface EventsList {
    'new:user': { id: number; email: string }
  }
}
Now, the TypeScript static compiler will make sure that all
Event.emit calls for the
new:user event are type safe.
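To see why the EventsList mapping catches the mistake at compile time, here is a minimal, self-contained sketch of a typed emitter. It is a toy stand-in for the real Adonis/emittery implementation, not its actual code; the event map ties each event name to its payload type, so both on and emit are checked by the compiler.

```typescript
type Events = {
  'new:user': { id: number; email: string }
}

class TypedEmitter<T extends Record<string, unknown>> {
  private handlers: { [K in keyof T]?: Array<(data: T[K]) => void> } = {}

  on<K extends keyof T>(event: K, handler: (data: T[K]) => void): void {
    const list = this.handlers[event] ?? []
    list.push(handler)
    this.handlers[event] = list
  }

  // Like emittery, emit is asynchronous: handlers run after the current tick
  async emit<K extends keyof T>(event: K, data: T[K]): Promise<void> {
    for (const handler of this.handlers[event] ?? []) {
      await Promise.resolve()
      handler(data)
    }
  }
}

const emitter = new TypedEmitter<Events>()
const received: string[] = []

emitter.on('new:user', (user) => {
  // `user` is inferred as { id: number; email: string }
  received.push(user.email)
})

// emitter.emit('new:user', { id: 1 }) // rejected at compile time: `email` missing
void emitter.emit('new:user', { id: 1, email: 'virk@adonisjs.com' })
```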
Listener classes
Just like controllers and middleware, you can also extract the inline event listeners to their own dedicated classes.
Conventionally event listeners are stored inside the
app/Listeners directory. However, you can customize the namespace inside the
.adonisrc.json file.
Customize event listeners namespace
{
  "namespaces": {
    "eventListeners": "App/CustomDir/Listeners"
  }
}
You can create a listener class by running the following ace command.
node ace make:listener User

# CREATE: app/Listeners/User.ts
Open the newly created file and define the following method on the class.
import { EventsList } from '@ioc:Adonis/Core/Event'

export default class User {
  public async onNewUser(user: EventsList['new:user']) {
    // send email to the new user
  }
}
Finally, you can bind the
onNewUser method as the event listener inside the
start/events.ts file. The binding process is similar to a Route controller binding and there is no need to define the complete namespace.
Event.on('new:user', 'User.onNewUser')
Trapping events
In order to make testing easier, the Event module allows you to trap a specific or all the events. The actual listener is not invoked, when there is a trap in place.
You can write the following code inside your test block, just before the action that triggers the event.
import Event from '@ioc:Adonis/Core/Event'

Event.trap('new:user', (user) => {
  assert.property(user, 'id')
  assert.property(user, 'email')
})
You can also trap all the events using the
trapAll method. The trap for a specific event gets preference over the catch all trap.
Event.trap('new:user', (data) => {
  // called for "new:user"
})

Event.trapAll((event, data) => {
  // only called for "send:email"
})

Event.emit('new:user', {})
Event.emit('send:email', {})
Once done with the test, you must restore the trap using the
Event.restore method. A better option will be to place this method inside the afterEach lifecycle hook of your testing framework.
afterEach(() => {
  // Restores the trap
  Event.restore()
})
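The trap behaviour can be pictured with a small stand-in (again a sketch, not the real module): a trap registered for an event shadows the real listener until restore is called.

```typescript
class TrappableEmitter {
  private listeners = new Map<string, (data: unknown) => void>()
  private traps = new Map<string, (data: unknown) => void>()

  on(event: string, handler: (data: unknown) => void): void {
    this.listeners.set(event, handler)
  }

  trap(event: string, handler: (data: unknown) => void): void {
    this.traps.set(event, handler)
  }

  restore(): void {
    this.traps.clear()
  }

  emit(event: string, data: unknown): void {
    const trap = this.traps.get(event)
    if (trap) {
      // The trap wins; the real listener is never invoked.
      trap(data)
      return
    }
    this.listeners.get(event)?.(data)
  }
}

const bus = new TrappableEmitter()
const calls: string[] = []

bus.on('new:user', () => calls.push('real'))
bus.trap('new:user', () => calls.push('trap'))

bus.emit('new:user', {}) // handled by the trap
bus.restore()
bus.emit('new:user', {}) // handled by the real listener
```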
Differences from the Node.js event emitter
As mentioned earlier, the Event module of AdonisJS is built on top of emittery and it is different from the Node.js event emitter in following ways.
- Emittery is asynchronous and does not block the event loop.
- It does not have the magic error event
- It does not place a limit on the number of listeners you can define for a specific event.
- It only allows you to pass a single argument during the
emit calls.
https://docs-adonisjs-com.pages.dev/guides/events
Once I needed to make some automation processing of Microsoft Word documents located in a SharePoint Portal Server. But I faced an interesting problem – how to find all the documents in SharePoint programmatically?
First, as long as I already used Word automation interfaces, I tried to use the file searching capabilities implemented in the Microsoft Office API, namely the FileSearch interface exposed by Office’s Application object. I assumed that whilst Microsoft Word is capable of opening documents directly from SharePoint, of course, it would be able to search through Word documents on the server…
FileSearch
Application
Naïve assumption - of course not. Things are never so simple that you may relax and have the fun of the “famous” KISS principle. It was able to work properly only on a local disk or, at most, on a network file share.
OK, I know that the SharePoint Server exposes its content in the form of web services. I tried that stuff too, but those entities that I had to work with seemed to me much more complicated than was required for such a simple task - a lot of lists and items of different types. Also, I was not sure about the compatibility between different versions of the SharePoint Server (like 2001, 2003, and the next 2006), and furthermore, I wanted a solution that might work with not only the SharePoint Portal Server, but with any Web Folder that you may browse in Windows Explorer (e.g., running under pure Windows Server 2003).
As you may know, it is possible to browse a web folder in Windows Explorer only if the IIS server supports either the FrontPage Web Extender Client (WEC) or the Web Distributed Authoring and Versioning (WebDAV) protocol extensions.
There are different ways of making WebDAV/WEC requests – through composing XML requests, or through writing a defined set of HTTP headers into the HTTP request. These would require deep scrutinizing of the standards.
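To give a feel for the XML-request route, here is a sketch that builds (but does not send) the body of a WebDAV PROPFIND request using only the Python standard library; the property names come from the WebDAV specification (RFC 4918), and the actual server interaction is omitted:

```python
import xml.etree.ElementTree as ET

# Sketch only: construct the XML body of a WebDAV PROPFIND request
# asking for the resource type and display name of each item.
DAV = "DAV:"
ET.register_namespace("D", DAV)

propfind = ET.Element(f"{{{DAV}}}propfind")
prop = ET.SubElement(propfind, f"{{{DAV}}}prop")
ET.SubElement(prop, f"{{{DAV}}}resourcetype")
ET.SubElement(prop, f"{{{DAV}}}displayname")

body = ET.tostring(propfind, encoding="unicode")
print(body)
```

Sending it would mean issuing this body with the PROPFIND verb and a Depth header over HTTP - exactly the kind of protocol plumbing the article avoids by delegating to the shell.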
I found out that MS Office products use the FPWEC.DLL COM library usually located at %PROGRAMFILES%\Common Files\Microsoft Shared\web server extensions\60\BIN. It contains a set of wrappers for these protocols, but I found no documentation on them, and their usage did not seem to be straightforward, mainly in the asynchronous manner.
Unfortunately, I didn’t find any free, working C# library that would allow me to easily use WebDAV.
One of the ways is to use the OLE DB Provider for Internet Publishing from old ADO. However, on MSDN, there is this statement that “the .NET Framework Data Provider for OLE DB does not support OLE DB version 2.5 interfaces. OLE DB Providers that require support for OLE DB 2.5 interfaces will not function properly with the .NET Framework Data Provider for OLE DB. This includes the Microsoft OLE DB Provider for Exchange and the Microsoft OLE DB Provider for Internet Publishing”. I didn’t like to use COM interop over ADO 2.5 interfaces from my C# code, and I skipped this solution. However, you may try it – it may be quite feasible in some cases.
Suddenly, I remembered that sometimes when you try browsing a web folder, a shortcut to the web folder is added to the My Network Places folder available in the Windows Explorer.
As you may know, these shortcuts are physically placed into the c:\Documents and Settings\{USER NAME}\NetHood hidden folder. A web folder shortcut is a physical directory with a read-only file attribute (the directory name is equal to the name of the shortcut) that contains two specific files: a Desktop.ini and a target.lnk.
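For illustration, the folder layout can be mimicked on any platform. The sketch below (Python, hypothetical helper name, placeholder contents instead of the real target.lnk byte stream built later in the C# code) shows only the structure, not a working shortcut:

```python
import os
import tempfile

# Structure-only sketch: a web folder shortcut is just a directory
# holding Desktop.ini and target.lnk.
def make_shortcut_skeleton(container, shortcut_name):
    folder = os.path.join(container, shortcut_name)
    os.makedirs(folder, exist_ok=True)
    # Desktop.ini points the shell at the folder-shortcut handler;
    # the real file names a shell CLSID here.
    with open(os.path.join(folder, "Desktop.ini"), "w") as f:
        f.write("[.ShellClassInfo]\n")
    # target.lnk carries the byte stream encoding the site URL
    # (built byte-by-byte in the article's C# code).
    with open(os.path.join(folder, "target.lnk"), "wb") as f:
        f.write(b"\x00")  # placeholder, not real link bytes
    return folder

with tempfile.TemporaryDirectory() as tmp:
    names = sorted(os.listdir(make_shortcut_skeleton(tmp, "MySite")))
print(names)  # ['Desktop.ini', 'target.lnk']
```

On Windows, the real shortcut folder additionally carries the read-only attribute, which is part of what makes Explorer treat it specially.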
So I tried to read the content of the web folder shortcut just as Windows Explorer is expected to do; I used the “Shell Objects for Scripting” (VBScript sample):
dim oShell
dim oFolder
dim sDir
Dim s
set oShell = CreateObject("Shell.Application")
set oFolder = oShell.NameSpace("c:\Documents and" & _
" Settings\Administrator" & _
"\NetHood\The_ShortCut_Name")
for i=0 to oFolder.Items.Count-1
s = s + oFolder.Items.Item(i).Path+ _
chr(10)+chr(13)
MsgBox s
… and it has worked out. Inside, the appropriate shell extension (pre-installed with Windows) provides the necessary Folder and FolderItem objects, doing all the hard work of interacting with the web server through WebDAV/WEC for us – we will just utilize their results. Note, that passing a pure HTTP URL to the Shell.Application object won't work.
Recently, I came across script code that allows creating web folder shortcuts (Code Comments) - many thanks to the author. I converted the code into C#:
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using System.Collections.Specialized;
using System.Text.RegularExpressions;
namespace Utils
{
public static class PathRoutines
{
//44 seems to be the length where we have
//to change a byte from 00 to a 01.
private const int URL_CUTOFF = 44;
//This is where we construct the target.lnk
//file byte by byte. Most of the lines
//are shown in 16 byte chunks,
//mostly because that is the way I saw it
//in the Debug utility I was using to inspect shortcut files.
private static readonly byte[] LINK_CONTENT_PREFIX = new byte[]
{//Line 1, 16 bytes
0x4C, 0x00, 0x00, 0x00, 0x01, 0x14, 0x02,
0x00, 0x00, 0x00, 0x00, 0x00, 0xC0,
0x00, 0x00, 0x00,
//Line 2, 16 bytes
0x00, 0x00, 0x00, 0x46, 0x81, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00,
//Line 3, 16 bytes
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00,
//Line 4., 16 bytes. 13th byte is significant.
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00,
//Line 5. 13th byte is significant.
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00,
//When I was analyzing the next byte of shortcuts
//I created, I found that it is set to various values,
//and I have no idea what they are referring to.
//In desperation I tried substituting some values.
//00 caused a crash of Explorer. FF seems to work fine for all.
0xFF};
private static readonly byte[] LINK_CONTENT_MID = new byte[]
{
0x14, 0x00,
//Line 6, 16 bytes
0x1F, 0x50, 0xE0, 0x4F, 0xD0, 0x20, 0xEA, 0x3A,
0x69, 0x10, 0xA2, 0xD8, 0x08, 0x00, 0x2B, 0x30,
//Line 7, 16 bytes
0x30, 0x9D, 0x14, 0x00, 0x2E, 0x00, 0x00, 0xDF,
0xEA, 0xBD, 0x65, 0xC2, 0xD0, 0x11, 0xBC, 0xED,
//Line 8, 16 bytes
0x00, 0xA0, 0xC9, 0x0A, 0xB5, 0x0F, 0xA4
};
private static readonly
byte[] LINK_CONTENT_MID2 = new byte[]
{
0x4C, 0x50, 0x00, 0x01, 0x42, 0x57, 0x00, 0x00,
//Line 9, 16 bytes
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00,
//Line 10, 2 bytes
0x00, 0x00
};
private static readonly
byte[] LINK_CONTENT_POSTFIX = new byte[]
{
//Last line, 13 bytes
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};
/// <summary>
/// Returns path to a directory with name
/// 'shortcutName' that is mapped
/// to the specified web siteUrl.
/// If the directory already exists - does nothing
/// </summary>
/// <param name="siteUrl">The target url</param>
/// <param name="shortcutContainerPath">Folder where
/// the shortcut folder mapped to
/// the specified web siteUrl will be created</param>
/// <param name="shortcutName">Name of the
/// shortcut folder</param>
public static string CreateWebFolderLink(string siteUrl,
    string shortcutContainerPath, string shortcutName)
{
    if (siteUrl.Length >
By the way, you may use the CreateWebFolderLink method to create the necessary web folder shortcuts to be visible in the “My Network Places” folder; for that, just pass the “c:\Documents and Settings\{USER NAME}\NetHood” string as the shortcutContainerPath parameter.
CreateWebFolderLink
shortcutContainerPath
So now, when one wants to get a list of files in a web folder, the algorithm is simple: create a web folder shortcut mapped to the site URL, then enumerate its contents through the Shell32 objects.
/// <summary>
/// Returns path to a shortcut folder that is mapped
/// to the specified web siteUrl. Name of the folder is derived
/// automatically.
/// If the shortcut folder already exists - does nothing
/// </summary>
/// <param name="siteUrl"></param>
/// <param name="shortcutContainerPath"></param>
/// <returns></returns>
public static string CreateWebFolderLink(Uri siteUrl)
{
//derive shortcut name
string sitePath = siteUrl.ToString();
string shortcutName =
sitePath.GetHashCode().ToString();
return CreateWebFolderLink(sitePath,
Path.GetTempPath(), shortcutName);
}
For the following code to compile, you need to add a COM reference to the Shell32 library (its caption is “Microsoft Shell Controls and Automation”) to your project.
/// <summary>
/// Uses Shell32
/// </summary>
/// <param name="siteUrl"></param>
/// <param name="filePathPattern">For example,
/// to make a wildcard search "*.doc" use the following
/// reg expression: Regex regex = new Regex(@".*\.doc",
/// RegexOptions.Compiled | RegexOptions.IgnoreCase);
/// to exclude word template documents assigned
/// to a SharePoint workspace, use the following
/// reg expression: new Regex(@"(?<!.*/Forms/[^/]+)\.doc$",
/// RegexOptions.Compiled | RegexOptions.IgnoreCase)</param>
/// <param name="searchSubFolders"></param>
/// <returns></returns>
public static StringCollection
SearchFilesInWebFolder(Uri siteUrl,
Regex filePathPattern, bool searchSubFolders)
{
string mapFolderPath = CreateWebFolderLink(siteUrl);
StringCollection ret = new StringCollection();
Shell32.ShellClass shell = new Shell32.ShellClass();
Shell32.Folder mapFolder = shell.NameSpace(mapFolderPath);
SearchInFolder(mapFolder, filePathPattern, searchSubFolders, ret);
return ret;
}
private static void SearchInFolder(Shell32.Folder folder,
Regex filePathPattern, bool searchSubFolders,
StringCollection resultList)
{
foreach (Shell32.FolderItem item in folder.Items())
{
if (item.IsLink)
continue;
if (item.IsFolder && searchSubFolders)
{
SearchInFolder((Shell32.Folder) item.GetFolder,
filePathPattern, searchSubFolders, resultList);
continue;
}
if (filePathPattern.IsMatch(item.Path))
{
resultList.Add(item.Path);
}
}
}
The suggested solution seems to be quite simple and feasible for Windows or console applications. However, for server applications, use it with caution – you may likely face some problems with security (I am a bit doubtful whether an “ASP.NET” user would be able to use the Shell32 COM Interop) and multithreading. For such circumstances, I think the best approach would still be to use Web Services or WebDAV/WEC protocols directly.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
Dim fld
Set fso = CreateObject("Scripting.FileSystemObject")
Set fld = fso.GetFolder("\\{SharePointServerName}\sites\{ProjectName}")
For Each f In fld.Subfolders
MsgBox f.Name
Source: http://www.codeproject.com/Articles/13261/Reading-contents-of-Web-Folders-in-C?fid=274314&df=90&mpp=25&noise=5&prof=True&sort=Position&view=None&spc=None&fr=26
In this article, we will look at how the CLR's Just In Time (JIT) compiler works and how to replace a method's compiled code at runtime. We will also create a debugging utility that will intercept JIT calls and print diagnostics information to the console.
Microsoft Intermediate Language, or MSIL (properly known as Common Intermediate Language (CIL)), is a low-level, assembly-like language. All .NET languages
are compiled into MSIL (with some exceptions for C++/CLI). Processors cannot run MSIL directly (perhaps an ARM Jazelle-like
technology for .NET will appear in the future). The Just In Time compiler is used to turn MSIL into machine code.
A method will only be compiled once by the JIT, the CLR will cache the machine code the JIT outputs for future calls.
The compilation process needs to be extremely fast since it happens at runtime. Because MSIL is a low level language, its OP codes translate very
easily to machine specific OP codes. The compilation process itself uses something called a JITStub. A JITStub is a chunk of machine code,
each method will have one. The JIT stub initially contains code that will invoke the JIT for the method. After the method is JIT'ed, the stub
is replaced with code that calls the machine code the JIT created directly.
000EFA60 E86488D979 call 79E882C9 // Before JIT. Call to JIT method
000EFA60 E97BC1CB00 jmp 00DABBE0 // After JIT. Jmp to JIT'ed assembly code
Every class has a method table. A method table has the address of all the JITStubs for a class' methods. The method table is used
at JIT time to resolve method JIT stubs. A method that has already been JIT'ed will not reference this table; the machine code created
by the JIT will call the stub address directly. Below is the method table output from SOS. The entry column contains the stub address.()
000efa60 00d77ce0 JIT ReplaceExample.StaticClassB.A()
000efa70 00d77ce8 NONE ReplaceExample.StaticClassB.B()
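The stub-patching mechanism can be modelled in a few lines of any language. Below is a toy Python analogue (not CLR code; all names invented) in which a method-table slot starts out pointing at a compile-on-first-call stub that overwrites its own entry:

```python
# Toy model of JIT stubs: each "method table" slot initially holds a stub
# that compiles the method on first call and then patches its own slot,
# so later calls skip the compiler entirely.
compile_count = {"n": 0}

def make_stub(method_table, name, il_body):
    def stub(*args):
        compile_count["n"] += 1          # the "JIT" runs once per method
        compiled = il_body               # stand-in for emitted machine code
        method_table[name] = compiled    # patch the slot: stub -> code
        return compiled(*args)
    return stub

method_table = {}
method_table["A"] = make_stub(method_table, "A", lambda x: x + 1)

r1 = method_table["A"](1)   # first call goes through the stub (compiles)
r2 = method_table["A"](2)   # second call hits the patched slot directly
print(r1, r2, compile_count["n"])   # 2 3 1
```

Replacing a method then amounts to writing a different address into the slot, which is exactly what the C# code later does with the real method table.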
It is possible to invoke the JIT from managed code without actually invoking a method. The System.Runtime.CompilerServices.RuntimeHelpers.PrepareMethod method
will force the JIT to compile the method.
Debugging the JIT process can be quite difficult. Luckily, there are a number of great tools we can use. We will briefly go over them.
SOS stands for Son of Strike. It is a debugging extension for the CLR that you can use in Visual Studio or WinDbg. It is a really neat tool that
comes with the .NET framework. I have used it several times to debug .NET exceptions on machines that don't have Visual Studio installed. We can also use
it to locate and view structures in memory used by the CLR, view assembly and IL, and many other things. It has been my primary debugging tool
for writing this. SOS only works when unmanaged debugging is enabled.
Rotor is an Open Source (released under Microsoft Shared Source license) CLR released by Microsoft. Rotor is not the same as the CLR that Microsoft ships,
but it is a complete CLR. The amount of code is enormous, and it can be quite difficult to find what you are looking for. We are using some
of the headers from Rotor for our JIT Logger.
The JITLogger is a tool that logs JIT calls to the console. It can be enabled and disabled. I used Daniel Pistelli's code from
his .NET Internals and Code Injection article to create the JITLogger. Below are the JITLogger
signatures and some sample output. For more information, please look at the attached code or read Daniel's excellent
article.
public class JitLogger
{
public static bool Enabled { get; set; }
public static int JitCompileCount { get; }
}
Output:
JIT : 0xdd20a8 Program.StaticTests
JIT : 0xdd217b Program.TestStaticReplaceJited
JIT : 0x749c205c MethodUtil.ReplaceMethod
JIT : 0x749c2270 MethodUtil.MethodSignaturesEqual
JIT : 0x749c231c MethodUtil.GetMethodReturnType
JIT : 0x749c20e4 MethodUtil.GetMethodAddress
JIT : 0xdd23e6 StaticClassB.A
I want the replace code to be very basic. We will take two methods, a source and destination, and replace the destination with the source. Below is the signature
of our replacement method:
public static void ReplaceMethod(MethodBase source, MethodBase dest)
I originally wanted to replace the IL, but I ran into some issues. The CLR seems to behave differently depending
on the build mode and on whether a debugger is attached. Also, the JIT calls are cached (stubs replaced), so we would need to get the CLR to somehow invalidate
the JIT cache. I was able to get this working, but only in debug mode, and I had to persist some state for each method to be able to get the CLR to re-JIT it.
I also tried using Daniel Pistelli's hooking method, but ran into some issues. I wanted to programmatically replace methods from managed code.
I thought I could hook the JIT, use RuntimeHelpers.PrepareMethod to cause the method to get JITed, and then modify the CORINFO_METHOD_INFO structure that
gets passed into our hooked method. Passing state between managed and unmanaged hooked methods was an issue. If we call any managed method
from the hooked method, we get a stack overflow. Also, invalidating the JIT cache is a pain, and again, I could only get it working in debug mode.
Another approach I attempted was using the unmanaged metadata APIs. I would read the RVA from the metadata method table,
use the RVA and module base address to find the IL address in memory, and just write over it. This was problematic because the length of the source
IL in bytes must be less than the destination's. There is also more than just the IL: we have the tiny or fat IL header and possibly SEH structures, etc.
And after the method is invoked, once the JITStub is replaced, we run into the same problem as the other method.
After running into a wall with the different IL approaches, I decided to try a different method. Instead of replacing the IL, we will replace
the assembly code that JIT outputs. With this approach, we don't have to worry about invaliding the cache, IL headers, SEH, etc.
Our new approach will ensure both the source and destination methods are compiled, locate the method table in memory for both methods,
and replace the destination's JITStub address with that of the source. We can replace a method as many times as we like, and do not
have to worry about the method being cached. We need to locate a couple things in memory first.
We are going to use the RuntimeTypeHandle and RuntimeMethodHandle to locate the method table and a method slot in memory.
The RuntimeMethodHandle points to an 8 byte structure in memory called a MethodDescription. This is the same address we see in the MethodDesc column using
the SOS !DumpMT -MD command. This structure contains the index of the method in the method table. We can then use RuntimeTypeHandle
to locate the method table itself. The method table starts 40 bytes after the RuntimeTypeHandle address.
Dynamic methods work differently. I could not really find any documentation, but I was able to find the JITStub address using
the memory debugger. A dynamic method does not expose its RuntimeMethodHandle so we need to use Reflection to get it. I found the JITStub address
24 bytes after the address of the runtime method handle.
public static IntPtr GetMethodAddress(MethodBase method)
{
if ((method is DynamicMethod))
{
unsafe
{
byte* ptr = (byte*)GetDynamicMethodRuntimeHandle(method).ToPointer();
if (IntPtr.Size == 8)
{
ulong* address = (ulong*)ptr;
address += 6;
return new IntPtr(address);
}
else
{
uint* address = (uint*)ptr;
address += 6;
return new IntPtr(address);
}
}
}
RuntimeHelpers.PrepareMethod(method.MethodHandle);
unsafe
{
// Skip some dwords at the start of the method table.
int skip = 10;
// Read the method index.
UInt64* location = (UInt64*)(method.MethodHandle.Value.ToPointer());
int index = (int)(((*location) >> 32) & 0xFF);
if (IntPtr.Size == 8)
{
// Get the method table
ulong* classStart = (ulong*)method.DeclaringType.TypeHandle.Value.ToPointer();
ulong* address = classStart + index + skip;
return new IntPtr(address);
}
else
{
// Get the method table
uint* classStart = (uint*)method.DeclaringType.TypeHandle.Value.ToPointer();
uint* address = classStart + index + skip;
return new IntPtr(address);
}
}
}
private static IntPtr GetDynamicMethodRuntimeHandle(MethodBase method)
{
if (method is DynamicMethod)
{
FieldInfo fieldInfo = typeof(DynamicMethod).GetField("m_method",
BindingFlags.NonPublic|BindingFlags.Instance);
return ((RuntimeMethodHandle)fieldInfo.GetValue(method)).Value;
}
return method.MethodHandle.Value;
}
After we get the location of the JITStub addresses, we simply need to change the value. Shown below is our replace method:
public static void ReplaceMethod(IntPtr srcAdr, MethodBase dest)
{
IntPtr destAdr = GetMethodAddress(dest);
unsafe
{
if (IntPtr.Size == 8)
{
ulong* d = (ulong*)destAdr.ToPointer();
*d = *((ulong*)srcAdr.ToPointer());
}
else
{
uint* d = (uint*)destAdr.ToPointer();
*d = *((uint*)srcAdr.ToPointer());
}
}
}
public static void ReplaceMethod(MethodBase source, MethodBase dest)
{
if (!MethodSignaturesEqual(source, dest))
{
throw new ArgumentException("The method signatures are not the same.",
"source");
}
ReplaceMethod(GetMethodAddress(source), dest);
}
In our example code, we are going to try several things. We are going to replace a static method from one class with a static method from another class. We will do the same
thing with an instance method. We will also replace a static method with a DynamicMethod. Some of our test methods were being inlined
in Release mode. I had to add MethodImpl attributes to several of the methods to prevent inlining.
If we step through the code, we will notice that Visual Studio gets tricked too. After the method is replaced, Visual Studio will step into the new method instead of the old method.
Below is the output from our tests:
Enabling JIT debugging.
JIT : 0x10720a8 Program.StaticTests
JIT : 0x107217b Program.TestStaticReplaceJited
Replacing StaticClassA.A() with StaticClassB.A()
JIT : 0x71ac205c MethodUtil.ReplaceMethod
JIT : 0x71ac2270 MethodUtil.MethodSignaturesEqual
JIT : 0x71ac231c MethodUtil.GetMethodReturnType
JIT : 0x71ac20e4 MethodUtil.GetMethodAddress
JIT : 0x10723e6 StaticClassB.A
JIT : 0x71ac2094 MethodUtil.ReplaceMethod
JIT : 0x1072426 StaticClassA.A
Call StaticClassA.A() from a method that has already been jited
StaticClassA.A
Call StaticClassA.A() from a method that has not been jited
JIT : 0x1072172 Program.TestStaticReplace
StaticClassB.A
JIT : 0x1072190 Program.InstanceTests
JIT : 0x1072284 Program.TestInstanceReplaceJited
Replacing InstanceClassA.A() with InstanceClassB.A()
JIT : 0x10723c2 InstanceClassB.A
JIT : 0x1072402 InstanceClassA.A
Call InstanceClassA.A() from a method that has already been jited
JIT : 0x107241e InstanceClassA..ctor
InstanceClassA.A
Call InstanceClassA.A() from a method that has not been jited
JIT : 0x1072268 Program.TestInstanceReplace
InstanceClassB.A
JIT : 0x10722a0 Program.DynamicTests
JIT : 0x1072344 Program.CreateTestMethod
Created new dynamic method StaticClassA.C
JIT : 0x107232e Program.TestDynamicReplaceJited
Replacing StaticClassA.B() with dynamic StaticClassA.C()
JIT : 0x71ac2210 MethodUtil.GetDynamicMethodRuntimeHandle
JIT : 0x1072434 StaticClassA.B
Call StaticClassA.B() from a method that has already been jited
StaticClassA.B
Call StaticClassA.B() from a method that has not been jited
JIT : 0x1072325 Program.TestDynamicReplace
JIT : 0x10c318 DynamicClass.C
StaticClassA.C
The practical uses of this code are limited. If you want to modify a library you are using, don't have access to the source code,
and don't want to decompile and recompile or use a hex editor, then this might be helpful to you. It might also be possible to create an AOP library that
modifies existing types at runtime instead of relying on wrappers or build-time approaches.
There are some limitations with this code. As we can see from the example code, once a method has been JIT'ed, it will no longer refer
to the method table address we are changing. Replacement should happen before a calling method gets JIT'ed. This has not been tested at all
on an x86-64 machine. Zapped or NGen-ed assemblies also do not work.
We need to keep in mind that we are directly manipulating the CLR memory in ways not intended. This code might not work with newer versions
of the .NET framework. This was tested with .NET 3.5 on a Vista x86, and might not work on your machine.
It might be cool to write a class that could detect the processor features and re-emit a more optimized version that takes advantage
of technologies such as SIMD. My knowledge of x86 assembler is a little limited. I will see if I can throw something together later though.
It looks like the memory layout changed in .NET 2.0 SP2 which I was forced to install when installing .NET 3.5 SP1. I updated the code to detect
the framework and act appropriately. Please let me know if there are any issues: ziadelmalki@hotmail.com.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Source: http://www.codeproject.com/Articles/37549/CLR-Injection-Runtime-Method-Replacer?msg=3743533
So I'll admit it, I'm more than acceptably late to this party. OJ started working on these problems years ago, while I was still coding away on other stuff at school. Now that I'm out of school and in need of a challenge, I'm doing the best I can to tackle the problems at Project Euler. Since I'm also trying to pick up Haskell, this makes for a good way to learn the language. This isn't my first time using a Functional Programming language. But the last time I tried, it was forced upon me at school with inadequate assistance and the whole experience left a bad taste in my mouth. I'm glad to be getting over the bad experience and learning something new at the same time.
I've started keeping a github repo with my solutions for the various problems. If anyone is interested in commenting on my code please do so. I welcome all constructive criticism.
To start this thing off; problem one reads,
“If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.”
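Besides the brute-force loops below, the problem also has a loop-free closed form via inclusion-exclusion and the arithmetic-series sum (a Python sketch, separate from the author's solutions):

```python
# Sum multiples of 3 plus multiples of 5, minus multiples of 15
# (counted twice), each via the arithmetic-series formula.
def sum_multiples_below(k, limit):
    n = (limit - 1) // k          # how many multiples of k are below limit
    return k * n * (n + 1) // 2   # k * (1 + 2 + ... + n)

def euler1(limit):
    return (sum_multiples_below(3, limit)
            + sum_multiples_below(5, limit)
            - sum_multiples_below(15, limit))

print(euler1(10))    # 23, matching the worked example
print(euler1(1000))  # 233168
```

This runs in constant time regardless of the limit, which matters for the larger Project Euler variants of the same idea.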
This one being simple, I coded it up in Haskell, Python, and Perl.
Haskell:
problem1  = sum $ filter (\x -> mod x 3 == 0 || mod x 5 == 0) [1..1000]
problem1' = sum $ filter (\x -> mod x 3 == 0 || mod x 5 == 0) [1..999]
#!/usr/bin/python

def threeorfive(n):
    if n % 3 == 0 or n % 5 == 0:
        return True
    else:
        return False

def main():
    first_list = range(1, 1000)
    second_list = filter(threeorfive, first_list)
    print "%s" % (sum(second_list))

if __name__ == "__main__":
    main()
#!/usr/bin/perl

my $count = 0;
my $total = 0;

while ($count < 1000) {
    if ($count % 3 == 0 or $count % 5 == 0) {
        $total += $count;
    }
    $count++;
}

print $total;
One thing that is nice about doing this in different languages is that you become aware of the differences between them. For instance, in Python the range function works a little differently than I expected. I was expecting an inclusive range function, one in which the 10 is included in the list. But that is not how Python's range works: range(1, 10) gives nine numbers, starting with 1 and ending with 9. It's not a big deal and easily fixable, but just not something I was expecting. That is why the Haskell code has two functions in it, just to verify that the Python code was correct.
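The exclusive-stop behaviour is easy to demonstrate:

```python
# range(start, stop) excludes stop: range(1, 10) yields 1..9
# (nine numbers, not ten). An "inclusive" range needs stop + 1.
nums = list(range(1, 10))
print(len(nums), nums[-1])          # 9 9

inclusive = list(range(1, 10 + 1))
print(inclusive[-1])                # 10
```

For this problem the exclusivity is actually convenient: range(1, 1000) already means "below 1000".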
I'll be posting more answers as I complete them. So expect to see some random posts with Euler solutions in them.
Source:
public class Add
{
    public static void main(String[] args)
    {
        int sum = 0;
        int num = 0;
        System.out.println("This Java program will add all the natural numbers below one thousand that are multiples of 3 or 5.");
        while (num < 999) // While loop keeps on running until num reaches 999
        {
            num++;
            if (num % 3 == 0 || num % 5 == 0) // If the number is a multiple of 3 or 5, it will be added to the current sum
            {
                sum = sum + num;
            }
        }
        System.out.println("Sum is: " + sum); // Printing the final output
    }
}
Helpful blog, bookmarked the website with hopes to read more!
say [+] (1..999).grep { $^n !% 3 || $^n !% 5 }
Good to hear you're playing with Haskell mate. It's a fab language to learn.
I personally prefer filtering during list generation using list comprehension rather than filtering post creation. Behind the scenes I'd say that they function the same thanks to Haskell's laziness though :)
solution = sum [ x | x <- [1..999],mod x 3 = 0 || mod x 5 = 0]
Keep solving and posting mate :)
OJ
solution = sum [ x | x <- [1..999],mod x 3 == 0 || mod x 5 == 0]
my $total;
for my $n (1..999) {
    $total += $n if $n % 3 == 0 or $n % 5 == 0;
}
or...
use List::Util qw(sum);
my $total = sum grep { $_ % 3 == 0 or $_ % 5 == 0 } 1..999;
use List::Util qw(sum);
my $total = sum grep $_ % 15 ~~ [0,3,5,6,9,10,12], 1..999;
I didn't know that Perl could do the 1..999 thing. Also, I'm going to have to look into that List::Util a bit as well. Thanks.
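The mod-15 trick above can be checked numerically (a quick Python sketch, not part of the original thread): divisibility by 3 or 5 depends only on the residue mod 15, and exactly seven residues qualify.

```python
# Which residues mod 15 are divisible by 3 or 5?
residues = sorted({r for r in range(15) if r % 3 == 0 or r % 5 == 0})
print(residues)  # [0, 3, 5, 6, 9, 10, 12]

# The residue test and the direct test agree on the full sum.
trick = sum(n for n in range(1, 1000) if n % 15 in residues)
brute = sum(n for n in range(1, 1000) if n % 3 == 0 or n % 5 == 0)
print(trick, brute)  # both 233168
```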
or oneliner...
perl -E 'for (1..9) { $s += $_ unless $_ % 5 && $_ % 3 }; say $s'
That one liner is amazing! I'm going to have to remember that one. Thanks for sharing.
If you made it this far down into the article, hopefully you liked it enough to share it with your friends. Thanks if you do, I appreciate it.
Source: http://scrollingtext.org/project-euler-problem-1
In today’s Programming Praxis exercise, our goal is to make a small library to compute the following properties of a number: the divisors, the sum of the divisors, the number of divisors, the totatives, and the totient. Let’s get started, shall we?
Some imports:
import Data.List
import Data.Numbers.Primes
import Data.Ratio
Calculating the divisors of a number could be done the compact but naive way (divisors n = filter ((== 0) . mod n) [1..n]), but let’s go with the slightly longer but more efficient version of multiplying every combination of prime factors.
divisors :: Integral a => a -> [a]
divisors = nub . sort . map product . subsequences . primeFactors
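The same subset-product idea translates directly to Python (a sketch with a simple trial-division factorizer, not the article's Haskell library):

```python
from itertools import combinations
from math import prod

# Trial-division prime factorization, with multiplicity.
def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Multiply every subset of the prime factors and dedupe,
# instead of trial-dividing by every number up to n.
def divisors(n):
    fs = prime_factors(n)
    subs = {prod(c) for r in range(len(fs) + 1) for c in combinations(fs, r)}
    return sorted(subs)

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```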
Getting the sum of divisors is trivial.
divisorSum :: Integral a => a -> a
divisorSum = sum . divisors
For the amount of divisors we’ll ignore the trivial method (length . divisors) and go with the more efficient algorithm.
divisorCount :: Integral a => a -> Int
divisorCount = product . map (succ . length) . group . primeFactors
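The count falls out of the factorization directly: for n = p1^a1 * p2^a2 * ..., the number of divisors is (a1 + 1)(a2 + 1)... Here is the same formula in Python (a sketch, reusing a trial-division factorizer):

```python
from collections import Counter
from math import prod

def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Count exponents, then multiply (exponent + 1) over distinct primes.
def divisor_count(n):
    return prod(a + 1 for a in Counter(prime_factors(n)).values())

print(divisor_count(60))  # 12  (60 = 2^2 * 3 * 5 -> 3 * 2 * 2)
```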
The totatives can be trivially calculated using trial division.
totatives :: Integral a => a -> [a]
totatives n = filter ((== 1) . gcd n) [1..n]
For the totient there is also a better way than the trivial one (length . totatives):
totient :: (Integral a, Integral b) => a -> b
totient n = floor . product $ n % 1 : [1 - 1 % f | f <- nub $ primeFactors n]
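A Python rendering of the same Euler product, kept exact with Fraction just as the Haskell version uses Ratio (sketch only):

```python
from fractions import Fraction
from math import gcd

# Distinct prime factors via trial division.
def distinct_prime_factors(n):
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

# phi(n) = n * product(1 - 1/p) over distinct primes p dividing n.
def totient(n):
    acc = Fraction(n)
    for p in distinct_prime_factors(n):
        acc *= 1 - Fraction(1, p)
    return int(acc)  # the product is mathematically exact

def totatives(n):
    return [k for k in range(1, n + 1) if gcd(n, k) == 1]

print(totient(30), len(totatives(30)))  # 8 8
print(totient(60))                      # 16
```

Using Fraction sidesteps the floating-point worries that the floor in the Haskell version otherwise invites.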
As usual, some tests to see if everything is working properly:
main :: IO ()
main = do
    print $ divisors 60 == [1,2,3,4,5,6,10,12,15,20,30,60]
    print $ divisorSum 60 == 168
    print $ divisorCount 60 == 12
    print $ totatives 30 == [1,7,11,13,17,19,23,29]
    print $ totient 30 == 8
    print $ totient 60 == 16
Looks like it is, and at five lines of code I’d call this a pretty small library 🙂 Fortunately, the non-trivial algorithms don’t add much in the way of length. Sure, the lines are a little longer, but they’re all still one-liners. I’d say it’s worth it for the improved speed.
Tags: bonsai, code, divisors, Haskell, kata, praxis, programming, totatives, totient
November 26, 2010 at 4:34 pm |
I am gradually learning Haskell by reading your solutions to my exercises. Your totient function confuses me. nub $ primeFactors n gives the list of distinct factors of n. I assume % is an operator that performs division on fractions (it’s not in my list of Haskell operators), so the result of the list comprehension, for n=60, is the list 1/2, 2/3, 4/5. I understand why you cons n onto the list to take the product, but don’t understand why you take n % 1; is that some kind of type coercion so n is the same type as the rest of the list? Can’t Haskell work out the types so the coercion can be done automatically? Then I don’t understand the floor function. Assuming all the arithmetic is done as fractions, the calculation is guaranteed to produce an exact integer (all the denominators are factors of n, so they all cancel), so why do you need floor? And one last question: why use the Integral type instead of the Fractional type (since you are working with fractions)? If you used the Fractional type, could Haskell work out the type coercion of n % 1 automatically?
Phil
November 26, 2010 at 5:14 pm |
> I assume % is an operator that performs division on fractions (it’s not in my list of Haskell operators)
Correct. % is found in Data.Ratio. It’s a way of doing mathematically exact division so you don’t have to worry about floating point inaccuracies.
> I understand why you cons n onto the list to take the product, but don’t understand why you take n % 1; is that some kind of type coercion so n is the same type as the rest of the list?
Yes. n has type Integral a => a, so I need to coerce it to Ratio a before I can cons it onto a [Ratio a].
> Can’t Haskell work out the types so the coercion can be done automatically?
Haskell can work out the types just fine. However, the only implicit coercion in Haskell is done on literals. You can do something like 2 : [1 % 2] and the 2 will automatically be converted to a Ratio. But n already has a type, so you will have to explicitly coerce it. The reasoning behind this is that implicit type coercion can lead to subtle bugs.
> Then I don’t understand the floor function. Assuming all the arithmetic is done as fractions, the calculation is guaranteed to produce an exact integer (all the denominators are factors of n, so they all cancel), so why do you need floor?
Technically, I don’t. Like you say, the result is mathematically exact. However, it makes more sense to return an Integer than to return a Ratio Integer, since the result will always be a whole number. floor is one way of doing so.
> And one last question: why use the Integral type instead of the Fractional type (since you are working with fractions)? If you used the Fractional type, could Haskell work out the type coercion of n % 1 automatically?
I’m using Integral for the arguments because the functions only work on whole numbers; there’s no such thing as the divisors of 2.4125. If you were to use Fractional then you wouldn’t need Ratios or coercion, but the function would not accept integers, which makes little sense. Besides, primeFactors returns a list of Integrals, so the conversion of n to a Ratio would be replaced by having to coerce the factors to Fractionals.
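For readers following along outside Haskell, the same totient computation — n · ∏(1 − 1/p) over the distinct prime factors, done with exact fractions so no floor is strictly needed — can be sketched in Python. This is an illustrative sketch, not Remco’s code; `Fraction` plays the role of `Data.Ratio`:

```python
from fractions import Fraction

def prime_factors(n):
    """Prime factors of n, with repetition, by simple trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def totient(n):
    """Euler's totient: n * product of (1 - 1/p) over distinct prime factors."""
    result = Fraction(n)  # analogous to the n % 1 coercion in the Haskell version
    for p in set(prime_factors(n)):
        result *= 1 - Fraction(1, p)
    return int(result)  # exact arithmetic, so int() plays the role of floor
```

For example, totient(60) → 16, in line with the 1/2 · 2/3 · 4/5 product discussed above.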
November 26, 2010 at 5:18 pm
That’s clear. Thank you.
Scheme’s numeric tower (integer, rational, real, complex) is simpler than Haskell’s, which I find confusing. Do you gain any real advantage from the added complexity?
November 26, 2010 at 8:23 pm
One benefit of Haskell’s more finely grained typeclasses is that when defining a new datatype you can make it an instance of only the ones for which your datatype actually has an implementation rather than having to do a whole bunch of foo x = undefined. There are probably more reasons, since there’s a numeric-prelude package that splits everything up into dozens of type classes, but I haven’t looked at the issue in enough detail to give you any. Personally though, I do find myself wishing for a simpler stack on occasion, which would cut down on some of the necessary coercion. Then again, I’d probably find myself bemoaning the lack of granularity in other situations if that were the case.
November 27, 2010 at 1:14 am
I wrote a set of one-liners equivalent to Remco’s version. You can run the program at. Follow the link to see all the prelude functions that are required; here is the meat of the library:
November 27, 2010 at 1:15 am
Remco: please fix my indentation. Sorry.
November 27, 2010 at 1:46 am
Fixed the indentation. A small tip: if you use square brackets instead of angle brackets for the code tag it will format your code as intended, even adding line numbers.
November 30, 2010 at 9:29 am
I don’t understand why you use prime factors of a number to determine it divisors ?(some math properties I’m missing here)
November 30, 2010 at 10:01 am
It’s an optimization. Instead of trying every number below n to see if it’s a divisor, which is rather slow, we use the following property: every divisor of n is either prime or can be obtained by multiplying two or more primes. These primes are necessarily also divisors of n. Therefore, by multiplying every possible combination of prime divisors of n, we obtain every divisor of n, possibly more than once. Once we have these products, we eliminate the duplicates and sort the list, producing the list of all divisors.
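That optimization can be sketched as follows — illustrative Python rather than the Haskell from the post, with prime factorization done by simple trial division:

```python
from functools import reduce
from itertools import combinations

def prime_factors(n):
    """Prime factors of n, with repetition, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def divisors(n):
    """All divisors of n: products of every sub-multiset of its prime factors."""
    ps = prime_factors(n)
    divs = set()  # the set eliminates duplicate products, e.g. 2*3 and 3*2
    for r in range(len(ps) + 1):
        for combo in combinations(ps, r):
            divs.add(reduce(lambda a, b: a * b, combo, 1))
    return sorted(divs)
```

For example, divisors(60) → [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60].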
November 30, 2010 at 11:43 am
Thank you for your explanation.
December 17, 2010 at 11:28 am
[…] We need a function to calculate the divisors of a number, which we recycle from a previous exercise: […]
https://bonsaicode.wordpress.com/2010/11/26/programming-praxis-divisors-and-totatives/
Copyright © 2009.

This part describes the structure and content of an EXI document and introduces the notions of EXI header, EXI body, and EXI grammar, which are fundamental to the understanding of the EXI format. Furthermore, additional details about data type representation, compression, and their interaction with other format features are presented. Finally, Section 3. Efficient XML Interchange by Example provides a detailed, bit-level description of both a schema-less and a schema-informed EXI stream.

2.2.1 Datatype Representations
2.2.2 String Table
2.3 Compression
3. Efficient XML Interchange by Example
3.1 Notation
3.2 Options
3.3 Encoding Example
3.4 Schema-less Decoding
A References
B Encoding Examples
The intended audience of this document includes users and developers of EXI with a basic understanding of XML and XML Schema. This document provides an informal description of the EXI format; the reader is referred to the [Efficient XML Interchange (EXI) Format 1.0] document for further details. Hereinafter, the presentation assumes that the reader is familiar with the basic concepts of XML and the way XML Schema can be used to describe and enforce constraints on XML documents.
The document is comprised of two major parts. The first part describes the structure and content of an EXI stream; the second part walks through concrete encoding examples. For compactness, processing efficiency, and flexibility, EXI represents the contents of an XML document as an EXI stream. As shown below, an EXI stream consists of an EXI header followed by an EXI body.
The EXI header conveys format version information and may also include the set of options that were used during encoding. If these options are omitted, a minimal header can be represented in a single byte. This keeps the overhead and complexity to a minimum and does not sacrifice compactness, especially for small documents, where a header can introduce a large constant factor.
The structure of an EXI header is depicted in the following figure.
The EXI header starts with an optional four-byte EXI Cookie. The four-byte field consists of the four characters "$", "E", "X" and "I" in that order, each represented as an ASCII octet; it can be used to distinguish an EXI stream from a broad range of data streams.
The EXI Cookie is followed by a pair of Distinguishing Bits. The two-bit sequence (1 0) can be used to distinguish an EXI stream from a textual XML document. The Distinguishing Bits are followed by the Format Version, which is encoded as a sequence of 4-bit values that are added together; a version field containing the values 15 and 2, for example, is interpreted as 15 + 2, or version 17.
The EXI Options specify how the body of an EXI stream is encoded and, as stated earlier, their presence is controlled by the presence bit earlier in the header. The overhead introduced by the EXI options is comparatively small given that they are formally described using an XML schema and are encoded using EXI as well. The following table describes the EXI options that can be specified in the EXI header. When the EXI Options document does not specify a value for a particular option, the default value is assumed. Fidelity options are really a family of options that control exactly which XML items are preserved. The EXI body consists of a sequence of EXI events representing XML information items, distinguished by structure and content. In EXI terminology, content denotes attribute and character values while all other information items are considered as belonging to the structure category.
For named XML items, such as elements and attributes, there are three types of events: SE(qname), SE(uri:*) and SE(*) as well as AT(qname), AT(uri:*) and AT(*). These events differ in their associated structure: when SE(qname) or AT(qname) are used, the actual qname of the XML item is not encoded as part of the event while SE(uri:*) and AT(uri:*) events do not encode the uri. The decision to use one type of event over the other will be explained later after introducing the notion of EXI grammars. Additionally, Fidelity Options may allow the preservation of namespace prefixes.
The fidelity options introduced in Section 2.1.1 EXI Header may be used to prune EXI events such as Namespace Declaration (NS), Comment (CM), Processing Instruction (PI), DOCTYPE (DT) or Entity Reference (ER). Grammar pruning simplifies the encoding and decoding process and also improves compactness by filtering out unused event types.
Consider a simple XML document from a notebook application:
<?xml version="1.0" encoding="UTF-8"?>
<notebook date="2007-09-12">
   <note category="EXI" date="2007-07-23">

The corresponding EXI body starts with a Start Document (SD) event and ends with an End Document (ED) event. The order in which attributes are encoded may be different in schema-less and schema-informed EXI streams, as is the exact content associated with each event.
The actual number of bits used to represent each type of event, excluding its content, differs depending on the context: the more event types that may occur at a given point in the stream, the more bits are needed to distinguish between them.
An event code is represented by a sequence of one to three parts, where each part is a non-negative integer. Event codes in an EXI grammar are assigned to productions in such a way that shorter event codes are used to represent productions that are more likely to occur. Conversely, longer event codes are used to represent productions that are less likely to occur. EXI grammars are designed in a way that exploits this skewed probability distribution. In the first table, where all entries are treated with equal probability, a 4-bit code is needed to represent each entry. In the second table, on the other hand, code lengths vary from 2 bits to 6 bits since productions are grouped by likelihood. Looking at real-world documents, it is easy to verify that attributes occur more frequently than processing instructions and should therefore receive shorter event codes.
Further improvements in grammar design are possible if schema information is available. In this case, we can not only take advantage of skewed probabilities but also of the element and attribute declarations in the schema: for the note element, for instance, the attributes AT(date) and AT(category) are known in advance, so their localNames need not be encoded. In strict mode, documents must be largely valid with respect to the schema; most deviations from the schema will result in an encoding error. In non-strict mode, any deviation from the schema can still be represented.

Strict Schema-Informed Grammar for SE(note)
Note that AT(category) is accepted before AT(date) even though their order is reversed in the schema. This is because attributes in schema-informed grammars must be sorted lexicographically.
Generally speaking, schema-informed grammars should be favored over built-in grammars. Schema knowledge characterizes constraints of the structure and content type of XML information items. Hence schema-informed grammars do not evolve while processing and value items such as characters and attribute values are encoded according to their types. As a consequence, processing speed increases and compaction is improved.
EXI uses built-in datatype representations to represent so called content value items in an efficient manner. In other words, all attribute and character values are encoded according to their EXI datatype representation associated with their XML Schema datatype. Type information can be retrieved from available schema information. The following table lists the mapping between [XML Schema Datatypes] and the Built-in Datatype Representations supported in EXI.
Note:
The built-in EXI datatype QName is used for structure coding, such as qualified names for XML elements and attributes.
Enumerated values are efficiently encoded as n-bit Unsigned Integers where n = ⌈log₂ m⌉ and m is the number of items in the enumerated type. The ordinal position (starting with position zero) of each value item in the enumeration in schema-order is used as the identifier. Exceptions are schema types derived by union, QName or Notation. The values of such types are processed by their respective built-in EXI datatype representations instead of being represented as enumerations.
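The ⌈log₂ m⌉ rule can be checked with a tiny sketch (illustrative, not taken from the specification):

```python
def enum_bits(m):
    """Bits for the ordinal of an m-item enumeration: n = ceil(log2(m)).

    (m - 1).bit_length() is an exact integer equivalent of ceil(log2(m))
    for m >= 1, which avoids floating-point log entirely.
    """
    return (m - 1).bit_length()
```

For example, enum_bits(6) → 3, matching the 3-bit compact identifiers over a 6-entry table mentioned later in this document.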
The interested reader is referred to the Efficient XML Interchange (EXI) Format 1.0 document, which describes in detail the encoding rules for representing built-in EXI datatypes. When the preserve.lexicalValues option is true, all values are represented as Strings. Some values that would have otherwise been designated to certain built-in EXI datatype representations are represented as Strings with restricted character sets. In the absence of external type information (no available schema information), all attribute and character values are typed as Strings.
String tables are used in memory-sensitive areas allowing a compact representation of repeated string values. Re-occurring string values are represented using an associated compact identifier rather than encoding the string literally again. When a string value is found in the string table (i.e. a string table hit) the value is encoded using a compact identifier. Only if a string value is not found in the associated table (i.e. a string table miss) is the string encoded as String and a new compact identifier is introduced.
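The hit/miss behaviour described above can be modeled in a few lines. This is a toy sketch with hypothetical names, not the actual EXI partition layout (which, as described below, also distinguishes local and global sections):

```python
class StringTablePartition:
    """Toy model of an EXI string-table partition."""

    def __init__(self):
        self.ids = {}  # string value -> compact identifier

    def encode(self, value):
        """Return ('id', n) on a string table hit, ('literal', value) on a miss."""
        if value in self.ids:               # hit: value was seen before
            return ('id', self.ids[value])
        self.ids[value] = len(self.ids)     # miss: assign the next compact id
        return ('literal', value)

part = StringTablePartition()
first = part.encode('EXI')    # miss: the string is encoded literally
second = part.encode('EXI')   # hit: encoded as compact identifier 0
```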
EXI puts the following four information items into string tables and partitions the string tables based on the context in which they occur.
uri
prefix
local-name
value
The table below shows EXI information items used in section 2.1.2 EXI Body and its relations to the string table partitions (e.g. a prefix information item is assigned to the Prefix partition while value information items are assigned to the Value partition).
Each EXI string table partition is optimized for more frequent use of either compact identifiers or string literals depending on the purpose of the partition. The URI and the Prefix partitions are optimized for frequent use of compact identifiers while the LocalName and the Value partitions are optimized for frequent use of string literals.
In the subsequent paragraphs, more details about the different partitions are given by making use of the previously introduced Notebook example. The Notebook XML Document example is repeated here to simplify algorithm illustrations.
<?xml version="1.0" encoding="UTF-8"?>
<notebook date="2007-09-12">
   <note category="EXI" date="2007-07-23">

The uri portions of qname content items are assigned to the URI partition, which starts out pre-populated with well-known entries (see figure below). When a schema is provided, there is an additional entry that is appended and the URI partition is also pre-populated with the name of each target namespace declared in the schema, plus namespace URIs allowed in wildcard terms and attribute wildcards.
Figure 2-4. Initial Entries in URI Partition
The local-name portion of qname content items and local-name content items are assigned to LocalName partitions. Respectively, prefix content items are assigned to Prefix partitions. Both partition types are initially pre-populated with likely entries (see figure below). These types of partitions are further differentiated according to the associated namespace URI. In our notebook example no prefixes are used and the default namespace URI ("" [empty-string]) is used. When a schema is provided, LocalName partitions are also pre-populated with the local-name of each attribute, element and type declared in the schema, sorted lexicographically. Further local-name entries are appended in the order they appear in the actual XML instance (no additional sorting is applied).
Figure 2-5. Initial Entries in Prefix and LocalName Partitions
The figure above shows in highlighted form the uri and local-name items used throughout the entire example document. For instance, the notebook sample assigns 7 local-name entries, such as notebook and date, to the empty URI namespace. Whenever local-name and/or uri information items occur again, the compact identifier is used instead.
The last string table partition type is the Value partition. A Value partition is initially empty and grows while processing an XML instance. The total number of value items or the maximum string length of value content items can be restricted to save memory on small devices (see valuePartitionCapacity and valueMaxLength EXI Options). Attribute and Character content-values of type String are assigned to this partition.
Figure 2-6. Final Entries in Value Partition
The figure above illustrates that value content items can be indexed from two different partitions, a 'global' partition and a 'local' partition. When a string value is found in neither the global nor the local value section, its string literal is encoded as String and the string value is added to both the associated local and the global string index.
In our example we assume that all value items are represented as String, as is the case in schema-less grammars or if the fidelity option Preserve.lexicalValues is true. When a string value is found in the local value section for a given element or attribute, EXI can use the local compact identifier; when it is found only in the global section, the global compact identifier is used instead. Hence, the second appearance results in a global value hit encoded as a 3 bit compact identifier since there are 6 entries in the global section (3 = ⌈log₂ 6⌉). Due to the different table sizes, a global compact identifier is generally less compact than a local compact identifier. Still, global value hits avoid the repeated encoding of string literals.
The number of bits needed to encode a compact identifier depends on the actual number of entries of the associated table at that time. Since all tables grow while parsing an XML instance, the number of bits needed to encode a compact identifier may grow as well. For further details, the reader is referred to the Efficient XML Interchange (EXI) Format 1.0 document.
EXI can use additional computational resources to achieve higher compaction. EXI compression leverages knowledge of XML to achieve higher compression efficiency than generic compression of an EXI stream. It multiplexes an EXI stream of heterogeneous data elements into channels of more homogeneous data elements that compress better together.
The mechanism used to combine homogeneous data is simple and flexible enough that it can be used in both schema-informed and schema-less EXI streams. Element and attribute values are grouped according to their qnames. Structure information such as event codes is also combined. To keep compression overhead at a minimum, smaller qname channels are combined while larger channels are compressed separately.
The figure below depicts a bit-packed EXI Body Stream where no EXI compression is in use. Grey buckets represent structure information and colored buckets are used for content information. The color is determined by the associated qname (e.g. date, category, subject, body).
Figure 2-7. EXI Body Stream
XML instances can thus be treated as a combination of structure and content information. The content information can be further divided in different sections according to the context (surrounding structure as indicated by a qname). EXI treats XML instances this way and uses these implied partitions, referred to as channels, to provide blocked input to a standard compression algorithm. This grouping of similar data increases compression efficiency.
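The multiplexing step — grouping content values into per-qname channels before handing them to a deflate compressor — can be sketched like this. The values are illustrative, and real EXI compression has additional rules for block sizes and small-channel grouping:

```python
import zlib
from collections import defaultdict

# Hypothetical (qname, value) content events in document order
events = [('date', b'2007-09-12'), ('category', b'EXI'),
          ('date', b'2007-07-23'), ('subject', b'shopping list')]

# Multiplex the heterogeneous event stream into homogeneous per-qname channels
channels = defaultdict(list)
for qname, value in events:
    channels[qname].append(value)

# Each channel of similar values is then deflated (the separator is illustrative)
compressed = {q: zlib.compress(b'\x00'.join(vs)) for q, vs in channels.items()}
```

Grouping similar values this way is what lets the deflate stage find more redundancy than it would in an interleaved stream.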
Note:
The pre-compression notebook example falls in the first category and is encoded as a single compressed deflate stream containing first the structure channel, followed by the qname channels in the order they appear in the document (date, category, subject, body). The reader is referred to the Efficient XML Interchange (EXI) Format 1.0 document for further details.
The additional task of applying EXI compression is justified by the fact that, in many use-cases, encoded files become over 100 times smaller than XML and up to 14 times more compact than gzipped XML. In addition, the use of computational resources to achieve higher compaction is in most cases less than that of conventional compression such as GZip (see [EXI Evaluation Note]).
This section walks through the EXI coding of the Notebook Example, explaining the concepts previously introduced in a step-by-step approach.
The table below shows the notation that is used in the description of EXI encoding in subsequent sections.
We do not make use of specific encoding options such as using compression or user-defined datatype representations to encode the body. The table below shows the fidelity options used throughout the presented example.
The fidelity options setting shown above prunes the corresponding productions from the grammars used to encode the EXI body. The sample XML document is transcoded, on the one hand, into an EXI document in the absence of any schemas using built-in grammars and, on the other hand, using schema-informed grammars according to the provided schema information (see Notebook XML Schema).
While coding an EXI Body stream a stack of EXI grammars is in use. A grammar consists of an ordered list of productions. Each production (e.g. SD DocContent 0) is made up of an EXI Event, a following grammar (except for End Element), and an associated Event Code.
The initial stack item is a Document or a Fragment Grammar, depending on whether we deal with an XML document or with an XML fragment, respectively. The grammar at the top of the stack represents the context and constitutes the possible productions or events of this context. In addition, Start Element (SE) events push a new grammar onto the top of the stack while End Element (EE) events pop the current grammar.
Each XML information set item is split up into its corresponding event, grammar, and encoding parts, accompanied by notes.
Decoding of an EXI Body Stream is straightforward and uses as its starting point a Document Grammar or a Fragment Grammar respectively.
The following steps describe how an EXI Body can be decoded:
Decode Event Code (according to grammar-rule context)
Decode event content (if present, see EXI Event Types)
Move forward in grammar according to current grammar rules
Return to step 1 if last event was not EndDocument (ED)
[Done]
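The four steps above can be sketched as a loop over a toy grammar. The structures here are hypothetical, not the spec's grammar notation; the point is the control flow: SE pushes a grammar onto the stack, EE pops one, and ED terminates:

```python
def decode_body(codes, grammars):
    """Walk a stream of event codes using a grammar stack (toy model)."""
    stack = [grammars['Document']]
    events = []
    for code in codes:
        kind, target = stack[-1][code]   # step 1: decode event code in context
        events.append(kind)              # step 2: content decoding omitted here
        if kind == 'SE':                 # step 3: SE pushes the element's grammar
            stack.append(grammars[target])
        elif kind == 'EE':
            stack.pop()                  # ...and EE pops back to the parent context
        if kind == 'ED':                 # step 4: stop after EndDocument
            break
    return events

# Hypothetical grammars: event code -> (event kind, grammar to push for SE)
toy = {
    'Document': {0: ('SD', None), 1: ('SE', 'Element'), 2: ('ED', None)},
    'Element':  {0: ('CH', None), 1: ('EE', None)},
}
```

Feeding it the code sequence 0, 1, 0, 1, 2 yields the event sequence SD, SE, CH, EE, ED.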
For the sake of simplicity, the subsequent step-by-step approach shows the decoding process of a questionnaire XML document without external information such as schemas. We expect an XML document and therefore use the Built-in Document Grammar as the initial grammar.
The resulting XML instance is shown below.
The WG has crafted a tutorial page EXI 1.0 Encoding Examples that explains the workings of the EXI format using simple example documents. At the time of this writing, the page only shows a schema-less EXI encoding example.
http://www.w3.org/TR/2009/WD-exi-primer-20091208/
Playing .mp3 sound issue in html5 (Chrome)
Hi!
I've used FlxSound to play sound effects in my game. In html5 target and Chrome browser some .mp3 files work correctly, but the others are not played at all.
In the following example (html5 and chrome) I hear bckMusicPlay but there is no explosionSound.
What could be wrong here?
import flixel.FlxState; import flixel.system.FlxSound; class SoundTest extends FlxState { public var bckMusicPlay:FlxSound; public var explosionSound:FlxSound; override public function create() { explosionSound = new FlxSound(); explosionSound.loadEmbedded('assets/sounds/explosion.mp3', true, true); explosionSound.play(); bckMusicPlay = new FlxSound(); bckMusicPlay.loadEmbedded('assets/music/background_play.mp3', true, true); bckMusicPlay.play(); } }
just a guess, but maybe the looped=true for explosion sound should be false? Do you really want to loop it? Not sure that would cause your problem, or not.
You can use FlxG.sound.play() and FlxG.sound.playMusic() to make things easier.
// if you just want to play it once, you don't need to define the var explosionSound. FlxG.sound.play('assets/sounds/explosion.mp3'); // not looped by default, autoDestroy by default // if you want to pause/stop/etc somewhere else in your class or project, you can assign the FlxSound object. bckMusicPlay = FlxG.sound.playMusic('assets/music/background_play.mp3'); // looped by default
I've had problems with some mp3 files if they had a certain bitrate. I've found that a Constant bitrate at 96kbps works. I use Audacity to convert them and use all Mono sounds to keep them smaller.
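Batch conversions can also be scripted; here is a sketch assuming ffmpeg is installed (paths and the helper name are illustrative, and the commenter above used Audacity instead):

```python
import subprocess
from pathlib import Path

def ogg_target(mp3_path):
    """Output path for a converted file: same name, .ogg extension."""
    return Path(mp3_path).with_suffix('.ogg')

def convert_sounds(folder):
    """Convert every .mp3 under folder to .ogg next to it (requires ffmpeg)."""
    for mp3 in sorted(Path(folder).rglob('*.mp3')):
        # -ac 1 downmixes to mono, as suggested above, to keep files small
        subprocess.run(['ffmpeg', '-y', '-i', str(mp3), '-ac', '1',
                        str(ogg_target(mp3))], check=True)
```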
I use ogg for my sounds unless I'm targeting flash, since flash must use mp3 and all other targets can use ogg. You can use Audacity to convert those, too. If you're targeting multiple platforms and use both ogg and mp3, you'll need to include/exclude them in project.xml accordingly. I do this:
<assets path="assets" rename="assets" exclude="*.mp3" unless="flash" /> <assets path="assets" rename="assets" exclude="*.ogg" if="flash" />
Hope it helps.
EDIT: I edited the code format
Thanks a lot! Conversion to .ogg worked!
http://forum.haxeflixel.com/topic/541/playing-mp3-sound-issue-in-html5-chrome
#!/bin/bash
###Qname: Quick basic batch renaming.

#Defaults
TGT=""
FIX=""
INC=""
DIR="$PWD"
SHOW="yes"
MODE="" #single, global

function help_me () {
cat << END
==========================================================================
USAGE:
  $(basename "$0") -g -t "STRING" -f "FIX"
  $(basename "$0") -s -f "FIX" -i "FILTER"

  --directory|-d)  Specify target directory. [PWD]
  --target|-t)     Specify a string to be replaced/removed. Don't use this
                   option when you're doing prefixing! Use -i for the
                   filtering instead. [NONE]
  --fix|-f)        Specify a string to replace the target string, or to
                   put as prefix to the filenames. [NONE]
  --include|-i)    Use filter. Useful when prefixing. Also useful if you
                   don't want to rename all files which contain STRING. [ALL]
  --single|-s)     Single replace MODE: only replace/remove the first
                   occurrence of target STRING.
  --global|-g)     Global replace MODE: replace/remove all instances of
                   target STRING.
  --force|-F)      Do not ask for confirmation.
  --help|-h)       Print this help.

NOTE: -s -g options are incompatible. If you specify more than one of
these, only the last one is respected. -s -f without -t does prefixing!
==========================================================================
END
}

function display_rename () {
    TGT="$1" ; FIX="$2" ; INC="$3" ; MODE="$4"
    #Note we need to escape "Regular Expression Operators" for awk,
    #with every character enclosed by [ ].
    case "$MODE" in
        single)
            ls -1 | grep -F "$TGT" | grep -F "$INC" | \
            awk -v tgt="$(echo "$TGT" | sed 's/./[&]/g')" -v fix="$FIX" \
            '{ printf("mv -- \"%s\" ",$0);sub(tgt,fix);printf("\"%s\"\n",$0) }'
            ;;
        global)
            ls -1 | grep -F "$TGT" | grep -F "$INC" | \
            awk -v tgt="$(echo "$TGT" | sed 's/./[&]/g')" -v fix="$FIX" \
            '{ printf("mv -- \"%s\" ",$0);gsub(tgt,fix);printf("\"%s\"\n",$0) }'
            ;;
    esac
}

function do_rename () {
    cd "$DIR" 2>/dev/null && echo -e "\x1b[0;32m""$PWD""\x1b[00;00m"
    if [ "$?" != "0" ] ; then
        echo '** Directory "'"$DIR"'" does not exist! **' ; echo ; exit 1
    fi
    if [ "$SHOW" == "yes" ] ; then
        echo '* Careful *'
        echo '-------------------------------------------------------------------------'
        display_rename "$1" "$2" "$3" "$4"
        echo '-------------------------------------------------------------------------'
        echo -n '* Rename Files? [y/n].. '
        read ANS
        case "$ANS" in
            y|Y) display_rename "$1" "$2" "$3" "$4" | sh
                 echo '* Files renamed.' ;;
            n|N) echo '* Nothing renamed.' ;;
            *)   echo '* Wrong answer!' ; exit 1 ;;
        esac
    elif [ "$SHOW" == "no" ] ; then
        display_rename "$1" "$2" "$3" "$4" | sh ; echo '* Files renamed!'
    fi
}

#Options
OPT_TEMP=$(getopt --longoptions directory:,target:,include:,fix:,single,global,force,help --options d:t:f:i:sgFh -- "$@")
eval set -- "$OPT_TEMP"
while : ; do
    case "$1" in
        --directory|-d) DIR="$2" ; shift 2 ;;
        --target|-t)    TGT="$2" ; shift 2 ;;
        --fix|-f)       FIX="$2" ; shift 2 ;;
        --include|-i)   INC="$2" ; shift 2 ;;
        --single|-s)    MODE="single" ; shift ;;
        --global|-g)    MODE="global" ; shift ;;
        --force|-F)     SHOW="no" ; shift ;;
        --help|-h)      help_me ; exit 0 ;;
        --) shift ; break ;;
        *)  echo 'Wrong option!' ; exit 1 ;;
    esac
done

#Do it
echo
case "$MODE" in
    single) do_rename "$TGT" "$FIX" "$INC" single ;;
    global) do_rename "$TGT" "$FIX" "$INC" global ;;
    *)      echo '** You must specify MODE, see -h for more info. **' ;;
esac
echo
What it does is very basic. It replaces a specific STRING in filenames from DIR with FIX: replace the first instance only or replace all. It also does prefixing. With interactive mode and force mode. The --help describes it rather clearly.
Example Output:
Qname -s -t '[ANBU-Frostii]' -f '[BSS]' -d ..
/media/A/Tokyo Magnitude 8.0
* Careful *
-------------------------------------------------------------------------
mv -- "[ANBU-Frostii]_Tokyo_Magnitude_8_-_01_-_[720p][E5C69941].mkv" "[BSS]_Tokyo_Magnitude_8_-_01_-_[720p][E5C69941].mkv"
mv -- "[ANBU-Frostii]_Tokyo_Magnitude_8_-_02_-_[720p][C527D655].mkv" "[BSS]_Tokyo_Magnitude_8_-_02_-_[720p][C527D655].mkv"
mv -- "[ANBU-Frostii]_Tokyo_Magnitude_8_-_03_-_[720p][74F4FEC9].mkv" "[BSS]_Tokyo_Magnitude_8_-_03_-_[720p][74F4FEC9].mkv"
mv -- "[ANBU-Frostii]_Tokyo_Magnitude_8_-_04_-_[720p][0DCD1E0D].mkv" "[BSS]_Tokyo_Magnitude_8_-_04_-_[720p][0DCD1E0D].mkv"
-------------------------------------------------------------------------
* Rename Files? [y/n].. n
* Nothing renamed.
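The core replace logic above — literal single or global substitution plus the prefixing trick — can be sketched in Python without the awk escaping step, since str.replace treats its arguments literally (a hypothetical helper, not a drop-in replacement for the script; directory listing and the actual mv are left out):

```python
def rename_plan(names, target, fix, include='', mode='global'):
    """Return (old, new) rename pairs, replacing the literal string `target`
    with `fix` -- no regex escaping needed, unlike the awk version.

    mode='single' replaces only the first occurrence (the -s option);
    mode='global' replaces every occurrence (the -g option).
    `include` filters names the way -i does.  As in the original script,
    an empty `target` with mode='single' does prefixing.
    """
    count = 1 if mode == 'single' else -1   # str.replace: -1 means "all"
    return [(name, name.replace(target, fix, count))
            for name in names
            if target in name and include in name]
```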
I believe there is something like this already out there, probably much more sophisticated. Please let me know.
BTW, the rename command on my machine seems to be a really basic version... I remember there is another better rename command out there? Please also let me know. --> Oh, found it! It's prename from debian based distros. prename is in AUR, and the GTK2 gprename is in community! Alternatively, you can get the prename script here.
Also, if somebody's gonna improve my code above, you're welcome! I'll be very interested
Thanks.
Last edited by lolilolicon (2009-08-17 15:35:37)
This silver ladybug at line 28...
Offline
MPyC - A simple pygtk util for changing the playing MPD song. Requires mpc, pygtk.
When started, shows a windows with only a text field. You start typing the name of an artist, album or song. The filtered songs from your currently playing mpd playlist will appear. There is autocompletion - I'm not sure if it even works (I didn't get it quite right), but if you press enter while typing, the first song in the filtered list will start playing. Ctrl+Q to quit.
It was from my early days with python, so I don't remember how the gtk.EntryCompletion works or what other hacks there are.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pygtk
import gtk
import commands as cmd
import sys

class MPDCompletion:
    def __init__(self):
        window = gtk.Window()
        window.set_title("MPyC")
        window.connect('destroy', lambda w: gtk.main_quit())
        #--- Accelerators
        accel_group = gtk.AccelGroup()
        window.add_accel_group(accel_group)
        accel_group.connect_group(ord('q'), gtk.gdk.CONTROL_MASK,
                gtk.ACCEL_LOCKED, lambda w, x, y, z: gtk.main_quit())
        #---
        vbox = gtk.VBox()
        label = gtk.Label('Type song name to search')
        vbox.pack_start(label)
        entry = gtk.Entry()
        vbox.pack_start(entry)
        window.add(vbox)
        completion = gtk.EntryCompletion()
        self.liststore = gtk.ListStore(str)
        self.liststore.set_column_types(str, str, str, str, str)
        playlist = self.mpd_get_playlist()
        for song in playlist:
            self.liststore.append(song)
        completion.set_model(self.liststore)
        completion.set_inline_selection(True)
        entry.set_completion(completion)
        completion.set_text_column(4)  # show the full song string
        completion.set_match_func(self.compare_func, None)
        completion.connect('match-selected', self.match_cb)
        entry.connect('activate', self.activate_cb)
        window.resize(500, 50)
        window.show_all()
        return

    def compare_func(self, completion, entrystr, iter, data=None):
        """ This function checks if a row from the liststore should be
        filtered. It searches for artist name, title and album """
        entrystr = entrystr.lower()
        model = completion.get_model()
        artist = model[iter][1]
        title = model[iter][2]
        album = model[iter][3]
        #~ print modelstr, "+", entrystr, ":", modelstr.lower().startswith(entrystr)
        artist_match = artist.lower().startswith(entrystr)
        title_match = title.lower().startswith(entrystr)
        album_match = album.lower().startswith(entrystr)
        if artist and title and album:
            return (artist_match or title_match or album_match)

    def match_cb(self, completion, model, iter):
        """ This callback is invoked when a matching entry has been selected """
        song_num = model[iter][0]
        full = model[iter][4]
        #~ print "Playing:", song_num, full
        self.mpd_play_song(song_num)
        gtk.main_quit()
        return

    def activate_cb(self, entry):
        """ This is activated when Enter is pressed in the entry without a
        selected match. We get a list of songs matching only the title, and
        play the first one """
        text = entry.get_text().lower()
        for row in self.liststore:
            if row[1].lower().startswith(text) or row[2].lower().startswith(text):
                self.mpd_play_song(row[0])
                gtk.main_quit()
                return
        return

    def mpd_parse_line(self, line):
        """ Break a line into a (song number, artist, title, album) tuple """
        line = line.strip()
        num, data = line.split(' ', 1)
        num = num.strip('()> ')
        tags = data.split('%')
        artist = self.mpd_filter_special_symbols(tags[0])
        title = self.mpd_filter_special_symbols(tags[1])
        album = self.mpd_filter_special_symbols(tags[2])
        full = artist + " - " + title + " (" + album + ")"
        return num, artist, title, album, full

    def mpd_filter_special_symbols(self, string):
        """ Filter out some special symbols I can't display. There are more,
        but I haven't found the codes yet.
        TODO: Un-ugly-fy this function. Is this even needed? I don't remember. """
        if string == "":
            return "Unknown"
        symbols = {}
        symbols["\xc3\x82"] = "A"  # Â
        symbols["\xc3\xa4"] = "a"  # ä
        symbols["\xc3\xb6"] = "o"  # ö
        symbols["\xc3\xbc"] = "u"  # ü
        symbols["\xc3\x9c"] = "U"  # Ü
        for sym in symbols.iterkeys():
            if sym in string:
                string = string.replace(sym, symbols[sym])
        return string

    def mpd_get_playlist(self):
        """ Returns a list of tuples, each tuple represents one song """
        playlist_cmd = 'mpc playlist --format [%artist%]%[%title%]%[%album%]'
        status, playlist_raw = cmd.getstatusoutput(playlist_cmd)
        if status == 256:
            sys.exit("MPD not running")
        playlist_lines = playlist_raw.split('\n')
        playlist = []
        for line in playlist_lines:
            playlist.append(self.mpd_parse_line(line))
        return playlist

    def mpd_play_song(self, num):
        """ Plays the song at the given position. """
        cmd.getoutput("mpc play " + str(num))

if __name__ == "__main__":
    mc = MPDCompletion()
    gtk.main()
Last edited by spupy (2009-08-17 22:49:06)
There are two types of people in this world - those who can count to 10 by using their fingers, and those who can count to 1023.
markp1989 wrote:
Looks neat, i like the indentation one thing i noticed is that if i do define "arch linux"
i get the error : grep: linux: No such file or directory
with the 1 i did, it would work if you quoted 2 words.
@markp1989 and Gen2ly: Here's my version of the script. It has a reduced pipeline (fewer programs piping output to each other - the main reason I did this), similar formatting to Gen2ly's script, and a code cleanup of various mistakes and pet peeves of mine. I have implemented the fold command in sed some time in the past as well, so I'm convinced I could reduce everything after html2text into a single sed command, but it's not worth the effort. There are some things you may want to change: the google URL is now google.com, so you could change it back to google.co.uk.
Last edited by fflarex (2009-08-20 04:33:15)
So… I'm the only one who used a DICT server for that?
@Barrucadu: I know there is a better way to do it, but I just couldn't stand to see a script with so many unnecessary pipes. I rarely need to look up words anyways.
/me thinks you have unnecessary pipes!
┌─[ 16:31 ][ blue:~ ]
└─> grepp define .bash_functions
# go to google for a definition
define() {
  local LNG=$(echo $LANG | cut -d '_' -f 1)
  local CHARSET=$(echo $LANG | cut -d '.' -f 2)
  lynx -accept_all_cookies -dump -hiddenlinks=ignore -nonumbers - /tmp/define
  if [ ! -s /tmp/define ]; then
    echo "No definition found."
    echo
  else
    echo -e "$(grep -v Search /tmp/define | sed "s/$1/\\\e[1;32m&\\\e[0m/g")"
    echo
  fi
  rm -f /tmp/define
}
I really don't think your solution looks any better than mine. For one thing, you haven't cut down on pipes at all; we both use 4 of them. You've also added a temporary file and invoked echo twice as much as necessary (plus I just hate the echo command in general).
In any case, Barrucadu's solution is both simpler and more elegant than any of ours. It could maybe use a bit of formatting for the output, though.
Last edited by fflarex (2009-08-19 22:01:35)
Looks neat, i like the indentation one thing i noticed is that if i do define "arch linux"
i get the error : grep: linux: No such file or directory
with the 1 i did, it would work if you quoted 2 words.
Yeah, this could probably be fixed by using "$@" but I haven't tried it.
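A quick illustration (my own sketch, not from the thread) of what is going on: without quoting, the shell word-splits "arch linux" into two arguments before the function ever runs, so a function that only looks at $1 sees just "arch". Using "$@" (or quoting at the call site) keeps the words together. The helper names below are hypothetical.

```shell
# Hypothetical helpers showing word splitting of function arguments.
count_args() { echo "$#"; }
first_only() { echo "$1"; }
all_words()  { echo "$@"; }

count_args arch linux     # 2 -- the shell splits the words first
count_args "arch linux"   # 1 -- quoting keeps them as one argument
first_only arch linux     # arch       (what a $1-only function sees)
all_words  arch linux     # arch linux (what "$@" preserves)
```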
... printf "Definitions of \x1b[1;32m%s\x1b[m on the Web:\n" "$1"
Got more to learn with sed looks like, does chomp it down nicely. Didn't know that -e could be defined multiple times, and still have no idea what '/^ / !s/.*/ &/' does. Looks like: beginning of line, do not replace, everything, &(???). I also like the idea of using escape sequences inside sed. Didn't know that was possible.
So… I'm the only one who used a DICT server for that?
Last time I used the dict server the definitions were pretty aged (at least the wordnet ones) and incomplete. I don't think it's been updated for quite a bit.
Brisbin, good to think of charset and lang, it definitely makes the script more portable.
This inspired me to fix my wordnet script. Wordnet is the Princeton database of words. Can be pretty slow at times, but good for more technical definitions of words (more like you see in common dictionaries). Added spell-suggestion (needs aspell and local language dictionary installed), and used fflarex's suggestion for coloring:
#!/bin/bash
# define - command line dictionary

DICTCHECK=$(echo "$1" | aspell -a | sed '1d' | wc -m)

# Usage if parameter isn't given
if [[ -z $1 ]]; then
  echo " define <word-to-lookup> - command line dictionary"
  exit 1
fi

# Suggestions if word is not in dictionary, otherwise define
if [[ $DICTCHECK -gt "3" ]]; then
  echo "$@" | aspell -a | sed '/Ispell/ c\\nNot in dictionary, possible alternatives:\n' | sed 's/^.*: //'
  exit 2
else
  links -dump"$1" | sed \
    '1,5d' | sed '$d' | sed '$G' | sed 's/^ //' | sed 's/^ / /' | sed \
    's/S\: //' | sed -e 's/*/\c[[1;37m*\c[[m/' -e 's/'"$1"'/\c[[4;37m'"$1"'\c[[m/'
fi
'/^ / !s/.*/ &/' will add 3 spaces to the beginning of any line which does not already begin with a space.
The first part is a pattern which means to only apply this command to lines which contain the pattern (similar to the line numbers you just used in the above script). The exclamation negates that, so that it applies to all lines which do not match the pattern. The ampersand is shorthand for "the pattern matched earlier in the command". So it replaces the 'line' with '3 spaces + line'.
It's not as complicated as it looks. You could learn all of sed's syntax in one weekend if you wanted. Perl is probably a better tool for manipulating text, but it would take a lot longer to learn so I've never bothered (although if I ever did, I've heard that sed knowledge transfers nicely to perl).
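A minimal demonstration (mine, not from the thread) of the command just described, run on two lines, one already indented and one not:

```shell
# Indent (by three spaces) every line that does NOT already start
# with a space; lines that start with a space are left alone.
printf ' already indented\nnot indented\n' | sed '/^ / !s/.*/   &/'
# output:
#  already indented
#    not indented
```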
Last edited by fflarex (2009-08-19 23:04:35)
Update all out of date dependencies of a given package
pacman -S $(comm -1 -2 <(pacman -Qqu) <(pactree -u PKGNAME | sort))
Last edited by Daenyth (2009-08-20 00:48:01)
How can I get the bash prompt to show month.day before the time? Time is \A after the first YELLOW:
PS1="\[$BLUE\]┌─\[$PURPLE\][ \[$YELLOW\]\A \[$PURPLE\]][ \[$LIGHTGREEN\]\w \[$PURPLE\]]\n\[$BLUE\]└─\[$YELLOW\]> \[$LIGHTGRAY\]"
Use $(date --format foo) ?
Ahh, beautiful, didn't think about that for .bashrc, just added $(date +%c), thanks fellas
I like to add `pom` next to `date` in my .bashrc (pacman -S bsd-games)
lol, what?
Hi everyone, I have made a twitter client in Ruby:.
It's very colorful.
I hope everyone likes it, as I find it very useful. I work from the Terminal (rxvt-unicode, to be exact) for hours on end, and posting to twitter is a nice break. I want a way to do it quickly, so I made this.
Any criticism/comments are appreciated, as always.
Here is a script which waits until the currently playing song in MPD has changed, then exits. It is useful for doing things like shutting off the computer after 4 more songs have finished, etc. It is the direct successor to mpd-on-change, but with a couple advantages: it doesn't constantly poll MPD, and it works correctly even if the same exact file is played twice in a row. I consider it pretty much complete, except for possible bug fixes (and also if I can figure out how to make it work without the named pipe voodoo). It requires some version of netcat (tested with nc, ncat, and gnu-netcat). I'm interested in hearing how it works for other people, especially those who connect to remote instances of MPD or use a password.
Example uses:
mpd-wait -n4; halt

env MPD_HOST=password@host MPD_PORT=port mpd-wait

while mpd-wait --songs=2; do
    custom-osd-script.sh
done
#!/bin/sh # Script to wait for the currently playing song in MPD to change. # Uses netcat to communicate directly with the MPD server. # Name of the script progname=$(basename "$0") # Find and save a unique file name for the temporary FIFO tmpfifo=/tmp/."$progname".$$ while [ -e "$tmpfifo" ]; do&2 exit 1 esac shift done # Determine netcat executable to use for exe in nc ncat netcat; do&2 # Tests that MPD is running printf 'ping\nclose\n' | $nc $mpd_host $mpd_port | grep '^OK MPD [0-9.]*' &>/dev/null \ || { printf '%s: daemon is not running\n' "$progname" >&2; exit 2; } # Tests that $nsongs is a non-negative integer test $nsongs -ge -1 2>/dev/null \ || { printf '%s: argument to '\'--songs\'' or '\'-n\'' must %s\n' "$progname" \ 'be a non-negative integer' >&2 exit 1; } # Gets MPD's internal ID for the current song, which is unique even for # identical files in the playlist getsongid() { printf '%s\nstatus\nclose\n' \ "$(test \"$mpd_pass\" && printf 'password %s' \"$mpd_pass\")" \ | $nc $mpd_host $mpd_port \ | sed -ne '/^state: stop$/ q' \ -e '/^songid: / s///p' } # ID for the song we're waiting for songid=$(getsongid) # Create temporary FIFO mknod --mode=600 "$tmpfifo" p # $count keeps track of how many songs have changed # This is the meat of the script, which keeps track of the current song and # whether or not it has changed. The very confusing voodoo with the named pipe # is to prevent netcat from hanging after the "idle" command has finished. # (Possible security risk - attacker could manipulate the daemon by piping to # the named pipe before the script does.) count=0 until [ $count -eq $nsongs ]; do while [ "$songid" = "$(getsongid)" ]; do printf '%s\nidle player\n' \ "$(test \"$mpd_pass\" && printf 'password %s' \"$mpd_pass\")" \ | cat - "$tmpfifo" \ | $nc $mpd_host $mpd_port \ | sed -n "/^changed.*/ s//close/w $tmpfifo" done count=$(($count + 1)) songid=$(getsongid) done # Remove temporary FIFO rm "$tmpfifo"
EDIT: Please note that if you just copy/paste this, the help message will not display as intended. This is because the tabs were converted to spaces by the forum. Just change the leading four spaces in the usage() function to tabs to fix it.
Last edited by fflarex (2009-08-23 04:31:32)
Here is a perl version of the define function. Still needs some clean up, but no pipes used. Should make brisbin happy. ;)
#!/usr/bin/perl
use strict;
use LWP::Simple qw/get $ua/;

my $content;                      #What is found at $url
my $url;                          #url to get
my $word2define=join (' ',@ARGV);
# my $definitions;
my @definitions;

my $TXTYLW="\e[0;33m"; # Yellow
my $TXTGRN="\e[0;32m"; # Green
my $TXTRST="\e[0m";    # Text Reset

sub define{
    $ua->agent("Mozilla/4.0");
    $url="";
    $content = get("$url");
    die "Cannot open $url " unless defined $content;
}

sub parse{
    @definitions = ($content =~ m#<li>(.*?<br>)#g);
    foreach (@definitions){
        s/<br>/\n\n/;
        s/^/$TXTGRN\*$TXTRST/;
        s/($word2define)/$TXTYLW\1$TXTRST/i;
        print ("$_");
    }
}

&define;
&parse;
so here's a really short one i use to convert < and > to &lt; and &gt; for use in an html document. typically, i'll be editing a webpage in vim so i can just:
:r ! text2html < /some/script.sh
to pull a script right into the <pre> blocks of whatever page i'm currently writing. you can also just run it on commandline; it's a simple STDIN -> STDOUT filter.
the script:
IFS="\n"
while read -r LINE; do
    sed 's/</\&lt;/g;s/>/\&gt;/g' <<< "$LINE"
done
unset IFS
/edit: sorry, found my own useless use of cat... and removed a pipe!
Last edited by brisbin33 (2009-08-25 20:13:25)
You should s/&/\&amp;/ before anything else also. Personally I'd rather use perl + a module to escape it.. there's too many cases that a sed would have to cover..
#!/bin/bash
#
# Description: A small script I created to speed up the configuration process of Archlinux. This script assumes
#              you already have configured your network adapters and rc.conf for basic settings so pacman will work.
# Last Updated: 8.10.09
# Author: brenix

#Colors
blue="\E[1;34m"
green="\E[1;32m"
red="\E[1;31m"
bold="\E[1;37m"
default="\E[0m"

#-------Check for root-------#
if [ $(whoami) != "root" ]; then
  echo -en "$red Error:$default you cannot perform this operation unless you are root."
  exit 1
fi

#------Edit nameservers------#
read -p "Edit Nameservers[y/n]? "
if [ "$REPLY" == "y" ]; then
  vi /etc/resolv.conf
  echo -en "$blue :: Completed :: $default\n"
fi

#---Verify the database is up to date before installing anything---#
pacman -Syy

#---Install yaourt/powerpill----#
read -p "Install yaourt/powerpill[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman --noconfirm -S abs aria2 subversion git python rxvt-unicode bash-completion #random stuff added here
  abs # Sync ABS Tree
  cd /tmp
  wget
  tar -zxvf yaourt.tar.gz
  cd yaourt
  makepkg -i --asroot
  vi /etc/yaourtrc
  pacman -S --noconfirm abs
  chmod -R 777 /var/abs # Dont ask... heh..
  echo -en "$blue You must add user ALL=NOPASSWD: /usr/bin/pacman and /usr/bin/pacdiffviewer to the sudoers" # Reminder..
  sleep 10
  visudo
  yaourt -S --noconfirm powerpill
  cd ~
  echo -en "$blue :: Completed :: $default\n"
fi

#----Install pkgd--------------#
read -p "Install pkgd[y/n]? "
if [ "$REPLY" == "y" ]; then
  yaourt -S --noconfirm pkgd
  vi /etc/pkgd.conf # Customization
  vi /etc/pacman.conf
  vi /etc/rc.conf # Add pkgd daemon to rc.conf
  echo -en "$blue :: Completed :: $default\n"
fi

#----Sort mirrors by their speed----#
read -p "Sort Mirrors by Speed[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman -S --noconfirm python
  cd /etc/pacman.d
  cp mirrorlist mirrorlist.backup
  rankmirrors -n 6 mirrorlist.backup > mirrorlist
  pacman -Syy
  cd ~
  echo -en "$blue :: Completed :: $default\n"
fi

#--------Update System-----------#
read -p "Update System[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman -Syu --noconfirm
  # echo -en "$bold Re-ranking mirrors... $default"
  # cp /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.backup
  # rankmirrors -n 6 /etc/pacman.d/mirrorlist.backup > /etc/pacman.d/mirrorlist
  echo -en "$blue :: Completed :: $default\n"
fi

#---------Add user---------------#
read -p "Add a new user[y/n]? "
if [ "$REPLY" == "y" ]; then
  echo -en "$green Enter username: $default "
  read USERNAME
  useradd -m -G audio,optical,storage,video,wheel,power,network -s /bin/bash $USERNAME # Add new user with common groups
  passwd $USERNAME
  echo -en "$blue :: Completed :: $default\n"
fi

#---------Install Alsa----------#
read -p "Install Alsa[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman -S --noconfirm alsa-utils
  alsamixer
  read -p "Was your sound card detected[y/n]? "
  if [ "$REPLY" == "y" ]; then
    echo -en "$bold Testing sound... $default"
    aplay /usr/share/sounds/alsa/Front_Center.wav # Test audio
    vi /etc/rc.conf # Add alsa to daemons line..
    echo -en "$blue :: Completed :: $default\n"
  else
    alsaconf
    alsamixer
    vi /etc/rc.conf # Add alsa to daemons line..
    echo -en "$blue :: Completed :: $default\n"
  fi
fi

#--------Install Xorg----------#
read -p "Install Xorg[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman -S --noconfirm libgl xorg xf86-input-evdev xf86-input-keyboard xf86-input-mouse ttf-ms-fonts ttf-dejavu ttf-bitstream-vera gtk2
  echo -en "$green Enter video (i.e. nvidia or xf86-video-intel): $default"
  read VIDEO
  yaourt -S --noconfirm $VIDEO
  Xorg -configure
  cp /root/xorg.conf.new /etc/X11/xorg.conf
  vi /etc/X11/xorg.conf
  cp /etc/skel/.xinitrc /home/$USERNAME/
  echo exec xterm >> /home/$USERNAME/.xinitrc
  chown $USERNAME:users /home/$USERNAME/.xinitrc
  chmod 755 /home/$USERNAME/.xinitrc
  echo -en "$blue :: Completed :: $default\n"
fi

#-------Install VIM------------#
read -p "Install VIM[y/n]? "
if [ "$REPLY" == "y" ]; then
  powerpill -S --noconfirm vim
  echo -en "$blue :: Completed :: $default\n"
fi

#-------Install MPD/NCMPCPP----#
read -p "Install mpd/ncmpcpp[y/n]? "
if [ "$REPLY" == "y" ]; then
  powerpill -S --noconfirm mpd ncmpcpp mpc
  gpasswd -a mpd users
  cp /etc/mpd.conf.example /etc/mpd.conf
  mkdir /var/lib/mpd/music
  sed -i 's|#music_directory.*$|music_directory "/var/lib/mpd/music"|1' /etc/mpd.conf
  touch /var/lib/mpd/db
  touch /var/lib/mpd/mpdstate
  touch /var/run/mpd/mpd.pid
  touch /var/log/mpd/mpd.log
  touch /var/log/mpd/mpd.error
  chown -R mpd:mpd /var/lib/mpd
  chown -R mpd:mpd /var/run/mpd
  chown -R mpd:mpd /var/log/mpd
  echo -en "$blue :: Completed :: $default\n"
fi

#------Install Xmonad---------#
read -p "Install xmonad[y/n]? "
if [ "$REPLY" == "y" ]; then
  chmod -R 777 /var/abs # Again, dont ask... :)
  yaourt -S --noconfirm haskell-x11-darcs
  yaourt -S --noconfirm xmonad-darcs
  yaourt -S --noconfirm xmonad-contrib-darcs
  pacman -S --noconfirm xmobar
  mkdir /home/$USERNAME/.xmonad
  chown $USERNAME:users /home/$USERNAME/.xmonad
  echo -en "$blue :: Completed :: $default\n"
fi

#-------Install autofs--------#
read -p "Install autofs[y/n]? "
if [ "$REPLY" == "y" ]; then
  pacman -S --noconfirm autofs
  blkid
  sleep 7
  vi /etc/autofs/auto.misc
  vi /etc/autofs/auto.master
  echo -en "$blue Add autofs4 to modules and autofs to daemons $default"
  sleep 5
  vi /etc/rc.conf
  echo -en "$blue :: Completed :: $default\n"
fi

#--------Reboot System---------#
read -p "Reboot system (recommended) [y/n]? "
if [ "$REPLY" == "y" ]; then
  reboot
else
  exit
fi
(...)
FYI: … _Procedure
< Daenyth> and he works prolifically
4 8 15 16 23 42
https://bbs.archlinux.org/viewtopic.php?pid=604637
What’s An Interface?
I mentioned in the first post of this series that I'll likely be referring to C# in most of these posts. I think the concept of an interface in C# extends to other languages–sometimes by a different name–so the discussion here may still be applicable. Here are some examples in C++, Java, and Python to get you going for comparisons.
An interface contains definitions for a group of related functionalities that a class or a struct can implement.
By using interfaces, you can, for example, include behavior from multiple sources in a class. That capability is important in C# because the language doesn’t support multiple inheritance of classes. In addition, you must use an interface if you want to simulate inheritance for structs, because they can’t actually inherit from another struct or class.
It’s also important to note that an interface decouples the definition of something from its implementation. Decoupled code is, in general, something that programmers are always after. If we refer back to the points I defined for what makes good code (again, in my opinion), we can see how interfaces should help with that.
- Extensibility: Referring to interfaces in code instead of concrete classes allows a developer to swap out the implementation easier (i.e. extend support for different data providers in your data layer). They provide a specification to be met should a developer want to extend the code base with new concrete implementations.
- Maintainability: Interfaces make refactoring an easier job (when the interface signature doesn’t have to change). A developer can get the flexibility of modifying the implementation that already exists or creating a new one provided that it meets the interface.
- Testability: Referring to interfaces in code instead of concrete classes allows mocking frameworks to leverage mocked objects so that true unit tests are easier to write.
- Readability: I’m neutral on this. I don’t think interfaces are overly helpful for making code more readable, but I don’t think they inherently make code harder to read.
I’m only trying to focus on some of the pro’s here, and we’ll use this sub-series to explore if these hold true across the board. So… should every class have a backing interface?
An Example
Let’s walk through a little example. In this example, we’ll look at an object that “does stuff”, but it requires something that can do a string lookup to “do stuff” with. We’ll look at how using an interface can make this type of code extensible!
First, here is our interface that we’ll use for looking up strings:
public interface IStringLookup
{
    string GetString(string name);
}
And here is our first implementation of something that can lookup strings for us. It’ll just lookup an XML node and pull a value from it. (How it actually does this stuff isn’t really important for the example, which is why I’m glossing over it):
public sealed class XmlStringLookup : IStringLookup
{
    private readonly XmlDocument _xmlDocument;

    public XmlStringLookup(XmlDocument xmlDocument)
    {
        _xmlDocument = xmlDocument;
    }

    public string GetString(string name)
    {
        return _xmlDocument
            .GetElementsByTagName(name)
            .Cast<XmlElement>()
            .First()
            .Value;
    }
}
This will be used to plug into the rest of the code:
private static int Main(string[] args)
{
    var obj = CreateObj();
    var stringLookup = CreateStringLookup();

    obj.DoStuff(stringLookup);
    return 0;
}

private static IMyObject CreateObj()
{
    return new MyObject();
}

private static IStringLookup CreateStringLookup()
{
    return new XmlStringLookup(new XmlDocument());
}

public interface IMyObject
{
    void DoStuff(IStringLookup stringLookup);
}

public class MyObject : IMyObject
{
    public void DoStuff(IStringLookup stringLookup)
    {
        var theFancyString = stringLookup.GetString("FancyString");
        // TODO: do stuff with this string
    }
}
In the code snippet above, you’ll see our Main() method creating an instance of “MyObject” which is the thing that’s going to “DoStuff” with our XML string lookup. The important thing to note is that the DoStuff method takes in the interface IStringLookup that our XML class implements.
Now, XML string lookups are great, but let’s show why interfaces make this code extensible. Let’s swap out an XML lookup for an overly simplified CSV string lookup! Here’s the implementation:
public sealed class CsvStringLookup : IStringLookup
{
    private readonly StreamReader _reader;

    public CsvStringLookup(StreamReader reader)
    {
        _reader = reader;
    }

    public string GetString(string name)
    {
        string line;
        while ((line = _reader.ReadLine()) != null)
        {
            var split = line.Split(',');
            if (split[0] != name)
            {
                continue;
            }
            return split[1];
        }
        throw new InvalidOperationException("Not found.");
    }
}
Now to leverage this class, we only need to modify ONE line of code from the original posting! Just modify CreateStringLookup() to be:
private static IStringLookup CreateStringLookup()
{
    return new CsvStringLookup(new StreamReader(File.OpenRead(@"pathtosomefile.txt")));
}
And voila! We've been able to extend our code to use a COMPLETELY different implementation of a string lookup with essentially no code change. You could make the argument that if you needed to modify the implementation of a buggy class, then as long as you were adhering to the interface you wouldn't need to modify much surrounding code (just like this example). This would be a point towards improved maintainability in code.
“But wait!” you shout, “I could have done the EXACT same thing with an abstract class instead of the IStringLookup interface you big dummy! Interfaces are garbage!”
And you wouldn’t be wrong about the abstract class part! It’s totally true that IStringLookup could instead have been an abstract class like StringLookupBase (or something…) and the benefits would still apply! That’s a really interesting point, so let’s keep that in mind as we continue on through out this whole series. The little lesson here? It’s not the interface that gives us this bonus, it’s the API boundary and level of abstraction we introduced (something that does string lookups). Both an interface and abstract class happen to help us a lot here.
http://devleader.ca/tag/best-practices-2/
If you use React you probably know about the so-called hooks. They were officially announced at this year's ReactConf by Sophie Alpert and Dan Abramov. Their presentation could be seen here. I, same as many others got intrigued by this new feature. A little bit confused by if I like them or not but kind of excited. This article pretty much sums up my thinkings around React hooks and aims to give a balanced opinion.
Keep in mind that the hooks just got released and they are (maybe) subject to change. Being an experimental feature, the React team suggests checking the official documentation at and monitoring the RFC.
What.
The obvious advantages
I think we will all agree that the idea is not bad. In fact if we use hooks we write less code and our code reads better. We write only functions and not classes. There is no usage of the keyword this and there are no weird bindings in the constructor. The React component is written in a declarative fashion with almost no branches and it becomes easier to follow. Consider the example below:
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.onButtonClicked = this.onButtonClicked.bind(this);
  }
  onButtonClicked() {
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    const { count } = this.state;
    return (
      <div>
        <p>You clicked {count} times</p>
        <button onClick={this.onButtonClicked}>
          Click me
        </button>
      </div>
    );
  }
}
It is the equivalent of the same Counter function above. We have three methods so more or less we have to jump through all of them to fully understand what is going on. this.onButtonClicked.bind(this) seems weird but we have to do it because we can't leave .bind in the render method for performance reasons. Overall we have some sort of boilerplate that we have to deal with when writing React components using classes. Let's have a look again at the same Counter but written with hooks:
function Counter() {
  const initialValue = 0;
  const [ count, setCount ] = useState(initialValue);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>
        Click me
      </button>
    </div>
  );
}
Much simpler with less code we achieve the same thing. But more importantly for me are two things - the component becomes easier to read and the stateful logic becomes easier to share. Let's imagine that I want to use the same counter logic but with a different representation. If we decide to use classes we will probably go with the function as children pattern like so:
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.onButtonClicked = this.onButtonClicked.bind(this);
  }
  onButtonClicked() {
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    const { count } = this.state;
    const { children } = this.props;

    return children({ count, increase: this.onButtonClicked });
  }
}
And then use the same Counter component many times:
function AnotherCounter() {
  return (
    <Counter>
      {
        ({ count, increase }) => (
          <div>
            <p>You clicked {count} times</p>
            <button onClick={increase}>
              Click me
            </button>
          </div>
        )
      }
    </Counter>
  )
}
That's fine but now we have one more layer in our components tree and sooner or later we will end up with the wrapper hell situation. I do like the function as children pattern but it always looked a little bit off. Passing an expression as a child is not the most natural thing. On the other hand using a simple JavaScript function feels pretty normal.
function useCounter(initialValue) {
  const [value, setCount] = useState(initialValue)

  return {
    value,
    increase: () => setCount(value + 1),
  }
}

export default function CounterA() {
  const counter = useCounter(0)

  return (
    <div>
      <p>You clicked {counter.value} times</p>
      <button onClick={counter.increase}>Click me</button>
    </div>
  )
}
With hooks it is possible to extract stateful logic to a simple JavaScript function which is just a composition of the basic hooks like useState and useEffect.
Concerns
So far we saw how beneficial the hooks are. However, I'm a little bit reserved about this feature, the same way as I was for the higher-order components and function-as-children patterns. I didn't quite like them, but just a couple of days later I started using them all over my code. I bet it will be the same with the hooks. Till then I will question this approach of writing React and will try to make a fair judgement for myself.
The first thing which bothers me is changing the mindset for the functional React components. We used to think about them as dumb, short, stateless functions that only render stuff. Of course we can still have them like that, but if the hooks become the new way of writing React we can't continue saying "If it is a function it has no state and it is purely a rendering thing". Especially when using the useEffect hook, where we pass a function and that function will probably do an async task. This means that the React component defined as a function is alive even after it returns a result. For example:
function FriendStatus(props) {
  const [isOnline, setIsOnline] = useState(null);

  useEffect(function onRender() {
    ChatAPI.subscribeToFriendStatus(
      props.friend.id,
      status => setIsOnline(status.isOnline)
    );
  });

  if (isOnline === null) {
    return 'Loading...';
  }
  return isOnline ? 'Online' : 'Offline';
}
Notice how useEffect receives a function onRender which gets executed at some point in the future. We used to think that such React components are executed, they return something and that's it. And I think the confusing part is that useEffect handles logic which is not in sync with the rendering cycles of React. I mean it is not like we give data and that data is rendered. We trigger a process that happens in parallel with the rendering. Also we don't want to have our onRender fired every time when FriendStatus is rendered. There is an API to handle such cases - we may pass an array of variables as a second argument of useEffect which acts as a list of dependencies.
useEffect(function componentDidMount() {
  ChatAPI.subscribeToFriendStatus(
    props.friend.id,
    status => setIsOnline(status.isOnline)
  );
}, [numberOfFriends]);
Let's say that in this example we subscribe to a friend's status and that subscription depends for some reason on the number of the friends. We can just pass that number as a second argument and React will skip the effect if it is the same during the next render. So, to wrap up my point here I will say that we have to change our mindset for the dumb components because they may be not so dumb anymore.
So far for me React was a no-magic library. I didn't dig into the code but there was no API which made me think "How did they do it?". When I saw useState for the first time, that was the first question which popped up in my head. I kind of felt the same way when I saw the Angular 2 dependency injection. Even though Dan explained that there is no real magic behind this feature, it feels magical from the outside. I'm not saying that this is a bad thing. It is just something which I didn't see in React before. There are certain rules that we have to follow to have the hooks working properly. For example, we have to define the hooks at the top of the function and avoid placing them in conditional or looping logic, which I think is anyway going to happen and makes total sense. It is not like we don't have similar rules even now, but this is a bit different.
Conclusion
As I said in the beginning of the article, the hooks in React are experimental feature and they are still a proposal. You shouldn't rewrite your apps using hooks because their API may change. My thinking is that the hooks are a step in the right direction. However, they require some kind of a mindset shift in order to be adopted. That is because they are not just a pattern but a new paradigm that can significantly change how we build React apps. New ways of composition and new ways of share logic.
Resources
If you want to start using hooks and wondering where to start from I'll suggest first to watch this presentation and then read the official docs. Once you try how everything works you will probably want to read Making Sense of React Hooks article and also check the Kent C. Dodds video tutorials here.
http://outset.ws/blog/article/react-hooks-changing-the-mindset
Hi,
I am calling a function that dynamically allocates memory for some arrays using new, and then fills the arrays with values. It is my understanding that once the function exits, the memory and values should persist and that I should be able to access the values in other parts of my program. However, after the function exits, when I try to access the arrays, I get the message 'Access violation at address...'. Can anyone give me some advice on why this is happening and how I can fix it? Thanks a lot.
I am using Borland C++ Builder 5.0 on Windows XP pro.
Here's some sample code:
#include "header.h"

int main()
{
ifstream infile("filename");
float *data, buff;
int size=0;
size = get_data_size(&infile);
fill_data_array(&infile, data, size);
buff = data[0]; //this is where access violation occurs
}
//function is defined in header.h
void fill_data_array(ifstream *infile, float *data, int size)
{
data = new float[size];
for(int i=0; i<size; i++)
{
... //do stuff with infile to fill data
}
}
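For what it's worth, the symptom above is consistent with the pointer being passed by value: fill_data_array receives a copy of the caller's data pointer, so the address produced by new is stored only in that local copy and is lost when the function returns, leaving the caller's data uninitialized. A minimal, self-contained sketch of one possible fix (passing the pointer by reference), with the file I/O replaced by hypothetical stand-in values:

```cpp
#include <cassert>

// Hypothetical reworking of fill_data_array: taking the pointer by
// reference (float *&) means the address produced by `new` is written
// into the caller's variable, not into a local copy.
void fill_data_array(float *&data, int size)
{
    data = new float[size];
    for (int i = 0; i < size; i++)
        data[i] = i * 0.5f;   // stand-in for reading values from the file
}
```

Calling fill_data_array(data, size) from main would then leave data pointing at valid storage, so data[0] no longer faults. Returning the pointer from the function, or using a std::vector<float>, are equally valid alternatives. Remember to release the memory with delete[] data when done.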
The list of changes is extensive.
However, the essential API and the markup language for creating literate programs hasn't (significantly) changed. A few experimental features were replaced with a first-class implementation.
The interesting (to me) bit is this sequence of events.
I started out using Leo and Interscript as literate programming tools. They worked, but they were large and clunky, and I wasn't happy.
I wrote my own too, not really getting the use cases.
I found pyLit and liked it a lot. For a long time, I liked it better than my own pyWeb tool.
Then I ran across some problem domains for which pyLit didn't work out well. It's not that I've abandoned pyLit, but I believe I'll focus more on pyWeb.
The Awkward Problem Domains
Here are the two awkward problem domains.
- Historical Story Lines. In some cases, we want to describe a module or package based on the path of exploration. Rather than simply drop the design, we want to show the path followed which lead to the design. This can be helpful for certain kinds of pedagogical exercises where we're steering the reader through a process.
- Complex Packages that Don't Follow Python's Presentation Order. In some cases, we need to present things out of order. Python constrains us to have docstring and imports first. Our class definitions must proceed in "dependency" order. But this may not be the best order for explanation. Sometimes, we want to start with the "def main():" function first to explain why a class looks the way it does.
PyWeb handles these nicely. One of the handiest things is this for out-of-order presentation.
@d Some Class... @{
class TheClass:
this class uses the following imports
@}
@d Imports...@{
import this
import that
@}
We can then scatter imports through the documentation in the relevant places. And they follow the more interesting material.
When it comes to final assembly, we have this.
@o some_module.py
@{
@<Imports for this module@>
@<Some Class that does the real work of this module@>
@}
This builds the module, tangling the imports into one cluster up front, and putting the class definition later.
Discovering the High Resolution Time API
In today’s world, performance really matters. Developers need to be able to accurately measure the performance of their software. For those who work on the web, the W3C has a brand new API for reliably keeping time. That API is the High Resolution Time API. This article will explore the High Resolution Time API, and show how to use it.
To measure a function’s performance, web developers used to work with the JavaScript
Date.now() method. Typically, time-keeping code looks something like this:
var startTime = Date.now();

// A time consuming function
foo();
var test1 = Date.now();

// Another time consuming function
bar();
var test2 = Date.now();

// Print results
console.debug("Test1 time: " + (test1 - startTime));
console.debug("Test2 time: " + (test2 - test1));
The method
Date.now() retrieves the current timestamp, based on the system time. Unfortunately, its precision varies between user agents, so it isn’t very reliable. To mitigate this problem, the W3C standardized the High Resolution Time API. The API is described as “
a JavaScript interface that provides the current time in sub-millisecond resolution and such that it is not subject to system clock skew or adjustments.” On October 23rd 2012, the specification became a W3C Proposed Recommendation – the final step before becoming a Recommendation. On December 17th they became a W3C Recommendation (updated December 17th)
How the High Resolution Time API Works
I must admit, this is the simplest API I have ever read, as it only consists of a single method. The API extends the
Performance interface, which is also used by the Navigation Timing API. If you’ve never heard of it, take a look at Navigation Timing API: How to Profile Page Loads Efficiently.
The only method exposed is
now(), which returns a
DOMHighResTimeStamp representing the current time in milliseconds. The timestamp is very accurate, with precision to a thousandth of a millisecond. Please note that while
Date.now() returns the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC,
performance.now() returns the number of milliseconds, with microseconds in the fractional part, from
performance.timing.navigationStart(), the start of navigation of the document, to the
performance.now() call. Another important difference between
Date.now() and
performance.now() is that the latter is monotonically increasing, so the difference between two calls will never be negative.
Maybe you’re wondering how the High Resolution Time API will change your code. The good news is that it won’t change anything. All you need to do is
Date.now() with
performance.now() to increase the accuracy of your measurements. Taking this into account, the previous code would be rewritten as shown below.
var startTime = performance.now();

// A time consuming function
foo();
var test1 = performance.now();

// Another time consuming function
bar();
var test2 = performance.now();

// Print more accurate results
console.debug("Test1 time: " + (test1 - startTime));
console.debug("Test2 time: " + (test2 - test1));
Compatibility
Currently, very few browsers support the High Resolution Time API. The only desktop browsers that support the API are Internet Explorer 10, Firefox 15+ without prefix, and Chrome from version 20 with its “webkit” prefix (
performance.webkitNow()). It seems that Chrome will begin using the unprefixed version starting in version 24. At time of writing, no mobile browsers support this API.
Since the support isn’t wide, the first thing you need is a function to test for browser support and if it is prefixed or not. The following function will return an empty string if the browser uses the unprefixed version of the API. If a prefixed version is used, then the prefix is returned. If the API is not supported,
null is returned.

function getPrefix() {
   var prefix = null;
   if (window.performance !== undefined) {
      if (window.performance.now !== undefined) {
         prefix = "";
      } else {
         // Test the vendor-prefixed variants of performance.now()
         var browserPrefixes = ["webkit", "moz", "ms", "o"];
         for (var i = 0; i < browserPrefixes.length; i++) {
            if (window.performance[browserPrefixes[i] + "Now"] !== undefined) {
               prefix = browserPrefixes[i];
               break;
            }
         }
      }
   }
   return prefix;
}
For browsers that do not support the API, a shim is available.
The shim’s author, Tony Gentilcore, is one of the API’s contributors.
In his post, entitled “A better timer for JavaScript,” Gentilcore wrote code that searches for native support first, and uses the
Date.getTime() method as a fallback. The code is shown below.
window.performance = window.performance || {};
performance.now = (function() {
   return performance.now       ||
          performance.mozNow    ||
          performance.msNow     ||
          performance.oNow      ||
          performance.webkitNow ||
          function() { return new Date().getTime(); };
})();
Putting it all Together
This section will guide you through a simple demonstration page. The demo tests for browser support first, and then uses a function called
doBenchmark that relies on two dummy functions to do a benchmark using the
performance.now() method. Please note that I introduced a
getTime() function that isn’t related to the API. Its only purpose is to avoid useless repetitions and to have cleaner code. The source code of the demo is shown below.
<!DOCTYPE html>
<html>
<head>
<title>High Resolution Time API Test Page</title>
<script>
function foo() {
   for (var i = 0; i < 10000000; i++);
}
function bar() {
   for (var i = 0; i < 100000000; i++);
}
function getPrefix() {
   var prefix = null;
   if (window.performance !== undefined) {
      if (window.performance.now !== undefined) {
         prefix = "";
      } else {
         var browserPrefixes = ["webkit", "moz", "ms", "o"];
         for (var i = 0; i < browserPrefixes.length; i++) {
            if (window.performance[browserPrefixes[i] + "Now"] !== undefined) {
               prefix = browserPrefixes[i];
               break;
            }
         }
      }
   }
   return prefix;
}
var prefix = getPrefix();
function getTime() {
   return (prefix === "") ? window.performance.now() : window.performance[prefix + "Now"]();
}
function doBenchmark() {
   if (prefix === null) {
      document.getElementById("log").innerHTML = "Your browser does not support the High Resolution Time API";
      return;
   }
   var startTime = getTime();
   foo();
   var test1 = getTime();
   bar();
   var test2 = getTime();
   document.getElementById("log").innerHTML =
      "Test1 time: " + (test1 - startTime) + "<br />" +
      "Test2 time: " + (test2 - test1);
}
</script>
</head>
<body onload="doBenchmark()">
   <p id="log"></p>
</body>
</html>
Conclusion
Throughout this article I showed what the High Resolution Time API is, and how you can use it. As I mentioned, it isn’t widely supported yet, so to accurately test your web applications, you still have a while to wait. However, as you’ve seen, the API is very simple since it consists of a single method. So, once browser support improves, migrating to high resolution time will be quick and painless.
CloudWatch metrics for your Classic Load Balancer
Elastic Load Balancing publishes data points to Amazon CloudWatch for your load balancers and your back-end instances. CloudWatch enables you to retrieve statistics about those data points as an ordered set of time-series data, known as metrics. Think of a metric as a variable to monitor, and the data points as the values of that variable over time. For example, you can monitor the total number of healthy EC2 instances for a load balancer.
For more information about Amazon CloudWatch, see the Amazon CloudWatch User Guide.
Contents
Classic Load Balancer metrics
The
AWS/ELB namespace includes the following metrics.
The following metrics enable you to estimate your costs if you migrate a Classic Load Balancer to an Application Load Balancer. These metrics are intended for informational use only, not for use with CloudWatch alarms. Note that if your Classic Load Balancer has multiple listeners, these metrics are aggregated across the listeners.
These estimates are based on a load balancer with one default rule and a certificate that is 2K in size. If you use a certificate that is 4K or
greater in size, we recommend that you estimate your costs as follows: create an Application Load Balancer based on your Classic Load Balancer using the migration tool and monitor the
ConsumedLCUs metric for the Application Load Balancer. For more information, see Migrate from a Classic Load Balancer to an Application Load Balancer in the Elastic Load Balancing User Guide.
Metric dimensions for Classic Load Balancers
To filter the metrics for your Classic Load Balancer, use the following dimensions.
Statistics for Classic Load Balancer metrics
CloudWatch provides statistics based on the metric data points published by Elastic Load Balancing. Statistics are metric data aggregations over a specified period of time. When you request statistics, the returned data stream is identified by the metric name and dimension. A dimension is a name/value pair that uniquely identifies a metric. For example, you can request statistics for all the healthy EC2 instances behind a load balancer launched in a specific Availability Zone.
The
Minimum and
Maximum statistics reflect the minimum and maximum reported by the individual load balancer nodes.
For example, suppose there are 2 load balancer nodes. One node has
HealthyHostCount with a
Minimum of 2,
a
Maximum of 10, and an
Average of 6, while the other node has
HealthyHostCount with a
Minimum of 1, a
Maximum of 5, and an
Average of 3. Therefore, the load balancer has a
Minimum of 1, a
Maximum of 10, and an
Average of about 4.
The
Sum statistic is the aggregate value across all load balancer nodes.
Because metrics include multiple reports per period,
Sum is only applicable to metrics that are aggregated
across all load balancer nodes, such as
RequestCount,
HTTPCode_ELB_XXX,
HTTPCode_Backend_XXX,
BackendConnectionErrors, and
SpilloverCount.
The
SampleCount statistic is the number of samples measured. Because metrics are gathered based on sampling intervals and
events, this statistic is typically not useful. For example, with
HealthyHostCount,
SampleCount is based
on the number of samples that each load balancer node reports, not the number of healthy hosts.
A percentile indicates the relative standing of a value in a data set. You can specify any percentile, using up to two decimal places (for example, p95.45). For example, the 95th percentile means that 95 percent of the data is below this value and 5 percent is above. Percentiles are often used to isolate anomalies. For example, suppose that an application serves the majority of requests from a cache in 1-2 ms, but in 100-200 ms if the cache is empty. The maximum reflects the slowest case, around 200 ms. The average doesn't indicate the distribution of the data. Percentiles provide a more meaningful view of the application's performance. By using the 99th percentile as an Auto Scaling trigger or a CloudWatch alarm, you can target that no more than 1 percent of requests take longer than 2 ms to process.
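As an illustration of the nearest-rank reading of a percentile described above, here is a generic sketch (not an AWS API call; the function name and sample values are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Nearest-rank percentile: percentile(samples, 99) returns the value at
// or below which (at least) 99 percent of the samples fall, mirroring
// how a CloudWatch p99 latency statistic is read.
double percentile(std::vector<double> samples, double p)
{
    std::sort(samples.begin(), samples.end());
    // 1-based rank of the requested percentile, rounded up.
    std::size_t rank = static_cast<std::size_t>(
        std::ceil(p / 100.0 * samples.size()));
    if (rank == 0)
        rank = 1;
    return samples[rank - 1];
}
```

For 100 latency samples of 1 ms through 100 ms, percentile(samples, 95) yields 95 ms: 95 percent of the requests completed at or below that value, while the remaining 5 percent took longer.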
View CloudWatch metrics for your load balancer
You can view the CloudWatch metrics for your load balancers using the Amazon EC2 console. These metrics are displayed as monitoring graphs. The monitoring graphs show data points if the load balancer is active and receiving requests.
Alternatively, you can view metrics for your load balancer using the CloudWatch console.
To view metrics using the Amazon EC2 console
Open the Amazon EC2 console at
.
On the navigation pane, under LOAD BALANCING, choose Load Balancers.
Select your load balancer.
Choose the Monitoring tab.
(Optional) To filter the results by time, select a time range from Showing data for.
To get a larger view of a single metric, select its graph. The following metrics are available:
Healthy Hosts — HealthyHostCount

Unhealthy Hosts — UnHealthyHostCount

Average Latency — Latency

Sum Requests — RequestCount

Backend Connection Errors — BackendConnectionErrors

Surge Queue Length — SurgeQueueLength

Spillover Count — SpilloverCount

Sum HTTP 2XXs — HTTPCode_Backend_2XX

Sum HTTP 4XXs — HTTPCode_Backend_4XX

Sum HTTP 5XXs — HTTPCode_Backend_5XX

Sum ELB HTTP 4XXs — HTTPCode_ELB_4XX

Sum ELB HTTP 5XXs — HTTPCode_ELB_5XX
To view metrics using the CloudWatch console
Open the CloudWatch console at
.
In the navigation pane, choose Metrics.
Select the ELB namespace.
Do one of the following:
Select a metric dimension to view metrics by load balancer, by Availability Zone, or across all load balancers.
To view a metric across all dimensions, type its name in the search field.
To view the metrics for a single load balancer, type its name in the search field.
To view the metrics for a single Availability Zone, type its name in the search field.
Here we will see how to check whether a number is divisible by 20 or not. In this case the number is very large, so we store it as a string.
A number is divisible by 20 when it is divisible by 10 and, after dividing by 10, the quotient is divisible by 2. So the check is simple: if the last digit is 0, the number is divisible by 10; if, in addition, the second-to-last digit is even, the number is divisible by 20.
#include <bits/stdc++.h>
using namespace std;

bool isDiv20(string num) {
   int n = num.length();
   // Divisible by 10: the last digit must be 0.
   if (num[n - 1] != '0')
      return false;
   // "0" itself is divisible by 20.
   if (n == 1)
      return true;
   // Divisible by 20: the digit before the final 0 must be even.
   int second_last = num[n - 2] - '0';
   return second_last % 2 == 0;
}

int main() {
   string num = "54871584540";
   if (isDiv20(num)) {
      cout << "Divisible";
   } else {
      cout << "Not Divisible";
   }
}
Divisible
July 1, 1998
This session focused on the 'ease of use' release theme. The presenter stated that people are buying 2 to 300 copies of SQL Server at a time.
Microsoft's goal is to move out of the workgroup
niche and have SQL Server on the desktop and in large organizations.
For the desktop, they want to have SQL Server
self-configuring, with no DBA requirement. For workgroups, they want it to be
self-managing. For large organizations, they want the lowest total cost of
ownership, to support multi-server operations and to provide interoperability.
There will be 20 Wizards in Beta3, 5-10 more by ship date.
For the DBA, they are providing a database designer. This
is basically the Visual Database Tools diagrammer. It is not a full-blown data
modeling tool like ERwin or System Architect. There was a demo of the database
diagrammer. It will be possible to print the database diagrams by release; right
now it isn't. SQL Trace has been renamed SQL Profiler and is much
enhanced. The Index Tuning Wizard is almost frightening, but since it
doesn't handle concurrency issues I don't think I'll be out of
work yet! A functioning (unlike previous efforts) graphical showplan is
available and it is really good. Shows the costing info and decision process
clause by clause.
For data warehousing, they are providing the Data
Transformation Services, the Microsoft Repository, which will be part of SQL
Server, and Microsoft English Query. (More on both of these later.)
Distributed Management Framework and DMO
For ISV's and MIS Applications, they are going to
make it easy to embed and deploy SQL Server. SQL-DMO provides a COM
administrative interface. There is something called the SQL-namespace which I
don't completely understand. It is composed of administration objects and
the administration UI. You are supposed to be able to put specific dialogues or
wizards into any tool that supports SQL Server. I don't know how to do
this yet, but it would clearly solve some of the problems of presenting some but
not all functions to a power user who isn't a full-fledged administrator.
The new Distributed Management Framework looks like
this:
You will be able to use VB to launch a wizard, as well.
The events layer is used by the SQL Profiler, which allows it to see
"inside", stored procedures (the current SQL Trace only shows you the
exec statement for the procedure). Things like table scans and locks will also
be events. I assume this means we might be able to set up alerts for a table
scan of a big table but I don't know for sure.
This tutorial program attempts to show how to use \(hp\) finite element methods with deal.II. It solves the Laplace equation and so builds only on the first few tutorial programs, in particular on step-4 for dimension independent programming and step-6 for adaptive mesh refinement.
The \(hp\) finite element method was proposed in the early 1980s by Babuska and Guo as an alternative to either (i) mesh refinement (i.e. decreasing the mesh parameter \(h\) in a finite element computation) or (ii) increasing the polynomial degree \(p\) used for shape functions. It is based on the observation that increasing the polynomial degree of the shape functions reduces the approximation error if the solution is sufficiently smooth. On the other hand, it is well known that even for the generally well-behaved class of elliptic problems, higher degrees of regularity can not be guaranteed in the vicinity of boundaries, corners, or where coefficients are discontinuous; consequently, the approximation can not be improved in these areas by increasing the polynomial degree \(p\) but only by refining the mesh, i.e. by reducing the mesh size \(h\). These differing means to reduce the error have led to the notion of \(hp\) finite elements, where the approximating finite element spaces are adapted to have a high polynomial degree \(p\) wherever the solution is sufficiently smooth, while the mesh width \(h\) is reduced at places wherever the solution lacks regularity. It was already realized in the first papers on this method that \(hp\) finite elements can be a powerful tool that can guarantee that the error is reduced not only with some negative power of the number of degrees of freedom, but in fact exponentially.
In order to implement this method, we need several things above and beyond what a usual finite element program needs, and in particular above what we have introduced in the tutorial programs leading up to step-6. In particular, we will have to discuss the following aspects:
Instead of using the same finite element on all cells, we now will want a collection of finite element objects, and associate each cell with one of these objects in this collection.
Degrees of freedom will then have to be allocated on each cell depending on what finite element is associated with this particular cell. Constraints will have to be generated in the same way as for hanging nodes, but now also covering the case where two neighboring cells use finite elements of different polynomial degree.
We will need to be able to assemble cell and face contributions to global matrices and right hand side vectors.
We will discuss all these aspects in the following subsections of this introduction. It will not come as a big surprise that most of these tasks are already well supported by functionality provided by the deal.II libraries, and that we will only have to provide the logic of what the program should do, not exactly how all this is going to happen.
In deal.II, the \(hp\) functionality is largely packaged into the hp namespace. This namespace provides classes that handle \(hp\) discretizations, assemble matrices and vectors, and carry out other tasks. We will get to know many of them further down below. In addition, many of the functions in the DoFTools and VectorTools namespaces accept \(hp\) objects in addition to the non- \(hp\) ones. Much of the \(hp\) implementation is also discussed in the hp finite element support documentation module and the links found there.
It may be worth giving a slightly larger perspective at the end of this first part of the introduction. \(hp\) functionality has been implemented in a number of different finite element packages (see, for example, the list of references cited in the hp paper). However, by and large, most of these packages have implemented it only for the (i) the 2d case, and/or (ii) the discontinuous Galerkin method. The latter is a significant simplification because discontinuous finite elements by definition do not require continuity across faces between cells and therefore do not require the special treatment otherwise necessary whenever finite elements of different polynomial degree meet at a common face. In contrast, deal.II implements the most general case, i.e. it allows for continuous and discontinuous elements in 1d, 2d, and 3d, and automatically handles the resulting complexity. In particular, it handles computing the constraints (similar to hanging node constraints) of elements of different degree meeting at a face or edge. The many algorithmic and data structure techniques necessary for this are described in the hp paper for those interested in such detail.
We hope that providing such a general implementation will help explore the potential of \(hp\) methods further.
Now on again to the details of how to use the \(hp\) functionality in deal.II. The first aspect we have to deal with is that now we do not have only a single finite element any more that is used on all cells, but a number of different elements that cells can choose to use. For this, deal.II introduces the concept of a finite element collection, implemented in the class hp::FECollection. In essence, such a collection acts like an object of type
std::vector<FiniteElement>, but with a few more bells and whistles and a memory management better suited to the task at hand. As we will later see, we will also use similar quadrature collections, and — although we don't use them here — there is also the concept of mapping collections. All of these classes are described in the hp Collections overview.
In this tutorial program, we will use continuous Lagrange elements of orders 2 through 7 (in 2d) or 2 through 5 (in 3d). The collection of used elements can then be created as follows:
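A sketch of this, assuming the degrees just mentioned are collected up to some maximum degree held in an assumed variable max_degree (7 in 2d, 5 in 3d):

```
hp::FECollection<dim> fe_collection;
for (unsigned int degree = 2; degree <= max_degree; ++degree)
  fe_collection.push_back(FE_Q<dim>(degree));
```

Each call to push_back adds one continuous FE_Q element of the given degree to the collection.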
The next task we have to consider is what to do with the list of finite element objects we want to use. In previous tutorial programs, starting with step-2, we have seen that the DoFHandler class is responsible for making the connection between a mesh (described by a Triangulation object) and a finite element, by allocating the correct number of degrees of freedom for each vertex, face, edge, and cell of the mesh.
The situation here is a bit more complicated since we do not just have a single finite element object, but rather may want to use different elements on different cells. We therefore need two things: (i) a version of the DoFHandler class that can deal with this situation, and (ii) a way to tell the DoF handler which element to use on which cell.
The first of these two things is implemented in the hp::DoFHandler class: rather than associating it with a triangulation and a single finite element object, it is associated with a triangulation and a finite element collection. The second part is achieved by a loop over all cells of this hp::DoFHandler and for each cell setting the index of the finite element within the collection that shall be used on this cell. We call the index of the finite element object within the collection that shall be used on a cell the cell's active FE index to indicate that this is the finite element that is active on this cell, whereas all the other elements of the collection are inactive on it. The general outline of this reads like this:
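In code, the outline just described might look like this (a sketch, not the complete program):

```
hp::DoFHandler<dim> dof_handler(triangulation);

for (typename hp::DoFHandler<dim>::active_cell_iterator
       cell = dof_handler.begin_active();
     cell != dof_handler.end(); ++cell)
  cell->set_active_fe_index(...);

dof_handler.distribute_dofs(fe_collection);
```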
Dots in the call to
set_active_fe_index() indicate that we will have to have some sort of strategy later on to decide which element to use on which cell; we will come back to this later. The main point here is that the first and last line of this code snippet is pretty much exactly the same as for the non- \(hp\) case.
Another complication arises from the fact that this time we do not simply have hanging nodes from local mesh refinement, but we also have to deal with the case that if there are two cells with different active finite element indices meeting at a face (for example a Q2 and a Q3 element) then we have to compute additional constraints on the finite element field to ensure that it is continuous. This is conceptually very similar to how we compute hanging node constraints, and in fact the code looks exactly the same:
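A sketch of this step, using the same constraints object as in the non-hp programs:

```
ConstraintMatrix constraints;
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
```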
In other words, the DoFTools::make_hanging_node_constraints deals not only with hanging node constraints, but also with \(hp\) constraints at the same time.
Following this, we have to set up matrices and vectors for the linear system of the correct size and assemble them. Setting them up works in exactly the same way as for the non- \(hp\) case. Assembling requires a bit more thought.
The main idea is of course unchanged: we have to loop over all cells, assemble local contributions, and then copy them into the global objects. As discussed in some detail first in step-3, deal.II has the FEValues class that pulls finite element description, mapping, and quadrature formula together and aids in evaluating values and gradients of shape functions as well as other information on each of the quadrature points mapped to the real location of a cell. Every time we move on to a new cell we re-initialize this FEValues object, thereby asking it to re-compute that part of the information that changes from cell to cell. It can then be used to sum up local contributions to bilinear form and right hand side.
In the context of \(hp\) finite element methods, we have to deal with the fact that we do not use the same finite element object on each cell. In fact, we should not even use the same quadrature object for all cells, but rather higher order quadrature formulas for cells where we use higher order finite elements. Similarly, we may want to use higher order mappings on such cells as well.
To facilitate these considerations, deal.II has a class hp::FEValues that does what we need in the current context. The difference is that instead of a single finite element, quadrature formula, and mapping, it takes collections of these objects. Its use is very much like the regular FEValues class, i.e. the interesting part of the loop over all cells would look like this:
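A sketch of that loop (quadrature_collection is an assumed hp::QCollection<dim> built alongside the finite element collection):

```
hp::FEValues<dim> hp_fe_values(fe_collection,
                               quadrature_collection,
                               update_values | update_gradients |
                               update_JxW_values);

for (typename hp::DoFHandler<dim>::active_cell_iterator
       cell = dof_handler.begin_active();
     cell != dof_handler.end(); ++cell)
  {
    hp_fe_values.reinit(cell);
    const FEValues<dim> &fe_values = hp_fe_values.get_present_fe_values();

    ...  // assemble local matrix and right hand side using fe_values
  }
```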
In this tutorial program, we will always use a Q1 mapping, so the mapping collection argument to the hp::FEValues construction will be omitted. Inside the loop, we first initialize the hp::FEValues object for the current cell. The second, third and fourth arguments denote the index within their respective collections of the quadrature, mapping, and finite element objects we wish to use on this cell. These arguments can be omitted (and are in the program below), in which case
cell->active_fe_index() is used for this index. The order of these arguments is chosen in this way because one may sometimes want to pick a different quadrature or mapping object from their respective collections, but hardly ever a different finite element than the one in use on this cell, i.e. one with an index different from
cell->active_fe_index(). The finite element collection index is therefore the last default argument so that it can be conveniently omitted.
What this
reinit call does is the following: the hp::FEValues class checks whether it has previously already allocated a non- \(hp\) FEValues object for this combination of finite element, quadrature, and mapping objects. If not, it allocates one. It then re-initializes this object for the current cell, after which there is now a FEValues object for the selected finite element, quadrature and mapping usable on the current cell. A reference to this object is then obtained using the call
hp_fe_values.get_present_fe_values(), and will be used in the usual fashion to assemble local contributions.
One of the central pieces of the adaptive finite element method is that we inspect the computed solution (a posteriori) with an indicator that tells us which are the cells where the error is largest, and then refine them. In many of the other tutorial programs, we use the KellyErrorEstimator class to get an indication of the size of the error on a cell, although we also discuss more complicated strategies in some programs, most importantly in step-14.
In any case, as long as the decision is only "refine this cell" or "do not refine this cell", the actual refinement step is not particularly challenging. However, here we have a code that is capable of hp refinement, i.e. we suddenly have two choices whenever we detect that the error on a certain cell is too large for our liking: we can refine the cell by splitting it into several smaller ones, or we can increase the polynomial degree of the shape functions used on it. How do we know which is the more promising strategy? Answering this question is the central problem in \(hp\) finite element research at the time of this writing.
In short, the question does not appear to be settled in the literature at this time. There are a number of more or less complicated schemes that address it, but there is nothing like the KellyErrorEstimator that is universally accepted as a good, even if not optimal, indicator of the error. Most proposals use the fact that it is beneficial to increase the polynomial degree whenever the solution is locally smooth whereas it is better to refine the mesh wherever it is rough. However, the questions of how to determine the local smoothness of the solution as well as the decision when a solution is smooth enough to allow for an increase in \(p\) are certainly big and important ones.
In the following, we propose a simple estimator of the local smoothness of a solution. As we will see in the results section, this estimator has flaws, in particular as far as cells with local hanging nodes are concerned. We therefore do not intend to present the following ideas as a complete solution to the problem. Rather, it is intended as an idea to approach it that merits further research and investigation. In other words, we do not intend to enter a sophisticated proposal into the fray about answers to the general question. However, to demonstrate our approach to \(hp\) finite elements, we need a simple indicator that does generate some useful information that is able to drive the simple calculations this tutorial program will perform.
Our approach here is simple: for a function \(u({\bf x})\) to be in the Sobolev space \(H^s(K)\) on a cell \(K\), it has to satisfy the condition
\[ \int_K |\nabla^s u({\bf x})|^2 \; d{\bf x} < \infty. \]
Assuming that the cell \(K\) is not degenerate, i.e. that the mapping from the unit cell to cell \(K\) is sufficiently regular, the above condition is of course equivalent to
\[ \int_{\hat K} |\nabla^s \hat u(\hat{\bf x})|^2 \; d\hat{\bf x} < \infty \]
where \(\hat u(\hat{\bf x})\) is the function \(u({\bf x})\) mapped back onto the unit cell \(\hat K\). From here, we can do the following: first, let us define the Fourier series of \(\hat u\) as
\[ \hat U_{\bf k} = \frac 1{(2\pi)^{d/2}} \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat u(\hat{\bf x}) d\hat{\bf x} \]
with Fourier vectors \({\bf k}=(k_x,k_y)\) in 2d, \({\bf k}=(k_x,k_y,k_z)\) in 3d, etc, and \(k_x,k_y,k_z=0,\pi,2\pi,3\pi,\ldots\). If we re-compose \(\hat u\) from \(\hat U\) using the formula
\[ \hat u(\hat{\bf x}) = \frac 1{(2\pi)^{d/2}} \sum_{\bf k} e^{-i {\bf k}\cdot \hat{\bf x}} \hat U_{\bf k}, \]
then it becomes clear that we can write the \(H^s\) norm of \(\hat u\) as
\[ \int_{\hat K} |\nabla^s \hat u(\hat{\bf x})|^2 \; d\hat{\bf x} = \frac 1{(2\pi)^d} \int_{\hat K} \left| \sum_{\bf k} |{\bf k}|^s e^{-i{\bf k}\cdot \hat{\bf x}} \hat U_{\bf k} \right|^2 \; d\hat{\bf x} = \sum_{\bf k} |{\bf k}|^{2s} |\hat U_{\bf k}|^2. \]
In other words, if this norm is to be finite (i.e. for \(\hat u(\hat{\bf x})\) to be in \(H^s(\hat K)\)), we need that
\[ |\hat U_{\bf k}| = {\cal O}\left(|{\bf k}|^{-\left(s+1/2+\frac{d-1}{2}+\epsilon\right)}\right). \]
Put differently: the higher regularity \(s\) we want, the faster the Fourier coefficients have to go to zero. (If you wonder where the additional exponent \(\frac{d-1}2\) comes from: we would like to make use of the fact that \(\sum_l a_l < \infty\) if the sequence \(a_l = {\cal O}(l^{-1-\epsilon})\) for any \(\epsilon>0\). The problem is that we here have a summation not only over a single variable, but over all the integer multiples of \(\pi\) that are located inside the \(d\)-dimensional sphere, because we have vector components \(k_x, k_y, \ldots\). In the same way as we prove that the sequence \(a_l\) above converges by replacing the sum by an integral over the entire line, we can replace our \(d\)-dimensional sum by an integral over \(d\)-dimensional space. Now we have to note that between distance \(|{\bf k}|\) and \(|{\bf k}|+d|{\bf k}|\), there are, up to a constant, \(|{\bf k}|^{d-1}\) modes, in much the same way as we can transform the volume element \(dx\;dy\) into \(2\pi r\; dr\). Consequently, it is no longer \(|{\bf k}|^{2s}|\hat U_{\bf k}|^2\) that has to decay as \({\cal O}(|{\bf k}|^{-1-\epsilon})\), but it is in fact \(|{\bf k}|^{2s}|\hat U_{\bf k}|^2 |{\bf k}|^{d-1}\). A comparison of exponents yields the result.)
We can turn this around: Assume we are given a function \(\hat u\) of unknown smoothness. Let us compute its Fourier coefficients \(\hat U_{\bf k}\) and see how fast they decay. If they decay as
\[ |\hat U_{\bf k}| = {\cal O}(|{\bf k}|^{-\mu-\epsilon}), \]
then the function we had here was consequently in \(H^{\mu-d/2}\).
So what do we have to do to estimate the local smoothness of \(u({\bf x})\) on a cell \(K\)? Clearly, the first step is to compute the Fourier series of our solution. Fourier series being infinite series, we simplify our task by only computing the first few terms of the series, such that \(|{\bf k}|\le N\) with a cut-off \(N\). (Let us parenthetically remark that we want to choose \(N\) large enough so that we capture at least the variation of those shape functions that vary the most. On the other hand, we should not choose \(N\) too large: clearly, a finite element function, being a polynomial, is in \(C^\infty\) on any given cell, so the coefficients will have to decay exponentially at one point; since we want to estimate the smoothness of the function this polynomial approximates, not of the polynomial itself, we need to choose a reasonable cutoff for \(N\).) Either way, computing this series is not particularly hard: from the definition
\[ \hat U_{\bf k} = \frac 1{(2\pi)^{d/2}} \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat u(\hat{\bf x}) d\hat{\bf x} \]
we see that we can compute the coefficient \(\hat U_{\bf k}\) as
\[ \hat U_{\bf k} = \frac 1{(2\pi)^{d/2}} \sum_{i=0}^{\textrm{\tiny dofs per cell}} \left[\int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_i(\hat{\bf x}) d\hat{\bf x} \right] u_i, \]
where \(u_i\) is the value of the \(i\)th degree of freedom on this cell. In other words, we can write it as a matrix-vector product
\[ \hat U_{\bf k} = {\cal F}_{{\bf k},j} u_j, \]
with the matrix
\[ {\cal F}_{{\bf k},j} = \frac 1{(2\pi)^{d/2}} \int_{\hat K} e^{i {\bf k}\cdot \hat{\bf x}} \hat \varphi_j(\hat{\bf x}) d\hat{\bf x}. \]
This matrix is easily computed for a given number of shape functions \(\varphi_j\) and Fourier modes \(N\). Consequently, finding the coefficients \(\hat U_{\bf k}\) is a rather trivial job.
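To make the assembly of \({\cal F}\) concrete, here is a minimal standalone sketch in plain C++ (not deal.II code): it uses the two linear 1d shape functions \(\varphi_0=1-\hat x\), \(\varphi_1=\hat x\) as a stand-in basis and a composite midpoint rule instead of the iterated Gauss formula the program uses; the function name and all parameters are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

const double PI = std::acos(-1.0);

// Assemble F_{k,j} = 1/sqrt(2*pi) * \int_0^1 exp(i*k*x) phi_j(x) dx in 1d,
// using the two linear shape functions phi_0 = 1-x, phi_1 = x as a
// stand-in for a real finite element basis; the integral is approximated
// by a composite midpoint rule with n_q subintervals.
std::vector<std::vector<std::complex<double>>>
fourier_matrix(const std::vector<double> &k_values,
               const unsigned n_q = 2000)
{
  const std::complex<double> I(0, 1);
  const double norm = 1.0 / std::sqrt(2.0 * PI);
  std::vector<std::vector<std::complex<double>>> F(
    k_values.size(), std::vector<std::complex<double>>(2));
  for (std::size_t row = 0; row < k_values.size(); ++row)
    for (unsigned j = 0; j < 2; ++j)
      {
        std::complex<double> sum = 0;
        for (unsigned q = 0; q < n_q; ++q)
          {
            const double x   = (q + 0.5) / n_q;  // midpoint of subinterval q
            const double phi = (j == 0 ? 1.0 - x : x);
            sum += std::exp(I * k_values[row] * x) * (phi / n_q);
          }
        F[row][j] = norm * sum;
      }
  return F;
}
```

The Fourier coefficients of a finite element function with nodal values \(u_j\) then follow as the matrix-vector product \(\hat U_{\bf k} = \sum_j {\cal F}_{{\bf k},j} u_j\).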
The next task is that we have to estimate how fast these coefficients decay with \(|{\bf k}|\). The problem is that, of course, we have only finitely many of these coefficients in the first place. In other words, the best we can do is to fit a function \(\alpha |{\bf k}|^{-\mu}\) to our data points \(\hat U_{\bf k}\), for example by determining \(\alpha,\mu\) via a least-squares procedure:
\[ \min_{\alpha,\mu} \frac 12 \sum_{{\bf k}, |{\bf k}|\le N} \left( |\hat U_{\bf k}| - \alpha |{\bf k}|^{-\mu}\right)^2 \]
However, the problem with this is that it leads to a nonlinear problem, a fact that we would like to avoid. On the other hand, we can transform the problem into a simpler one if we try to fit the logarithm of our coefficients to the logarithm of \(\alpha |{\bf k}|^{-\mu}\), like this:
\[ \min_{\alpha,\mu} Q(\alpha,\mu) = \frac 12 \sum_{{\bf k}, |{\bf k}|\le N} \left( \ln |\hat U_{\bf k}| - \ln (\alpha |{\bf k}|^{-\mu})\right)^2. \]
Using the usual facts about logarithms, we see that this yields the problem
\[ \min_{\beta,\mu} Q(\beta,\mu) = \frac 12 \sum_{{\bf k}, |{\bf k}|\le N} \left( \ln |\hat U_{\bf k}| - \beta + \mu \ln |{\bf k}|\right)^2, \]
where \(\beta=\ln \alpha\). This is now a problem for which the optimality conditions \(\frac{\partial Q}{\partial\beta}=0, \frac{\partial Q}{\partial\mu}=0\), are linear in \(\beta,\mu\). We can write these conditions as follows:
\[ \left(\begin{array}{cc} \sum_{{\bf k}, |{\bf k}|\le N} 1 & \sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}| \\ \sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}| & \sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2 \end{array}\right) \left(\begin{array}{c} \beta \\ -\mu \end{array}\right) = \left(\begin{array}{c} \sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \\ \sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}| \end{array}\right) \]
This linear system is readily inverted to yield
\[ \beta = \frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2} \left[ \left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}| \right) \right] \]
and
\[ \mu = \frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2} \left[ \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}| \right) \right]. \]
While we are not particularly interested in the actual value of \(\beta\), the formula above gives us a means to calculate the value of the exponent \(\mu\) that we can then use to determine that \(\hat u(\hat{\bf x})\) is in \(H^s(\hat K)\) with \(s=\mu-\frac d2\).
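A standalone sketch of this computation in plain C++ (the function name is a hypothetical choice, not deal.II's API) accumulates the five sums and evaluates the closed-form solution of the normal equations:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Closed-form least-squares fit of ln|U_k| ~ beta - mu * ln|k| via the
// inverted 2x2 normal equations. k_norms and u_abs hold |k| and |U_k|
// for all retained Fourier modes.
void fit_decay(const std::vector<double> &k_norms,
               const std::vector<double> &u_abs,
               double &beta, double &mu)
{
  double s1 = 0, sk = 0, skk = 0, su = 0, suk = 0;
  for (std::size_t i = 0; i < k_norms.size(); ++i)
    {
      const double lk = std::log(k_norms[i]);
      const double lu = std::log(u_abs[i]);
      s1  += 1;
      sk  += lk;
      skk += lk * lk;
      su  += lu;
      suk += lu * lk;
    }
  const double det = s1 * skk - sk * sk;   // determinant of the 2x2 system
  beta = (skk * su - sk * suk) / det;
  mu   = (sk * su - s1 * suk) / det;
}
```

For coefficients that follow an exact power law \(|\hat U_{\bf k}| = \alpha |{\bf k}|^{-\mu}\), the fit recovers \(\mu\) and \(\beta=\ln\alpha\) exactly.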
In the formulas above, we have derived the Fourier coefficients \(\hat U_{{\bf k}}\). Because \({\bf k}\) is a vector, we will get a number of Fourier coefficients \(\hat U_{{\bf k}}\) for the same absolute value \(|{\bf k}|\), corresponding to the Fourier transform in different directions. If we now consider a function like \(|x|y^2\) then we will find lots of large Fourier coefficients in \(x\)-direction because the function is non-smooth in this direction, but fast-decaying Fourier coefficients in \(y\)-direction because the function is smooth there. The question that arises is this: if we simply fit our polynomial decay \(\alpha |{\bf k}|^{-\mu}\) to all Fourier coefficients, we will fit it to a smoothness averaged in all spatial directions. Is this what we want? Or would it be better to only consider the largest coefficient \(\hat U_{{\bf k}}\) for all \({\bf k}\) with the same magnitude, essentially trying to determine the smoothness of the solution in that spatial direction in which the solution appears to be roughest?
One can probably argue for either case. The issue would be of more interest if deal.II had the ability to use anisotropic finite elements, i.e. ones that use different polynomial degrees in different spatial directions, as they would be able to exploit the directionally variable smoothness much better. Alas, this capability does not exist at the time of writing this tutorial program.
Either way, because we only have isotropic finite element classes, we adopt the viewpoint that we should tailor the polynomial degree to the lowest amount of regularity, in order to keep numerical efforts low. Consequently, instead of using the formula
\[ \mu = \frac 1{\left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} (\ln |{\bf k}|)^2\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right)^2} \left[ \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |{\bf k}|\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}|\right) - \left(\sum_{{\bf k}, |{\bf k}|\le N} 1\right) \left(\sum_{{\bf k}, |{\bf k}|\le N} \ln |\hat U_{{\bf k}}| \ln |{\bf k}| \right) \right]. \]
to calculate \(\mu\) as shown above, we have to slightly modify all sums: instead of summing over all Fourier modes, we only sum over those for which the Fourier coefficient is the largest one among all \(\hat U_{{\bf k}}\) with the same magnitude \(|{\bf k}|\), i.e. all sums above have to replaced by the following sums:
\[ \sum_{{\bf k}, |{\bf k}|\le N} \longrightarrow \sum_{\begin{matrix}{{\bf k}, |{\bf k}|\le N} \\ {|\hat U_{{\bf k}}| \ge |\hat U_{{\bf k}'}| \ \textrm{for all}\ {\bf k}'\ \textrm{with}\ |{\bf k}'|=|{\bf k}|}\end{matrix}} \]
This is the form we will implement in the program.
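The reduction to the largest coefficient per magnitude can be sketched as follows (standalone plain C++ with a hypothetical function name; as in the program further down, magnitudes are identified by the integer \(|{\bf k}|^2\) to avoid round-off comparisons):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Keep, for each magnitude |k| (identified by the integer |k|^2), only
// the largest |U_k| among all modes of that magnitude. Returns a map
// from |k|^2 to the maximal coefficient magnitude found.
std::map<unsigned, double>
largest_per_magnitude(const std::vector<unsigned> &k_sq,
                      const std::vector<double>   &u_abs)
{
  std::map<unsigned, double> largest;
  for (std::size_t i = 0; i < k_sq.size(); ++i)
    {
      const auto it = largest.find(k_sq[i]);
      if (it == largest.end() || it->second < u_abs[i])
        largest[k_sq[i]] = u_abs[i];   // insert or overwrite with maximum
    }
  return largest;
}
```

The sums in the least-squares fit then run only over the entries of this map, one per distinct magnitude.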
One may ask whether it is a problem that we only compute the Fourier transform on the reference cell (rather than the real cell) of the solution. After all, we stretch the solution by a factor \(\frac 1h\) during the transformation, thereby shifting the Fourier frequencies by a factor of \(h\). This is of particular concern since we may have neighboring cells with mesh sizes \(h\) that differ by a factor of 2 if one of them is more refined than the other. The concern is also motivated by the fact that, as we will see in the results section below, the estimated smoothness of the solution should be a more or less continuous function, but exhibits jumps at locations where the mesh size jumps. It therefore seems natural to ask whether we have to compensate for the transformation.
The short answer is "no". In the process outlined above, we attempt to find coefficients \(\beta,\mu\) that minimize the sum of squares of the terms
\[ \ln |\hat U_{{\bf k}}| - \beta + \mu \ln |{\bf k}|. \]
To compensate for the transformation means not attempting to fit a decay \(|{\bf k}|^\mu\) with respect to the Fourier frequencies \({\bf k}\) on the unit cell, but to fit the coefficients \(\hat U_{{\bf k}}\) computed on the reference cell to the Fourier frequencies on the real cell \(|{\bf k}|h\), where \(h\) is the norm of the transformation operator (i.e. something like the diameter of the cell). In other words, we would have to minimize the sum of squares of the terms
\[ \ln |\hat U_{{\bf k}}| - \beta + \mu \ln (|{\bf k}|h). \]
instead. However, using fundamental properties of the logarithm, this is simply equivalent to minimizing
\[ \ln |\hat U_{{\bf k}}| - (\beta - \mu \ln h) + \mu \ln (|{\bf k}|). \]
In other words, this and the original least squares problem will produce the same best-fit exponent \(\mu\), though the offset will in one case be \(\beta\) and in the other \(\beta-\mu \ln h\). However, since we are not interested in the offset at all but only in the exponent, it doesn't matter whether we scale Fourier frequencies in order to account for mesh size effects or not, the estimated smoothness exponent will be the same in either case.
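This invariance is easy to confirm numerically. The following standalone sketch (a hypothetical helper, not deal.II code) fits the exponent twice, once against the original frequencies and once against frequencies scaled by a factor \(h\), and finds the same \(\mu\):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Fit ln|U| ~ beta - mu*ln(k) by the normal equations and return mu.
// Used to check that rescaling all frequencies k -> h*k leaves the
// estimated decay exponent mu unchanged (only beta shifts by mu*ln h).
double fitted_mu(const std::vector<double> &k, const std::vector<double> &u)
{
  double s1 = 0, sk = 0, skk = 0, su = 0, suk = 0;
  for (std::size_t i = 0; i < k.size(); ++i)
    {
      const double lk = std::log(k[i]), lu = std::log(u[i]);
      s1 += 1; sk += lk; skk += lk * lk; su += lu; suk += lu * lk;
    }
  const double det = s1 * skk - sk * sk;
  return (sk * su - s1 * suk) / det;
}
```

Note that the data below is deliberately not an exact power law, so the agreement is a property of the fit itself, not of the data.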
One of the problems with \(hp\) methods is that the high polynomial degree of shape functions together with the large number of constrained degrees of freedom leads to matrices with large numbers of nonzero entries in some rows. At the same time, there are areas where we use low polynomial degrees and consequently matrix rows with relatively few nonzero entries. Consequently, allocating the sparsity pattern for these matrices is a challenge.
Most programs built on deal.II use the DoFTools::make_sparsity_pattern function to allocate the sparsity pattern of a matrix, and later add a few more entries necessary to handle constrained degrees of freedom using ConstraintMatrix::condense. The sparsity pattern is then compressed using SparsityPattern::compress. This method is explained in step-6 and used in most tutorial programs. In order to work, it needs an initial upper estimate for the maximal number of nonzero entries per row, something that can be had from the DoFHandler::max_couplings_between_dofs function. This is necessary due to the data structure used in the SparsityPattern class.
Unfortunately, DoFHandler::max_couplings_between_dofs is unable to produce an efficient upper estimate in 3d and for higher order elements. If used in these situations, it therefore leads the SparsityPattern class to allocate much too much memory, almost all of which will be released again when we call SparsityPattern::compress. This deficiency, caused by the fact that DoFHandler::max_couplings_between_dofs must produce a single number for the maximal number of elements per row even though most rows will be significantly shorter, can be so severe that the initial memory allocation for the SparsityPattern exceeds the actual need by a factor of 10 or larger, and can lead to a program running out of memory when in fact there would be plenty of memory for all computations.
A solution to the problem has already been discussed in step-11 and step-18. It used an intermediate object of type CompressedSparsityPattern. This class uses a different memory storage scheme that is optimized for creating a sparsity pattern when maximal numbers of entries per row are not accurately available, but is unsuitable for use as the sparsity pattern actually underlying a sparse matrix. After building the intermediate object, it is therefore copied into a true SparsityPattern object, something that can be done very efficiently and without having to over-allocate memory. Typical code doing this is shown in the documentation of the CompressedSparsityPattern class. This solution is slower than directly building a SparsityPattern object, but only uses as much memory as is really necessary.
As it now turns out, the storage format used in the CompressedSparsityPattern class is not very good for matrices with truly large numbers of entries per row — where truly large numbers mean in the hundreds. This isn't typically the case for lower order elements even in 3d, but happens for high order elements in 3d; for example, a vertex degree of freedom of a \(Q_5\) element in 3d may couple to as many as 1700 other degrees of freedom. In such a case CompressedSparsityPattern will still work, but tuning the memory storage format used internally in that class a bit makes it work several times faster. This is what we did with the CompressedSetSparsityPattern class — it has exactly the same interface as the CompressedSparsityPattern class but internally stores things somewhat differently. For most cases, there is not much of a difference in performance between the classes (though the old class has a slight advantage for lower order elements in 3d), but for high order and \(hp\) elements in 3d, the CompressedSetSparsityPattern has a definite edge. We will therefore use it later when we build the sparsity pattern in this tutorial program.
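The general idea of a two-stage sparsity pattern — collect entries in a flexible per-row container first, then copy into fixed compressed-row storage — can be illustrated with the following toy sketch (standalone plain C++ using a std::set per row; a drastically simplified stand-in, not deal.II's actual implementation):

```cpp
#include <cassert>
#include <set>
#include <vector>

// A minimal dynamic sparsity pattern: entries are collected per row
// without a fixed row length, so no up-front estimate of the maximal
// number of entries per row is needed.
struct DynamicPattern
{
  std::vector<std::set<unsigned>> rows;
  explicit DynamicPattern(unsigned n) : rows(n) {}
  void add(unsigned i, unsigned j) { rows[i].insert(j); }
};

// The usual compressed-row (CSR) arrays a sparse matrix would use.
struct CSRPattern
{
  std::vector<unsigned> row_start, col_index;
};

// Copy the dynamic pattern into CSR storage, allocating exactly as much
// memory as the pattern actually needs.
CSRPattern compress(const DynamicPattern &dp)
{
  CSRPattern csr;
  csr.row_start.push_back(0);
  for (const auto &row : dp.rows)
    {
      for (unsigned j : row)
        csr.col_index.push_back(j);   // std::set iterates in sorted order
      csr.row_start.push_back(csr.col_index.size());
    }
  return csr;
}
```

The set-based per-row storage makes duplicate insertions and long rows cheap, at the price of being unsuitable as the final matrix storage — hence the copy step.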
A second problem particular to \(hp\) methods arises because we have so many constrained degrees of freedom: typically up to about one third of all degrees of freedom (in 3d) are constrained because they either belong to cells with hanging nodes or because they are on cells adjacent to cells with a higher or lower polynomial degree. This is, in fact, not much more than the fraction of constrained degrees of freedom in non- \(hp\) mode, but the difference is that each constrained hanging node is constrained not only against the two adjacent degrees of freedom, but is constrained against many more degrees of freedom.
It turns out that the strategy presented first in step-6 to eliminate the constraints while computing the element matrices and vectors with ConstraintMatrix::distribute_local_to_global is the most efficient approach also for this case. The alternative strategy to first build the matrix without constraints and then "condensing" away constrained degrees of freedom is considerably more expensive. It turns out that building the sparsity pattern by this inefficient algorithm requires at least \({\cal O}(N \log N)\) in the number of unknowns, whereas an ideal finite element program would of course only have algorithms that are linear in the number of unknowns. Timing the sparsity pattern creation as well as the matrix assembly shows that the algorithm presented in step-6 (and used in the code below) is indeed faster.
In our program, we will also treat the boundary conditions as (possibly inhomogeneous) constraints and eliminate the matrix rows and columns corresponding to those as well. All we have to do for this is to call the function that interpolates the Dirichlet boundary conditions already in the setup phase in order to tell the ConstraintMatrix object about them, and then do the transfer from local to global data on matrix and vector simultaneously. This is exactly what we've shown in step-6.
The test case we will solve with this program is a re-take of the one we already looked at in step-14: we solve the Laplace equation
\[ -\Delta u = f \]
in 2d, with \(f=(x+1)(y+1)\), and with zero Dirichlet boundary values for \(u\). We do so on the domain \([-1,1]^2\backslash[-\frac 12,\frac 12]^2\), i.e. a square with a square hole in the middle.
The difference to step-14 is of course that we use \(hp\) finite elements for the solution. The testcase is of interest because it has re-entrant corners in the corners of the hole, at which the solution has singularities. We therefore expect that the solution will be smooth in the interior of the domain, and rough in the vicinity of the singularities. The hope is that our refinement and smoothness indicators will be able to see this behavior and refine the mesh close to the singularities, while the polynomial degree is increased away from it. As we will see in the results section, this is indeed the case.
The first few files have already been covered in previous examples and will thus not be further commented on.
These are the new files we need. The first one provides an alternative to the usual SparsityPattern class and the DynamicSparsityPattern class already discussed in step-11 and step-18. The last two provide hp versions of the DoFHandler and FEValues classes as described in the introduction of this program.
The last set of include files are standard C++ headers. We need support for complex numbers when we compute the Fourier transform.
Finally, this is as in previous programs:
The main class of this program looks very much like the one already used in the first few tutorial programs, for example the one in step-6. The main difference is that we have merged the refine_grid and output_results functions into one since we will also want to output some of the quantities used in deciding how to refine the mesh (in particular the estimated smoothness of the solution). There is also a function that computes this estimated smoothness, as discussed in the introduction.
As far as member variables are concerned, we use the same structure as already used in step-6, but instead of a regular DoFHandler we use an object of type hp::DoFHandler, and we need collections instead of individual finite element, quadrature, and face quadrature objects. We will fill these collections in the constructor of the class. The last variable,
max_degree, indicates the maximal polynomial degree of shape functions used.
Next, let us define the right hand side function for this problem. It is \(x+1\) in 1d, \((x+1)(y+1)\) in 2d, and so on.
The constructor of this class is fairly straightforward. It associates the hp::DoFHandler object with the triangulation, and then sets the maximal polynomial degree to 7 (in 1d and 2d) or 5 (in 3d and higher). We do so because using higher order polynomial degrees becomes prohibitively expensive, especially in higher space dimensions.
Following this, we fill the collections of finite element, and cell and face quadrature objects. We start with quadratic elements, and each quadrature formula is chosen so that it is appropriate for the matching finite element in the hp::FECollection object.
The destructor is unchanged from what we already did in step-6:
This function is again a verbatim copy of what we already did in step-6. Despite function calls with exactly the same names and arguments, the algorithms used internally are different in some aspects since the dof_handler variable here is an hp object.
This is the function that assembles the global matrix and right hand side vector from the local contributions of each cell. Its main working is as has been described in many of the tutorial programs before. The significant deviations are the ones necessary for hp finite element methods. In particular, that we need to use a collection of FEValues object (implemented through the hp::FEValues class), and that we have to eliminate constrained degrees of freedom already when copying local contributions into global objects. Both of these are explained in detail in the introduction of this program.
One other slight complication is the fact that because we use different polynomial degrees on different cells, the matrices and vectors holding local contributions do not have the same size on all cells. At the beginning of the loop over all cells, we therefore each time have to resize them to the correct size (given by
dofs_per_cell). Because these classes are implemented in such a way that reducing the size of a matrix or vector does not release the currently allocated memory (unless the new size is zero), the process of resizing at the beginning of the loop will only require re-allocation of memory during the first few iterations. Once we have hit a cell with the maximal finite element degree, no more re-allocations will happen because all subsequent
reinit calls will only set the size to something that fits the currently allocated memory. This is important since allocating memory is expensive, and doing so every time we visit a new cell would take significant compute time.
The function solving the linear system is entirely unchanged from previous examples. We simply try to reduce the initial residual (which equals the \(l_2\) norm of the right hand side) by a certain factor:
After solving the linear system, we will want to postprocess the solution. Here, all we do is to estimate the error, estimate the local smoothness of the solution as described in the introduction, then write graphical output, and finally refine the mesh in both \(h\) and \(p\) according to the indicators computed before. We do all this in the same function because we want the estimated error and smoothness indicators not only for refinement, but also include them in the graphical output.
Let us start with computing estimated error and smoothness indicators, which each are one number for each active cell of our triangulation. For the error indicator, we use the KellyErrorEstimator class as always. Estimating the smoothness is done in the respective function of this class; that function is discussed further down below:
Next we want to generate graphical output. In addition to the two estimated quantities derived above, we would also like to output the polynomial degree of the finite elements used on each of the elements on the mesh.
The way to do that requires that we loop over all cells and poll the active finite element index of them using
cell->active_fe_index(). We then use the result of this operation and query the finite element collection for the finite element with that index, and finally determine the polynomial degree of that element. The result we put into a vector with one element per cell. The DataOut class requires this to be a vector of
float or
double, even though our values are all integers, so that is what we use:
With now all data vectors available – solution, estimated errors and smoothness indicators, and finite element degrees –, we create a DataOut object for graphical output and attach all data. Note that the DataOut class has a second template argument (which defaults to DoFHandler<dim>, which is why we have never seen it in previous tutorial programs) that indicates the type of DoF handler to be used. Here, we have to use the hp::DoFHandler class:
The final step in generating output is to determine a file name, open the file, and write the data into it (here, we use VTK format):
After this, we would like to actually refine the mesh, in both \(h\) and \(p\). The way we are going to do this is as follows: first, we use the estimated error to flag those cells for refinement that have the largest error. This is what we have always done:
Next we would like to figure out which of the cells that have been flagged for refinement should actually have \(p\) increased instead of \(h\) decreased. The strategy we choose here is that we look at the smoothness indicators of those cells that are flagged for refinement, and increase \(p\) for those with a smoothness larger than a certain threshold. For this, we first have to determine the maximal and minimal values of the smoothness indicators of all flagged cells, which we do using a loop over all cells and comparing current minimal and maximal values. (We start with the minimal and maximal values of all cells, a range within which the minimal and maximal values on cells flagged for refinement must surely lie.) Absent any better strategies, we will then set the threshold above which will increase \(p\) instead of reducing \(h\) as the mean value between minimal and maximal smoothness indicators on cells flagged for refinement:
With this, we can go back, loop over all cells again, and for those cells for which (i) the refinement flag is set, (ii) the smoothness indicator is larger than the threshold, and (iii) we still have a finite element with a polynomial degree higher than the current one in the finite element collection, we then increase the polynomial degree and in return remove the flag indicating that the cell should undergo bisection. For all other cells, the refinement flags remain untouched:
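The decision logic just described can be condensed into a standalone sketch (plain C++ with a hypothetical Cell struct instead of deal.II's cell iterators):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy version of the h-vs-p decision: among cells flagged for
// refinement, those whose smoothness exceeds the midpoint between the
// minimal and maximal smoothness of all flagged cells get a higher
// polynomial degree instead of being split. The Cell struct is a
// hypothetical stand-in for deal.II's cell data.
struct Cell { bool flagged; double smoothness; int degree; };

void decide_hp(std::vector<Cell> &cells, const int max_degree)
{
  double min_s = 1e300, max_s = -1e300;
  for (const Cell &c : cells)
    if (c.flagged)
      {
        min_s = std::min(min_s, c.smoothness);
        max_s = std::max(max_s, c.smoothness);
      }
  const double threshold = (min_s + max_s) / 2;
  for (Cell &c : cells)
    if (c.flagged && c.smoothness > threshold && c.degree < max_degree)
      {
        ++c.degree;          // increase p ...
        c.flagged = false;   // ... and cancel the h-refinement flag
      }
}
```

Cells below the threshold keep their refinement flag and are later bisected as usual.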
At the end of this procedure, we then refine the mesh. During this process, children of cells undergoing bisection inherit their mother cell's finite element index:
The following function is used when creating the initial grid. It is a specialization for the 2d case, i.e. a corresponding function needs to be implemented if the program is run in anything other than 2d. The function is actually stolen from step-14 and generates the same mesh used already there, i.e. the square domain with the square hole in the middle. The meaning of the different parts of this function is explained in the documentation of step-14:
This function implements the logic of the program, as did the respective function in most of the previous programs already, see for example step-6.
Basically, it contains the adaptive loop: in the first iteration create a coarse grid, and then set up the linear system, assemble it, solve, and postprocess the solution including mesh refinement. Then start over again. In the meantime, also output some information for those staring at the screen trying to figure out what the program does:
This last function of significance implements the algorithm to estimate the smoothness exponent using the algorithms explained in detail in the introduction. We will therefore only comment on those points that are of implementational importance.
The first thing we need to do is to define the Fourier vectors \({\bf k}\) for which we want to compute Fourier coefficients of the solution on each cell. In 2d, we pick those vectors \({\bf k}=(\pi i, \pi j)^T\) for which \(\sqrt{i^2+j^2}\le N\), with \(i,j\) integers and \(N\) being the maximal polynomial degree we use for the finite elements in this program. The 3d case is handled analogously. 1d and dimensions higher than 3 are not implemented, and we guard our implementation by making sure that we receive an exception in case someone tries to compile the program for any of these dimensions.
We exclude \({\bf k}=0\) to avoid problems computing \(|{\bf k}|^{-\mu}\) and \(\ln |{\bf k}|\). The other vectors are stored in the field
k_vectors. In addition, we store the square of the magnitude of each of these vectors (up to a factor \(\pi^2\)) in the
k_vectors_magnitude array – we will need that when we attempt to find out which of those Fourier coefficients corresponding to Fourier vectors of the same magnitude is the largest:
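Setting up the Fourier vectors can be sketched in 2d as follows (standalone plain C++, restricting the indices to non-negative values and storing the integer \(|{\bf k}|^2/\pi^2\); struct and function names are illustrative assumptions):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Holds the integer index pairs (i, j) of the Fourier vectors
// k = pi*(i, j) and their squared magnitudes i^2 + j^2 (i.e. |k|^2
// up to a factor pi^2).
struct Modes
{
  std::vector<std::pair<int, int>> k_vectors;
  std::vector<unsigned>            k_sq;
};

// Generate all modes with sqrt(i^2 + j^2) <= N, excluding k = 0.
Modes make_k_vectors_2d(const int N)
{
  Modes m;
  for (int i = 0; i <= N; ++i)
    for (int j = 0; j <= N; ++j)
      if ((i * i + j * j <= N * N) && !(i == 0 && j == 0))
        {
          m.k_vectors.emplace_back(i, j);
          m.k_sq.push_back(i * i + j * j);
        }
  return m;
}
```

Storing the squared magnitude as an integer is what later allows an exact comparison of modes "of the same magnitude" without round-off trouble.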
After we have set up the Fourier vectors, we also store their total number for simplicity, and compute the logarithm of the magnitude of each of these vectors since we will need it many times over further down below:
Next, we need to assemble the matrices that do the Fourier transforms for each of the finite elements we deal with, i.e. the matrices \({\cal F}_{{\bf k},j}\) defined in the introduction. We have to do that for each of the finite elements in use. Note that these matrices are complex-valued, so we can't use the FullMatrix class. Instead, we use the Table class template.
In order to compute them, we of course can't perform the Fourier transform analytically, but have to approximate it using quadrature. To this end, we use a quadrature formula that is obtained by iterating a 2-point Gauss formula as many times as the maximal exponent we use for the term \(e^{i{\bf k}\cdot{\bf x}}\):
With this, we then loop over all finite elements in use, reinitialize the respective matrix \({\cal F}\) to the right size, and integrate each entry of the matrix numerically as \({\cal F}_{{\bf k},j}=\sum_q e^{i{\bf k}\cdot {\bf x}_q}\varphi_j({\bf x}_q) w_q\), where \({\bf x}_q\) are the quadrature points and \(w_q\) are the quadrature weights. Note that the imaginary unit \(i=\sqrt{-1}\) is obtained from the standard C++ classes using
std::complex<double>(0,1).
Because we work on the unit cell, we can do all this work without a mapping from reference to real cell and consequently do not need the FEValues class.
The next thing is to loop over all cells and do our work there, i.e. to locally do the Fourier transform and estimate the decay coefficient. We will use the following two arrays as scratch arrays in the loop and allocate them here to avoid repeated memory allocations:
Then here is the loop:
Inside the loop, we first need to get the values of the local degrees of freedom (which we put into the
local_dof_values array after setting it to the right size) and then need to compute the Fourier transform by multiplying this vector with the matrix \({\cal F}\) corresponding to this finite element. We need to write out the multiplication by hand because the objects holding the data do not have
vmult-like functions declared:
The next thing, as explained in the introduction, is that we wanted to only fit our exponential decay of Fourier coefficients to the largest coefficients for each possible value of \(|{\bf k}|\). To this end, we create a map that for each magnitude \(|{\bf k}|\) stores the largest \(|\hat U_{{\bf k}}|\) found so far, i.e. we overwrite the existing value (or add it to the map) if no value for the current \(|{\bf k}|\) exists yet, or if the current value is larger than the previously stored one:
Note that it comes in handy here that we have stored the magnitudes of vectors as integers, since this way we do not have to deal with round-off-sized differences between different values of \(|{\bf k}|\).
As the final task, we have to calculate the various contributions to the formula for \(\mu\). We'll only take those Fourier coefficients with the largest magnitude for a given value of \(|{\bf k}|\) as explained above:
With these so-computed sums, we can now evaluate the formula for \(\mu\) derived in the introduction:
The final step is to compute the Sobolev index \(s=\mu-\frac d2\) and store it in the vector of estimated values for each cell:
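The whole fitting step can be sketched as follows, assuming the power-law model \(|\hat U_{\bf k}| \simeq C |{\bf k}|^{-\mu}\) from the introduction; the closed-form least-squares sums mirror the ones the program accumulates.

```python
import math

def estimate_smoothness(k_to_largest, dim):
    """Fit ln|U_k| = ln C - mu * ln|k| by least squares over the retained
    (|k| -> largest |U_k|) pairs, then return the Sobolev index s = mu - dim/2."""
    xs = [math.log(k) for k in k_to_largest]
    ys = [math.log(u) for u in k_to_largest.values()]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    mu = -slope
    return mu - dim / 2.0
```

For coefficients that decay exactly like \(|{\bf k}|^{-3}\) in 2d, the fit recovers \(\mu=3\) and hence \(s=2\).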
The main function is again verbatim what we had before: wrap creating and running an object of the main class into a try block and catch whatever exceptions are thrown, thereby producing meaningful output if anything should go wrong:
In this section, we discuss a few results produced from running the current tutorial program. More results, in particular the extension to 3d calculations and determining how much compute time the individual components of the program take, are given in the hp-paper.
When run, this is what the program produces:
The first thing we learn from this is that the number of constrained degrees of freedom is on the order of 20-25% of the total number of degrees of freedom, at least on the later grids when we have elements of relatively high order (in 3d, the fraction of constrained degrees of freedom can be up to 30%). This is, in fact, on the same order of magnitude as for non-\(hp\) discretizations. For example, in the last step of the step-6 program, we have 18401 degrees of freedom, 4104 of which are constrained. The difference is that in the latter program, each constrained hanging node is constrained against only the two adjacent degrees of freedom, whereas in the \(hp\) case, constrained nodes are constrained against many more degrees of freedom. Note also that the current program also includes nodes subject to Dirichlet boundary conditions in the list of constraints. In cycle 0, all the constraints are actually because of boundary conditions.
Of maybe more interest is to look at the graphical output. First, here is the solution of the problem:
Secondly, let us look at the sequence of meshes generated:
It is clearly visible how the mesh is refined near the corner singularities, as one would expect it. More interestingly, we should be curious to see the distribution of finite element polynomial degrees to these mesh cells:
While this is certainly not a perfect arrangement, it does make some sense: we use low order elements close to boundaries and corners where regularity is low. On the other hand, higher order elements are used where (i) the error was at one point fairly large, i.e. mainly in the general area around the corner singularities and in the top right corner where the solution is large, and (ii) where the solution is smooth, i.e. far away from the boundary.
This arrangement of polynomial degrees of course follows from our smoothness estimator. Here is the estimated smoothness of the solution, with blue colors indicating least smoothness and red indicating the smoothest areas:
The first conclusion one can draw from these images is that apparently the estimated smoothness is a fairly stable quantity under mesh refinement: what we get on the coarsest mesh is pretty close to what we get on the finest mesh. It is also obvious that the smoothness estimates are independent of the actual size of the solution (see the picture of the solution above), as it should be. A point of larger concern, however, is that one realizes on closer inspection that the estimator we have overestimates the smoothness of the solution on cells with hanging nodes. This in turn leads to higher polynomial degrees in these areas, skewing the allocation of finite elements onto cells.
We have no good explanation for this effect at the moment. One theory is that the numerical solution on cells with hanging nodes is, of course, constrained and therefore not entirely free to explore the function space to get close to the exact solution. This lack of degrees of freedom may manifest itself by yielding numerical solutions on these cells with suppressed oscillation, meaning a higher degree of smoothness. The estimator picks this signal up and the estimated smoothness overestimates the actual value. However, a definite answer to what is going on currently eludes the authors of this program.
The bigger question is, of course, how to avoid this problem. Possibilities include estimating the smoothness not on single cells, but cell assemblies or patches surrounding each cell. It may also be possible to find simple correction factors for each cell depending on the number of constrained degrees of freedom it has. In either case, there are ample opportunities for further research on finding good \(hp\) refinement criteria. On the other hand, the main point of the current program was to demonstrate using the \(hp\) technology in deal.II, which is unaffected by our use of a possible sub-optimal refinement criterion.
WhatsApp recently announced that users will soon be able to backup all of their data to Google Drive without that storage eating into their allocated Drive storage. What the company omitted to say at the time was that the data backed up to Drive will be stored in plaintext, without any encryption.
Google Drive Backups Not Encrypted
Starting November 12, WhatsApp backups to Google Drive will not be counted towards users’ storage quota. This is the result of a deal made between Google and Facebook, which owns WhatsApp. Google already analyzes its users' Drive accounts for targeted advertising purposes, so it was to be expected that if Google is going to store WhatsApp’s data for free, the company is going to get something in return (such as access to WhatsApp users’ data).
WhatsApp confirmed in its FAQ that the data that is backed up to Google Drive will not benefit from the same default end-to-end encryption implemented for real-time conversations, with the following line: “Media and messages you back up aren't protected by WhatsApp end-to-end encryption while in Google Drive.”
How WhatsApp Could Have Encrypted The Data
When a communications service is end-to-end encrypted, the users are in full control because they own the private key that is used to encrypt the communications. This key is normally stored locally on the device in a secure environment such as a hardware security module. However, if the users want to change their devices and then access that data, they'd first need to transfer their private key, too. The vast majority of users don't know how to do that or don't want to do it.
Data stored in the cloud isn’t normally encrypted with the user’s private key for the same reason. It's easier to transfer and access unencrypted data. This may also be part of the reason why WhatsApp disabled end-to-end encryption for backed-up data.
However, WhatsApp had another option here it could have easily implemented, which is allowing the user to encrypt the data with a password, just like you would normally encrypt a .zip file. The password could be required only during the setup of the Drive backup, and then it could be stored safely on the device, the same way private keys are stored, so that new messages are backed up automatically. In this case, neither Google, nor anyone else breaking into your Google Drive account would have access to that data.
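To make the password-based scheme concrete, here is a minimal sketch of the key-derivation half using only the Python standard library. The function name and parameters are illustrative, not anything WhatsApp actually ships; the derived key would then feed a symmetric cipher such as AES-GCM, which is outside the standard library and therefore omitted here.

```python
import hashlib
import os

def derive_backup_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit key from a user password with PBKDF2-HMAC-SHA256.
    In a design like the one described above, this key would encrypt the
    backup before upload, so neither Google nor anyone breaking into the
    Drive account could read it without the password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)  # random salt, stored alongside the encrypted backup
key = derive_backup_key("correct horse battery staple", salt)
```

The salt is not secret and can be stored next to the backup; only the password, which stays on the device, is needed to re-derive the key on a new phone.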
On the other hand, if WhatsApp had done that, Google would have had little reason to negotiate this deal with the company. We suspect WhatsApp not encrypting the backups was likely not a technical issue primarily, but a business one.
Moving on From End-to-End Encryption
WhatsApp’s last remaining co-founder, Jan Koum, quit earlier this year after allegedly clashing with Facebook leadership, which apparently wanted to cripple the app’s encryption in order to implement more business-friendly tools into the app.
With both co-founders and many of the original employees quitting WhatsApp, Facebook may start looking to recuperate its $22 billion investment. End-to-end encryption could be in the way of doing that, as it doesn’t allow Facebook to look into users’ private messages or for the company to interpose itself in between users’ conversations. Time will tell how far Facebook will be willing to go with its plans.
strcoll - Man Page

Synopsis

#include <string.h>

int strcoll(const char *s1, const char *s2);
int strcoll_l(const char *s1, const char *s2, locale_t locale);
Description
For strcoll(): The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2017 defers to the ISO C standard.
Return Value
Upon successful completion, strcoll() shall return an integer greater than, equal to, or less than 0, according to whether the string pointed to by s1 is greater than, equal to, or less than the string pointed to by s2 when both are interpreted as appropriate to the current locale. No return value is reserved to indicate an error.
Errors
These functions may fail if:
- EINVAL
The s1 or s2 arguments contain characters outside the domain of the collating sequence.
The following sections are informative.
Examples
Comparing Nodes
The following example uses an application-defined function, node_compare(), to compare two nodes based on an alphabetical ordering of the string field.
#include <string.h>
...
struct node {      /* These are stored in the table. */
    char *string;
    int  length;
};
...
int node_compare(const void *node1, const void *node2)
{
    return strcoll(((const struct node *)node1)->string,
        ((const struct node *)node2)->string);
}
...
Application Usage
The strxfrm() and strcmp() functions should be used for sorting large lists.
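As an illustration of that recommendation, Python's locale module wraps the same pair of C functions, which makes the trade-off easy to demonstrate. Collation depends on the active locale; the example below assumes the default "C" locale.

```python
import locale

words = ["pear", "apple", "banana"]

# strcoll-style comparison: negative, zero, or positive, like the C function.
cmp_result = locale.strcoll("apple", "banana")

# strxfrm precomputes a collation sort key once per string, which is the
# approach recommended above for sorting large lists: O(n) transformations
# instead of a locale-aware comparison on every pair.
ordered = sorted(words, key=locale.strxfrm)
```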
Rationale
None.
Future Directions
None.
See Also
alphasort(), strcmp(), strxfrm()
The Base Definitions volume of POSIX.1-2017, <string.h>
alphasort(3p), localeconv(3p), setlocale(3p), string.h(0p), strxfrm(3p).
Explain an ETF and why they are becoming a staple of investment portfolios
Please explain what an ETF is? Why are ETFs increasingly becoming a staple of investment portfolios? and How are they? Please discuss in detail Provide at least 1 reference
It is often said that money markets are not used to get rich, but to avoid being poor. Do you agree or disagree with this statement? Provide support for your opinion. Which markets do you prefer? Explain why.
Recently there have been large shifts in the prices of stocks in the stock market. What do you think makes the prices of a firm's stock change so much? Include proof of your opinion.
In answering the following questions, it is given that the potential investment has the following range of possible outcomes and probabilities: 10% probability of a -20% return, 40% probability of a 15% return, 40% probability of a 25% return, and a 10% probability of a 50% return. (a) Calculate the weighted mean of the
You plan to purchase a $175,000 house using a 15-year mortgage obtained from your local bank. The mortgage rate offered to you is 7.75 percent. You will make a down payment of 20 percent of the purchase price. a. Calculate your monthly payments on this mortgage. b. Calculate the amount of interest and, separately, principal
5) Taggart Technologies is considering issuing new common stock and using the proceeds to reduce its outstanding debt. The stock issue would have no effect on total assets, the interest rate Taggart pays, EBIT, or the tax rate. What is likely to occur if the company goes ahead with the stock issue?
The Wei Company's last dividend was $1.65. The dividend growth rate is expected to be constant at 1.50% for 2 years, after which dividends are expected to grow at a rate of 8.00% forever. Wei's required return (rs) is 12.00%. What is Wei's current stock price?
Before approving a loan to a small business, the banker must be satisfied with the owner's character. Why is this? Do you agree or disagree? Support your post with material from an outside source or personal experience.
Read the article and answer two questions 1. Do the pressures of living up to analyst's expectations encourage fraudulent financial reporting? 2. What are your thoughts on the guilt/innocence of the Diebold executives when the two different CFOs are accused and the
What is a banker's acceptance? How are they initiated? Why are they desirable for the exporter?
Micro Brewery borrows $300,000 to be paid off in three years. The loan payments are semiannual with the first payment due in six months, and interest is at 6%. What is the amount of each payment? a. $55,379 b. $106,059 c. $30,138 d. $60,276
Travel history analysis is important in forecasting future needs. What role does the City-Pair-Analysis chart and graph play in this inquiry?
** Please see the attached file for the complete problem description ** Consider the following income statement for WatchoverU Savings Inc. (in millions) (see attached). a. What is WatchoverU's expected net interest income at year-end? b. What will be the net interest income at year-end if interest rates rise by 2 perce
The following is ABC Inc.'s balance sheet (in thousands) (see attached). Also assume that sales equal $500, cost of goods sold equals $360, interest payments equal $62, taxes equal $56, and net income equals $22. Assume the beginning retained earnings is $0, the market value of equity is equal to its book value, and the com
# Introduce Static Analysis in the Process, Don't Just Search for Bugs with It
*This article is an authorized translation of [the original post](https://habr.com/ru/post/436868/). The translation was made with the kind help of the guys from PVS-Studio. Thank you, guys!*
What encouraged me to write this article is the considerable quantity of material on static analysis that has been coming up more and more often recently. Firstly, there is the [blog of PVS-Studio](https://habr.com/en/company/pvs-studio/blog/), which actively promotes itself on Habr by posting reviews of errors found by their tool in open source projects. PVS-Studio has recently implemented [Java support](https://habr.com/en/company/pvs-studio/blog/436496/), and, of course, the developers of IntelliJ IDEA, whose built-in analyzer is probably the most advanced for Java today, [could not stay away](https://habr.com/ru/company/jetbrains/blog/436278/).
When reading these reviews, I get a feeling that we are talking about a magic elixir: click the button, and here it is — the list of defects right in front of your eyes. It seems that as analyzers get more advanced, more and more bugs will be found, and products, scanned by these robots, will become better and better without any effort on our part.
Well, there are no magic elixirs. I would like to talk about what is usually left unsaid in posts of the "here are the things our robot can find" kind: what analyzers are not able to do, what their real role and place in the software delivery process is, and how to implement the analysis properly.

*Ratchet (source: [Wikipedia](https://ru.wikipedia.org/wiki/%D0%A5%D1%80%D0%B0%D0%BF%D0%BE%D0%B2%D0%BE%D0%B9_%D0%BC%D0%B5%D1%85%D0%B0%D0%BD%D0%B8%D0%B7%D0%BC#/media/File:Sperrklinke_Schema.svg)).*
What Static Analyzers Will Never Be Able to Do
----------------------------------------------
What is the analysis of source code from the practical point of view? We take the source files and get some information about the system quality in a short time (much shorter than a tests run). Principal and mathematically insurmountable limitation is that this way we can answer only a very limited subset of questions about the analyzed system.
The most famous example of a task not solvable by static analysis is the [halting problem](https://en.wikipedia.org/wiki/Halting_problem): a theorem proving that there is no general algorithm that determines, for arbitrary source code, whether the program loops forever or eventually halts. An extension of this theorem is [Rice's theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem), which asserts that for any non-trivial property of computable functions, the question of whether a given program computes a function with this property is algorithmically undecidable. For example, you cannot write an analyzer that determines from source code whether the analyzed program implements a specific algorithm, say, one that squares an integer.
Thus, the capabilities of static analyzers have insurmountable limitations. A static analyzer will never be able to detect all cases of, for example, a "null pointer exception" bug in languages without [void safety](https://en.wikipedia.org/wiki/Void_safety), or all occurrences of "attribute not found" in dynamically typed languages. All that the most perfect static analyzer can do is catch particular cases, and among all the possible problems with your source code they are, without exaggeration, a drop in the ocean.
Static Analysis Is not a Search for Bugs
----------------------------------------
Here is a conclusion that follows from the above: static analysis is not a way to decrease the number of defects in a program. I would venture to claim the following: when first applied to your project, it will find "amusing" places in the code, but most likely will not find any defects that affect the quality of your program.
Examples of defects automatically found by analyzers are impressive, but we should not forget that these examples were found by scanning a huge set of code bases against the set of relatively simple rules. In the same way hackers, having the opportunity to try several simple passwords on a large number of accounts, eventually find the accounts with a simple password.
Does this mean that it is not necessary to apply static analysis? Of course not! It should be applied for the same reason you might want to verify every new password in the stop-list of non-secure passwords.
Static Analysis Is More Than Search for Bugs
---------------------------------------------
In fact, the tasks that can be solved by the analysis in practice are much wider. Because generally speaking, static analysis represents any check of source code, carried out before running it. Here are some things you can do:
* A check of the coding style in the broadest sense of this word. It includes both a check of formatting and a search of empty/unnecessary parentheses usage, setting of threshold values for metrics such as a number of lines / cyclomatic complexity of a method and so on — all things that complicate the readability and maintainability of code. In Java, Checkstyle represents a tool with such functionality, in Python — `flake8`. Such programs are usually called «linters».
* Not only executable code can be analyzed. Resources like JSON, YAML, XML, and `.properties` files can (and must!) be automatically checked for validity. After all, it's better to find out that, say, a JSON structure is broken because of an unpaired quote at the early stage of automated pull-request checks than during test execution or at run time, isn't it? There are relevant tools, for example, [YAMLlint](https://github.com/adrienverge/yamllint), [JSONLint](https://github.com/zaach/jsonlint) and `xmllint`.
* Compilation (or parsing for dynamic programming languages) is also a kind of static analysis. Usually, compilers can issue warnings that signal about problems with the quality of the source code, and they should not be ignored.
* Sometimes compilation is applied not only to executable code. For example, if you have documentation in the [AsciiDoctor](https://asciidoctor.org/) format, then in the process of compiling it into HTML/PDF, the AsciiDoctor ([Maven plugin](https://github.com/asciidoctor/asciidoctor-maven-plugin)) can issue warnings, for example, on broken internal links. This is a significant reason not to accept a pull request with documentation changes.
* Spell checking is also a kind of static analysis. The [aspell](http://aspell.net/) utility is able to check spelling not only in documentation, but also in source code of programs (comments and literals) in various programming languages including C/C++, Java and Python. Spelling error in the user interface or documentation is also a defect!
* Configuration tests are actually a form of static analysis as well, since during their execution they do not run the product's source code, even though they themselves are executed as `pytest` unit tests.
As we can see, search of bugs has the least significant role in this list and everything else is available when using free open source tools.
Which of these static analysis types should be used in your project? Sure, the more the better! What is important here is a proper implementation, which will be discussed further.
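As a minimal illustration of the resource-validity kind of check, here is a sketch of a JSON quality gate in Python. It is a stand-in for the dedicated tools mentioned above, not a replacement for them: run it in CI and fail the build whenever it reports problems.

```python
import json
import pathlib

def check_json_resources(root):
    """Validate every .json file under `root` and return a list of
    human-readable problems. An empty list means the gate is passed;
    a CI script would exit with a non-zero status otherwise."""
    problems = []
    for path in sorted(pathlib.Path(root).rglob("*.json")):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError) as exc:
            problems.append(f"{path}: {exc}")
    return problems
```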
A Delivery Pipeline As a Multistage Filter and Static Analysis As Its First Stage
---------------------------------------------------------------------------------
A pipeline with a flow of changes (starting from changes of the source code to delivery in production) is a classic metaphor of continuous integration. The standard sequence of stages of this pipeline looks as follows:
1. static analysis
2. compilation
3. unit tests
4. integration tests
5. UI tests
6. manual verification
Changes rejected at the N-th stage of the pipeline are not passed on stage N+1.
Why so and not otherwise? In the part of the pipeline, which deals with testing, testers recognize the well-known test pyramid:

*Test pyramid. Source: [the article](https://martinfowler.com/bliki/TestPyramid.html) by Martin Fowler.*
At the bottom of this pyramid there are tests that are easier to write, which are executed faster and don't tend to produce false positives. Therefore, there should be more of them, they should cover most of the code and should be executed first. At the top of the pyramid the situation is quite opposite, so the number of integration and UI tests should be reduced to the necessary minimum. People in this chain are the most expensive, slow and unreliable resource, so they are located at the very end and do the work only if the previous steps haven't detected any defects. In the parts not related to testing, the pipeline is built by the same principles!
I'd like to suggest the analogy in the form of a multistage system of water filtration. Dirty water (changes with defects) is supplied in the input, and as the output we need to get clean water, which won't contain all unwanted contaminations.

*Multi-stage filter. Source: [Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Milli-Q_Water_filtration_station.JPG)*
As you may know, purification filters are designed so that each subsequent stage is able to remove contaminant particles of smaller size. Input stages of rough purification have greater throughput and lower cost. In our analogy it means that input quality gates have greater performance, require less effort to launch and have less operating costs. The role of static analysis, which (as we now understand) is able to weed out only the most serious defects is the role of the sump filter as the first stage of the multi-stage purifiers.
Static analysis doesn't improve the quality of the final product by itself, the same as the «sump» doesn't make the water potable. Yet in conjunction with other pipeline elements, its importance is obvious. Even though in a multistage filter the output stages potentially can remove everything the input ones can — we're aware of consequences that will follow when attempting to get by only with stages of fine purification, without input stages.
The purpose of the «sump» is to offload subsequent stages from the capture of very rough defects. For example, a person performing code review should not be distracted by incorrectly formatted code and code standards violation (like redundant parentheses or branching nested too deeply). Bugs like NPE should be caught by the unit tests, but if before that the analyzer indicates that a bug is to appear inevitably — this will significantly accelerate its fixing.
I suppose it is now clear why static analysis doesn't improve the quality of the product when applied occasionally, and must be applied continuously to filter changes with serious defects. The question of whether the application of a static analyzer improves the quality of your product is roughly equivalent to the question «if we take water from dirty ponds, will its drinking quality be improved when we pass it through a colander?»
Introduction in a Legacy Project
--------------------------------
An important practical issue: how to implement static analysis in the continuous integration process, as a «quality gate»? In case of automated tests all is clear: there is a set of tests, a failure of any of them is a sufficient reason to believe that a build hasn't passed a quality gate. An attempt to set gate in the same way by the results of static analysis fails: there are too many analysis warnings on legacy code, you don't want to ignore them all, on the other hand it's impossible to stop the product delivery just because there are analyzer warnings in it.
For any project, the analyzer issues a great number of warnings the first time it is applied. The majority of these warnings have nothing to do with the proper functioning of the product. It is impossible to fix all of them, and many don't have to be fixed at all. In the end, we know that our product actually works even before the introduction of static analysis!
As a result, many developers confine themselves to occasional usage of static analysis, or to using it only in an informative mode where an analyzer report is produced during the project build. This is equivalent to the absence of any analysis: if we already have many warnings, the emergence of another one (however serious) goes unnoticed when changing the code.
Here are the known ways of quality gates introduction:
* Setting the limit of the total number of warnings or the number of warnings, divided by the number of lines of code. It works poorly, as such a gate lets changes with new defects through until their limit is exceeded.
* Marking all of the old warnings in the code as ignored at a certain moment and failing the build when new warnings appear. Such functionality is provided by PVS-Studio and some other tools, for example, Codacy. I haven't happened to work with PVS-Studio. As for my experience with Codacy, their main problem is that distinguishing an old error from a new one is a complicated algorithm that doesn't always work, especially if files change considerably or get renamed. To my knowledge, Codacy could overlook new warnings in a pull request and at the same time block a pull request because of warnings not related to the changes in that PR.
* In my opinion, the most effective solution is the "ratcheting" method described in the "[Continuous Delivery](https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912)" book. The basic idea is that the number of static analysis warnings is recorded as a property of each release, and only changes that do not increase this number are allowed.
Ratchet
-------
It works in the following way:
1. In the initial phase, an entry with the number of warnings found by the code analyzers is added to the release metadata. Thus, when building the main branch, not just "release 7.0.2" is written to your repository manager, but "release 7.0.2, containing 100500 Checkstyle warnings". If you are using an advanced repository manager (such as Artifactory), it's easy to keep such metadata for your releases.
2. When building, each pull request compares the number of resulting warnings with their number in a current release. If a PR leads to a growth of this number, the code does not pass quality gate on static analysis. If the number of warnings is reduced or not changed — then it passes.
3. During the next release the recalculated number will be written in the metadata again.
Thus slowly but surely, the number of warnings will be converging to zero. Of course, the system can be fooled by introducing a new warning and correcting someone else's. This is normal, because in the long run it gives the result: warnings get fixed, usually not one by one, but by groups of a certain type, and all easily-resolved warnings are resolved fairly quickly.
This graph shows the total number of Checkstyle warnings for six months of such a «ratchet» on the [one of our OpenSource projects](https://github.com/CourseOrchestra/celesta). The number of warnings has been considerably reduced, and it happened naturally, in parallel with the development of the product!

I apply the modified version of this method. I count the warnings separately for different project modules and analysis tools. The YAML-file with metadata about the build, which is formed in doing so, looks as follows:
```
celesta-sql:
checkstyle: 434
spotbugs: 45
celesta-core:
checkstyle: 206
spotbugs: 13
celesta-maven-plugin:
checkstyle: 19
spotbugs: 0
celesta-unit:
checkstyle: 0
spotbugs: 0
```
In any advanced CI-system a «ratchet» can be implemented for any static analysis tools, without relying on plugins and third-party tools. Each of the analyzers issues its report in a simple text or XML format, which will be easily analyzed. The only thing to do after, is to write the needed logic in a CI-script. You can peek and see [here](https://github.com/CourseOrchestra/2bass/blob/dev/Jenkinsfile) or [here](https://github.com/CourseOrchestra/celesta/blob/dev/Jenkinsfile) how it is implemented in our source projects based on Jenkins and Artifactory. Both examples depend on the library [ratchetlib](https://github.com/inponomarev/ratchetlib): method `countWarnings()` in the usual way counts xml tags in files generated by Checkstyle and Spotbugs, and `compareWarningMaps()` implements that very ratchet, throwing an error in case, if the number of warnings in any of the categories is increasing.
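A minimal Python sketch of those two operations follows; the linked projects implement them in a Groovy library, and the names below mirror those functions, but the code itself is purely illustrative.

```python
import xml.etree.ElementTree as ET

def count_warnings(report_xml):
    """Count <error> elements in a Checkstyle-style XML report string."""
    return sum(1 for _ in ET.fromstring(report_xml).iter("error"))

def compare_warning_maps(baseline, current):
    """The ratchet: list every (module, tool) pair whose warning count grew
    relative to the last release. An empty list means the gate is passed;
    a CI script would fail the build otherwise."""
    regressions = []
    for module, tools in current.items():
        for tool, count in tools.items():
            allowed = baseline.get(module, {}).get(tool, 0)
            if count > allowed:
                regressions.append(f"{module}/{tool}: {count} > {allowed}")
    return regressions
```

Note that a module or tool absent from the baseline gets an allowance of zero, so newly added modules must start clean.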
An interesting way of «ratchet» implementation is possible for analyzing spelling of comments, text literals and documentation using aspell. As you may know, when checking the spelling, not all words unknown to the standard dictionary are incorrect, they can be added to the custom dictionary. If you make a custom dictionary a part of the source code project, then the quality gate for spelling can be formulated as follows: running aspell with standard and custom dictionary [should not](https://github.com/CourseOrchestra/celesta/blob/271dcfc8dc3ad65ac2d1dcaa39b7fd3ea8fb5891/Jenkinsfile#L36) find any spelling mistakes.
The Importance of Fixing the Analyzer Version
---------------------------------------------
In conclusion, it is necessary to note the following: whichever way you choose to introduce the analysis into your delivery pipeline, the analyzer version must be pinned. If you let the analyzer update itself spontaneously, then when building another pull request new defects may emerge that relate not to the changed code but to the fact that the newer analyzer is simply able to detect more of them, and this will break your process of pull request verification. Upgrading the analyzer must be a conscious action. Anyway, rigid version pinning of each build component is a general requirement and a subject for another article.
Conclusions
-----------
* Static analysis will not find bugs and will not improve the quality of your product as a result of a single run. Only its continuous running in the process of delivery will produce a positive effect.
* Bug hunting is not the main objective of analysis at all. The vast majority of useful features are available in open-source tools.
* Introduce quality gates based on the results of static analysis at the first stage of the delivery pipeline, using the «ratchet» for legacy code.
References
----------
1. [Continuous Delivery](https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912)
2. [Alexey Kudryavtsev: Program analysis: are you a good developer?](https://2018.jpoint.ru/en/talks/1qghn5o70siuweuqeesuoa/) A talk on different methods of code analysis, not just static ones!
---
Excerpts from the discussion of the original article
----------------------------------------------------
**Evgeniy Ryzhkov**
Ivan, thanks for the article and for helping us do our work, which is to popularize the technology of static code analysis. You are absolutely right that articles from the PVS-Studio blog may lead immature minds to conclusions like «I'll check the code just once, fix the errors and that will do». This is my personal pain, which I haven't known how to overcome for several years now. The thing is that articles about project checks:
1. Cause a wow effect. People like to read how developers from companies like Google, Epic Games, Microsoft and others sometimes fail. People like to think that anyone can be wrong, that even industry leaders make mistakes. People like to read such articles.
2. In addition, authors can write such articles in a steady stream, without having to think hard. Of course, I don't want to offend our guys who write these articles. But coming up with a genuinely new article every time is much harder than writing an article about a project check (a dozen bugs, a couple of jokes, mixed up with unicorn pictures).
You wrote a very good article. I also have a couple of articles on this topic, and so do other colleagues. Moreover, I visit various companies with a talk on «The philosophy of static code analysis», in which I speak about the process itself, not about specific bugs.
But it is not possible to write 10 articles about the process, while to promote our product we need to write a lot and regularly. I'd like to comment on a few more points from the article in a separate comment, to keep the discussion convenient.
This short [article](https://www.viva64.com/ru/b/0531/) covers that «Philosophy of static code analysis», which is the topic I present when visiting different companies.
**Ivan Ponomarev**
Evgeniy, thanks so much for the informative review of the article! Yes, you understood my concern about the impact on «immature minds» absolutely correctly!
There is no one to blame here, as authors of articles/talks on *analyzers* don't aim to make articles/talks on *analysis*. But after a couple of recent posts by [Andrey2008](https://habr.com/en/users/andrey2008/) and [lany](https://habr.com/en/users/lany/), I decided that I couldn't stay silent any more.
**Evgeniy Ryzhkov**
Ivan, as I wrote above, I will comment on three points of the article; it means that I agree with the points I don't comment on.
1. *The standard sequence of stages of this pipeline looks as follows...*
I don't agree that the first step is static analysis and only the second one is compilation. I believe that, on average, checking that the code compiles is faster and more logical than an immediate run of the «heavier» static analysis. We can discuss this if you think otherwise.
2. *I haven’t happened to work with PVS-Studio. As for my experience with Codacy, their main problem is that the distinction of an old and a new error is a complicated and not always working algorithm, especially if files change considerably or get renamed.*
In PVS-Studio this is done in an awesomely handy way. This is one of the killer features of the product, which, unfortunately, is difficult to describe in articles, so people are not very familiar with it. We gather information about the existing errors in a base: not just «the file name and the line», but also additional information (a hash of three lines: current, previous, next), so that if the code fragment shifts, we can still find it. Therefore, after minor modifications we still understand that it's an old error, and the analyzer doesn't complain about it. Now someone may say: «Well, what if the code has changed a lot, then this would not work, and you complain about it as if it were newly written?» Yes, we complain. But it actually is new code: if the code has changed a lot, it is now new code rather than the old one.
Thanks to this feature, we personally participated in introducing the analyzer into a project with 10 million lines of C++ code, which is «touched» every day by a bunch of developers. Everything went without any problems. So we recommend this feature of PVS-Studio to anyone who introduces static analysis into his process. The option with fixing the number of warnings per release seems less appealing to me.
3. *Whichever way you choose to introduce your delivery pipeline analysis, the analyzer version must be fixed*
I can't agree with this; I'm a definite opponent of such an approach. I recommend updating the analyzer automatically, as we add new diagnostics and improve the old ones. Why? Firstly, you'll get warnings for new real errors. Secondly, some old false positives might disappear once we overcome them.
Not updating the analyzer is the same as not updating anti-virus databases («what if they start notifying about viruses»). We will not discuss here the true usefulness of antivirus software as a whole.
If after upgrading the analyzer version you have many new warnings, then suppress them, as I wrote above, through that feature. But not updating the version... As a rule, such clients (and sure, there are some) don't update the analyzer version for years; no time for that. They PAY for license renewal but don't use the new versions. Why? Because they once decided to fix a version. The product of today and of three years ago are night and day. It turns out like «I'll buy the ticket but won't come».
**Ivan Ponomarev**
1. Here you are right. I'm ready to agree with putting a compiler/parser at the beginning, and this should even be changed in the article! For example, the notorious `spotbugs` simply cannot act in a different way at all, as it analyzes compiled bytecode. There are exotic cases: for example, in a pipeline for Ansible playbooks, static analysis is better placed before parsing because it is lighter there. But that is exotic indeed.
2. *The option with fixing the number of warnings per release seems less appealing to me...* Well, yes, it is less appealing and less technical, but very practical :-) The main thing is that it is a general method by which I can effectively implement static analysis anywhere, even in the scariest project, with any codebase and any analyzer (not necessarily yours), using Groovy or bash scripts on CI. By the way, we are now counting the warnings separately for different project modules and tools, and if we divided them in a more granular way (per file), it would be much closer to the method of comparing new and old warnings. But we went this way, and I liked the ratcheting because it stimulates developers to monitor the total number of warnings and slowly decrease it. Would the old/new method motivate developers to watch the curve of the warning count? Probably yes, maybe no.
As for point 3, here's a real example from my experience. Look at [this commit](https://github.com/AdoptOpenJDK/openjdk-infrastructure/pull/648/files). Where did it come from? We set up linters in the TravisCI script; they worked there as quality gates. But suddenly, when a new version of Ansible-lint came out that found more warnings, some pull request builds began failing due to warnings in code which those pull requests hadn't even changed! In the end, the process was broken, and urgent pull requests were merged without passing the quality gates.
Nobody says that the analyzers must not be updated. Of course they must, like all other build components! But it must be a conscious process, reflected in the source code, and each time the actions will depend on circumstances (whether we fix the newly detected warnings or just reset the «ratchet»).
**Evgeniy Ryzhkov**
When I am asked, «Is there an ability to check each commit with PVS-Studio?», I answer that yes, there is. And then I add: «Just, for God's sake, don't fail the build if PVS-Studio finds something!» Because otherwise, sooner or later, PVS-Studio will be perceived as a disruptive thing. And there are situations when IT IS NECESSARY to commit quickly rather than fight with tools that don't let the commit pass.
My opinion is that it's bad to fail the build in this case; it's good to send messages to the authors of the problem code.
**Ivan Ponomarev**
My opinion is that there is no such thing as «we need to commit quickly»; this is just a poor process. A good process generates speed not because we break the process or quality gates whenever we need to «do it quickly».
This does not contradict the fact that we can avoid failing a build on some classes of static analysis findings. It just means that the gate is set up so that certain types of findings are ignored, while for other findings we have Zero Tolerance.
[My favorite commitstrip on the «quickly» topic.](http://twitter.com/commitstrip/status/932673606840127489)
**Evgeniy Ryzhkov**
I'm a definite opponent of the approach of using an old analyzer version. What if a user finds a bug in that version? He writes to the tool developer, and the tool developer will even fix it, but in the new version. No one will support an old version for some clients, unless we're speaking about contracts worth millions of dollars.
**Ivan Ponomarev**
Evgeniy, we're not talking about that at all. Nobody says we must keep the old versions. It is about fixing the versions of build dependencies for their controlled update; it is a common discipline, and it applies to everything, including libraries and tools.
**Evgeniy Ryzhkov**
I understand how «it should be done in theory». But I see only two choices made by the clients: either stick to the new version or to the old one. We ALMOST never see situations like «we have discipline and lag two releases behind the current version». It's not important for me to say right now whether this is good or bad; I just say what I see.
**Ivan Ponomarev**
I got it. Anyway, it all strongly depends on what tools / processes your clients have and how they use them. For example, I know nothing about how it all works in the C++ world.
Source: https://habr.com/ru/post/440610/
Tiny.
Lode can be installed with pip:
$ pip install lode
Or from source:
$ git clone
$ cd lode
$ python setup.py install
For the most basic usage, just import lode and call the log function. Usage is largely the same as the print function, with it taking any number of positional arguments, formatting those to a string, and joining them together separated by spaces.
import lode; lode.log('hi there!')
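To make that behavior concrete, a hypothetical re-implementation of such a log function could look like this (this is not lode's actual source, just an illustration of the described semantics):

```python
def log(*args):
    """Format each positional argument and join with spaces, like print()."""
    message = " ".join(str(arg) for arg in args)
    print(message)
    return message

log("hi there!")       # prints: hi there!
log("answer:", 42)     # prints: answer: 42
```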
Tests are stored in the tests subdirectory of the main lode package. Set up lode for testing by running the following commands in the root directory of the git repository:
$ pip install -e .
$ pip install -r test-requirements.txt
Then run the tests:
$ py.test -v
Source: https://pypi.org/project/lode/
Interactivity. Every web app needs it.
And Rails comes with a number of tools, which, together, generate that feeling of a fast, responsive app, with mostly server rendered HTML and a little Javascript sprinkled on top. ActiveJob fetches data from a remote JSON api in the background. ActionCable routes data from the background job out to the front end, and Stimulus.js puts the new data right into place.
Here is a tutorial that wraps these three pieces together. It will pull the recent Rails Repository tags from Github and display them. It will then let someone select the tag, and load the commits using a previous tutorial’s Stimulus controller. It even has a challenge assignment at the end for a little extra practice.
Background Service Jobs
ActiveJob provides a mechanism to take work out of the main request-response cycle between the browser and the application server. To show how multiple different users can be served without receiving the wrong data, a unique UUID is generated in the Ruby controller when the page is rendered and passed to the ActionCable channel and to the ActiveJob.
rails g job LoadGithubCommits
The job loads the tags from Github's API, parses them to JSON, then passes them to a partial for rendering. ActionCable broadcasts the result back to the client whose request_id matches the ActionCable channel waiting for the commits.
class LoadGithubCommitsJob < ApplicationJob
  queue_as :default

  def perform(request_id)
    tags = JSON.parse Http.get("").to_s
    ActionCable.server.broadcast "CommitTagsChannel:#{request_id}", {
      tags: CommitsController.render(
        partial: 'commits',
        locals: { tags: tags }
      ).squish
    }
  end
end
ActionCable Channel
The CommitTagsChannel kicks off the job when a client subscribes to the channel. It passes along the request_id to keep track of the client's request.
rails g channel CommitTagsChannel
The channel, rather than the ActionController, is responsible for kicking off the job, because a race condition occurs if the job is started by the ActionController: the job may finish before the ActionCable channel is set up, and the data would never be loaded by the client. Starting the background job via ActionCable guarantees the client is listening before anything is kicked off. There is nothing worse than speaking when no one is listening, and just as bad is processing data that cannot be sent over a WebSocket connection.
class CommitTagsChannel < ApplicationCable::Channel
  def subscribed
    stream_from "CommitTagsChannel:#{params[:request_id]}"
    LoadGithubCommitsJob.perform_later params[:request_id]
  end

  def unsubscribed
    stop_all_streams
  end
end
Stimulus – Commits Controller
The Stimulus controller is responsible for setting up the ActionCable connection, and then inserting the HTML received over the wire into the page.
The Javascript for
commit_tags_controller.js, based off of a previous example, is:
import { Controller } from "stimulus"
import createChannel from "cables/cable";

export default class extends Controller {
  static targets = ['tags']

  connect() {
    let thisController = this;
    createChannel(
      { channel: 'CommitTagsChannel', request_id: this.data.get('request') },
      {
        received({ tags }) {
          thisController.tagsTarget.innerHTML = tags;
        }
      });
  }
}
A second controller comes over the wire. It takes a selected tag and pulls in data from a JSON API and templates it.
The HTML and Controller
This controller generates the
request_id as a UUID using SecureRandom. The HTML of the page starts with a single Stimulus controller, and a loading spinner.
<div data-
  <h1 class="title">Latest Rails Tags</h1>
  <div data-
    <div class="columns">
      <div class="column">
        Refreshing Tags...
        <svg width="44px" height="44px" viewBox="0 0 44 44" class="spinner show">
          <circle fill="none" stroke-
          </circle>
        </svg>
      </div>
    </div>
  </div>
</div>
The
_commits.html.erb partial takes the JSON response from Github. It loads each tag, and sets up the JSON templating Stimulus controller.
<div class="columns" data-
  <div class="column is-one-third">
    <aside class="menu">
      <ul class="menu-list">
        <% tags.each do |tag| %>
          <li>
            <a data-<%= tag['name'] %></a>
          </li>
        <% end %>
      </ul>
    </aside>
  </div>
  <div class="column content is-two-thirds">
    <p>rails/rails@<span data-</span></p>
    <ul data-
    </ul>
  </div>
</div>
Some More Practice
You can find all the code on Github at.
Here are two different practice scenarios to level you up:
I. Use a different API endpoint
Try this tutorial but with a different API endpoint.
II. Use a real life request identifier
Try this tutorial, but add in a different kind of mechanism for linking the background job with the client waiting for data.
Interactivity
This example shows the powerful tools provided by Rails that let you make interactive web apps, without a whole rewrite in the Javascript flavor of the month.
Comments or Questions? Let me know how the practice problems went on twitter @jpbeatty
Want To Learn More?
Try out some more of my Stimulus.js Tutorials.
Source: https://johnbeatty.co/2019/02/19/stimulus-actioncable-activejob-loading-asynchronous-api-data/
How to Make a Chatbot: AWS Lex Weather Bot for Slack Tutorial
Creating chatbots is fun. With a variety of tools like AWS Lex, you can build conversational bots for business or for entertainment and use them in any text-powered applications.
Implementations of conversational bots may include hailing a taxi, setting medical appointments, organizing calendars, shopping or money transfers.
In this article I will guide you through the process of creating an AWS Lex-based weather bot for Slack. First, I wanted to test the possibilities of this Amazon service for building conversational interfaces. I chose Slack as we use it on a daily basis at Apptension.
Before we continue with a step-by-step tutorial of how I approached developing my weather bot, let’s take a quick overview of how chatbots work.
How chatbots work
At first glance, chatbots look just like other apps. They consist of an application layer, a database and APIs of external services. But in order to understand the user’s intent, chatbots need to be trained with the data.
Based on how bots work, we can divide chatbots into three classes:
- Rule-based systems (aka pattern matching) – these are bots that use patterns to match queries like “Call <somebody>”, “Book a hotel in <city>” with the correct answers.
- Artificial Intelligence – these bots use machine learning to select a category (intent) for the input phrases.
- Hybrid – a program that uses pattern matching, but is also backed by the human customer support.
A chatbot workflow looks as follows:
As you can see, the workflow consists of two components:
- The natural language understanding unit is responsible for natural language processing to decide which intent has to be selected (based on pattern matching or using ML techniques).
- The dialog manager holds the state of the conversation, keeps track of the selected intents as well as the slots, and communicates with the NLU unit to ask for the missing values.
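To make the dialog manager's role concrete, here is a toy sketch (all names are illustrative and belong to no real framework): it tracks the selected intent, accumulates slot values, and keeps asking for whichever required slot is still missing:

```python
class DialogManager:
    """Holds conversation state: the selected intent and its slot values."""

    def __init__(self, required_slots):
        self.required_slots = required_slots   # e.g. {"intent": ["slot", ...]}
        self.intent = None
        self.slots = {}

    def update(self, intent, slots):
        """Merge the NLU unit's output into the conversation state."""
        if intent:
            self.intent = intent
        self.slots.update({k: v for k, v in slots.items() if v})
        return self.next_prompt()

    def next_prompt(self):
        """Ask for the first missing required slot, or signal fulfillment."""
        for slot in self.required_slots.get(self.intent, []):
            if slot not in self.slots:
                return f"Please provide a value for '{slot}'."
        return "READY"   # all slots filled: the intent can be fulfilled

dm = DialogManager({"GetWeather": ["location"]})
dm.update("GetWeather", {})              # asks for 'location'
dm.update(None, {"location": "Berlin"})  # returns "READY"
```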
Below, you can see how the raw input text is being processed by the natural language understanding unit. The main purpose is to extract the structure of a sentence from the raw input, using the following scheme:
As in any natural language algorithms, the first step is to tokenize and clean up the raw input.
Tokenization splits the raw input into a sequence of "words", removing articles, prepositions and other auxiliary parts of the sentence; in other words, it breaks sentences up into words. Next, using this tuple of words, the algorithm builds a structure tree out of the sentence.
By using the tree, we are able to get a partial representation of where the main verb in the sentence is (e.g. like, which acts as the function), and then, in the final representation, we can assign the main noun (e.g. John) as the subject of the action like, and cat as its object.
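The tokenization and cleanup step can be sketched naively as follows (real NLU units use proper tokenizers and parsers; the stopword list here is a made-up minimal one):

```python
import re

# Articles, prepositions and other auxiliary words to drop (illustrative list).
STOPWORDS = {"a", "an", "the", "in", "of", "to", "is"}

def tokenize(raw_input):
    """Lowercase, split on non-word characters, drop stopwords."""
    words = re.findall(r"[a-z']+", raw_input.lower())
    return [w for w in words if w not in STOPWORDS]

tokenize("John likes the cat")   # returns ['john', 'likes', 'cat']
```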
The interaction between different components may look like this:
This particular workflow corresponds to the bot that I’ll describe later on in my weather bot example.
DialogFlow vs. AWS Lex: Examples of bot platforms
If you want to develop a chatbot, you can create one from scratch, or you can use a platform like Amazon’s AWS Lex to do it in an easier way.
To give you a better overview of possibilities of such a service, I’ve compared AWS Lex (the one I ended up using for this tutorial) with Google’s Dialog Flow (another popular chatbot platform available).
DialogFlow:
- Supports multiple languages (also multilanguage agents)
- Cooperation with backend via webhooks (not limited to Google services) or custom Cloud Functions
- Integrations with: Slack, Facebook, Twitter, Skype, Kik, Telegram, Amazon Alexa, Microsoft Cortana, etc.
- Enriched response cards for different integrations (eg. Slack, Facebook)
- Voice response via Google Assistance (using SSML format)
- Prebuilt agents that can be added to the project
- Small talks: simple agents that enhance personal experience (responding to “Hi”, “How are you”, etc.)
- Easy access to session attributes (so-called context)
- Intents can be triggered by button click, etc., not only by text recognition
AWS Lex:
- Only US English
- Cooperation with backend only via AWS Lambda
- Integrations with: Slack, Facebook, Kik, Twillio SMS, (it’s possible to export chatbot to Amazon Alexa platform)
- Enriched response cards for different integrations (eg. Slack, Facebook)
- Voice response by using AWS Polly service (using SSML format)
- Session attributes only available via API
DialogFlow is a more mature product at this stage, offering multiple integrations with external services and backend resources. However, if you work inside an Amazon ecosystem, then Lex may seem like a more natural choice.
Now, as you know how chatbots work and what tools you can use to create one, let’s take a closer look at my weather bot, developed with AWS Lex.
Creating a bot: Weather information on Slack
I've divided this example into five steps, so we can discuss them one by one at every stage of developing a bot. These steps are:
- Creating a bot
- Defining intents
- Adding the engine
- Testing the bot
- Connecting with Slack
Let’s start with setting our bot with AWS Lex.
Step 1: Create a bot
AWS Lex projects are composed of three objects:
- Bot
- Intents
- Slots
The figure below depicts the relationship between these elements. As you can see, a bot has to consist of at least one intent.
The intent is what the user asks for, and will be executed whenever the intent is selected by the NLU module and all required slots are filled up. Intents are defined by a set of sample sentences, which are used to train the model, and slots. The sentences should be defined in such a way that they contain slots.
For example, if we have a slot type City, then one of our sentences could be Show me the weather in {city}. Now, the underlying ML uses this combination of sentences and slot types to train the model.
Slots are used to fetch the parameters required by the intent to fulfill the user request. There are two types of slots: the predefined and the custom ones. Amazon Lex supports built-in slot types from the Alexa Skills Kit.
Apart from the predefined slots, we can define our own slot type using a dialog shown below.
Here, you have to specify your Slot type name and its Description. Then the Slot Resolution defines whether allowed values are restricted to the defined list. Alternatively, you can set Slot Resolution to expand slot values, so they will be used as training data.
With the enumerated restricted list of values, you can specify the synonyms that will be resolved to the declared Value.
Set up your project
We start the project from creating a bot, following the image below. First, you have to select if you want to see the sample bots or if you want to create a custom one. We will continue with the second option.
The following fields need to be specified here:
- Bot name – contains the name of your bot, however, it’s restricted only to letters without spaces.
- Language – you currently have no other choices than English.
- Output voice – select whether the bot produces sound output and what voice will be used to synthesize it.
- Session timeout – controls how long AWS Lex will keep the context, the slot data and any of session attributes. This can be set between zero and 1440 minutes (24 hours).
The last question is if the bot is a subject of COPPA.
The result is the following dashboard of the bot:
At the top you can see the name of the bot and a dropdown list with its version.
Use the Build button to create a bot, which can be tested later on. By clicking Publish button you will push the bot live.
From the list on the left side of the screen you can create a list of intents and slot types your bot will use, that will be visible in the central part of the screen. The right panel is where you can test your bot.
Step 2: Define intents
To define intent, first you need to name it. The rule we discussed when naming the bot applies here, too.
Now, you can see the following screen that allows you to define the whole intent.
In the Sample utterances, you need to place the sample sentences that will be used to train the model. The sentences can also contain the slot variables, e.g. What is the weather; What is the weather in {location}, What is the weather in Berlin.
Next, you can specify if you use a custom hook for validation (Lambda initialization and validation) of the user input.
The Slots section contains a list of all slots that are used in the intent. Each of the slot should have a unique name, type and optionally, a custom prompt. Moreover, you can declare if the slot is required or optional.
The slot’s type can be selected from a list of predefined slots or you can create a custom one. In this example, we use AMAZON.EUROPE_CITY.
In the Confirmation prompt, you can specify if you want to receive a confirmation message before the fulfillment action. Last, in the Fulfillment, you can specify what happens after all slots are set.
Now you have a complete definition of your intent that will be used whenever user asks for the weather in a specific European city.
Step 3: Add the engine: AWS Lambda
In Lex, Lambdas are essential for the communication between Lex and the backend. Lambdas can be used in two ways:
- first, to validate user input – this entry point can be used not only for the custom validation but also to steer the workflow by redirecting users to different intents.
- second, to fulfill user requests – used when all required slots are filled out and the request is ready to be processed.
Here you can associate the Lambdas with the fulfillment event, that is called when all of the slots are filled out.
Below is a code of a simple Lambda service that delivers temperature and the wind speed/direction for a given city.
import os

import weather


def get_weather(event, context):
    """Returns the weather for a given location."""
    slot_values = populate_slots(event)
    location = slot_values.get('location')
    if location:
        wl = weather.Weather().lookup_by_location(location)
        if not wl:
            return aws_lex_return_close('Location {} not found'.format(location), 'Failed')
        output = (
            '{city} {country} (last-update: {date}) '
            ' {text} Temp: {temp}{temp_units} Wind: {speed}{speed_units}'
        ).format(city=wl.location.city, country=wl.location.country,
                 date=wl.condition.date, text=wl.condition.text,
                 temp=wl.condition.temp, temp_units=wl.units.temperature,
                 speed=wl.wind.speed, speed_units=wl.units.speed)
        return aws_lex_return_close(output)
    return aws_lex_return_close('Location {location} not found'.format(location=location), 'Failed')


def populate_slots(event):
    slot_values = {}
    for slot_name, v in event['currentIntent']['slots'].items():
        slot_values[slot_name] = v
    # Populate resolved values
    for slot_name, v in event['currentIntent']['slotDetails'].items():
        if v is not None and len(v['resolutions']) > 0 and not slot_values.get(slot_name):
            slot_values[slot_name] = v['resolutions'][0]['value']
    return slot_values


def aws_lex_return_close(message_content, return_type=None, session=None):
    valid_return_types = ('Fulfilled', 'Failed')
    if return_type is None:
        return_type = 'Fulfilled'
    if return_type not in valid_return_types:
        raise ValueError('Wrong return_type, got {}, expected {}'.format(
            return_type, ''.join(valid_return_types)))
    out = {
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': return_type,
            'message': {
                'contentType': 'PlainText',
                'content': message_content
            }
        }
    }
    return out
The message format that the Lambda receives:
{'messageVersion': '1.0',
 'invocationSource': 'FulfillmentCodeHook',
 'userId': 'k2lr53f71vqynb9p4kerbh8phbffk06c',
 'sessionAttributes': {},
 'requestAttributes': None,
 'bot': {'name': 'TodaysWeather', 'alias': '$LATEST', 'version': '$LATEST'},
 'outputDialogMode': 'Text',
 'currentIntent': {
     'name': 'TodaysWeather',
     'slots': {'location': 'Berlin'},
     'slotDetails': {'location': {'resolutions': [], 'originalValue': 'Berlin'}},
     'confirmationStatus': 'None'
 },
 'inputTranscript': 'What is the weather in Berlin?'
}
The field invocationSource can be used to detect if a given message is at the last stage of processing (where the slot values are populated, like in the example above) or if it’s in the validation phase (DialogCodeHook).
In the fulfillment state, the current intent contains the populated slot values in the slots field.
In the example above, we expect to get only the location slot, of type AMAZON.EUROPE_CITY.
The raw input text is available in the field inputTranscript.
What you have to return from the function is the following response:
{
    'dialogAction': {
        'type': 'Close',
        'fulfillmentState': Fulfilled|Failed,
        'message': {
            'contentType': 'PlainText|SSML|CustomPayload',
            'content': "Some message"
        }
    }
}
In this particular case, you send a response to AWS Lex with information that you don’t expect any response from the user (type: Close). AWS Lex will only send a message that is declared in the message.content field. The message could be a PlainText, voice (in SSML format) or custom data (the last option could be useful to customize the messages sent to external services like Slack).
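Putting the request and response formats together, a Lex Lambda handler commonly branches on `invocationSource` to separate validation from fulfillment. A hedged sketch (the `Delegate` action tells Lex to continue prompting for slots on its own; the message content here is illustrative):

```python
def lambda_handler(event, context):
    """Route a Lex event to validation or fulfillment logic."""
    source = event["invocationSource"]
    slots = event["currentIntent"]["slots"]
    raw_text = event["inputTranscript"]

    if source == "DialogCodeHook":
        # Mid-conversation validation: delegate back so Lex keeps prompting.
        return {"dialogAction": {"type": "Delegate", "slots": slots}}

    if source == "FulfillmentCodeHook":
        # All required slots are filled: close the dialog with a message.
        return {
            "dialogAction": {
                "type": "Close",
                "fulfillmentState": "Fulfilled",
                "message": {
                    "contentType": "PlainText",
                    "content": f"Handled: {raw_text}",
                },
            }
        }

    raise ValueError(f"Unexpected invocationSource: {source}")
```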
Use the following settings file to deploy this part with Serverless framework:
service: WeatherBot

provider:
  name: aws
  runtime: python3.6
  region: eu-west-1
  environment:
    BOT_NAME: TodaysWeather
    BOT_ALIAS: alpha
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - lex:*
      Resource:
        - "arn:aws:lex:*:*:bot:TodaysWeather:alpha"
    - Effect: "Allow"
      Action:
        - lambda:InvokeFunction
        - lambda:InvokeAsync
        - lex:*
      Resource: "*"

functions:
  get_weather:
    handler: handler.get_weather
    events:
      - http: GET get_weather

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux

package:
  exclude:
    - node_modules/**
    - venv/**
Step 4: Test the bot
Fast prototyping with ngrok
Testing a bot can be quite a tedious task, as every change in the code would have to be deployed in order to check how the bot reacts. We can simplify testing by creating a general function that takes whatever AWS Lex is sending and proxies it to the local computer (using e.g. ngrok), where the business logic can be implemented.
Here’s an exemplary logic diagram:
The function below is responsible for proxying the data received in the event object to the external TEST_ENDPOINT.
import json
import logging
import os
import urllib.request

logger = logging.getLogger(__name__)


def general_proxy_handler(event, context):
    """This will proxy all requests to the TEST_ENDPOINT."""
    outside_http = os.environ.get('TEST_ENDPOINT')
    logger.info('general_setup event={} url={}'.format(event, outside_http))
    params = json.dumps(event).encode('utf8')
    logger.info('params={}'.format(params))
    req = urllib.request.Request(outside_http, data=params,
                                 headers={'content-type': 'application/json'})
    response = urllib.request.urlopen(req)
    response_json = json.loads(response.read())
    logger.info('general_setup response={}'.format(response_json))
    return response_json
You also have to add the Lambda function to the serverless configuration:
functions:
  (...)
  test_endpoint:
    handler: handler.general_proxy_handler
    events:
      - http: GET test_endpoint
Then, ngrok passes this data to the local computer. In order to handle the data, you have to set up a simple HTTP server that will listen on a specific port (which is called by ngrok) and pass the data to the appropriate handler.
import http.server
import json


class Handler(http.server.BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()

    def do_POST(self):
        content_length = int(self.headers['Content-Length'])  # Gets the size of data
        post_data = self.rfile.read(content_length)  # Gets the data itself
        input_json = json.loads(post_data.decode('utf-8'))
        output = handler(input_json, '')
        self._set_response()
        self.wfile.write(json.dumps(output).encode('utf-8'))


httpd = http.server.HTTPServer(('', port), Handler)
httpd.serve_forever()
This is nothing more than an HTTP server listening on a port. Whenever an HTTP request arrives, the payload is forwarded to the handler. The handler processes the request and sends the response back, which travels through ngrok and ends up in AWS Lex.
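To exercise this loop without Slack or a deployed bot, the handler on the receiving end can be any function that speaks the Lex request/response format. Below is a minimal sketch of such a handler; the event and response shapes follow the Lex V1 Lambda format, and the OrderFlowers intent name is just a placeholder:

```python
def handler(event, context):
    # Read the intent name from a Lex-style event (simplified V1 shape).
    intent = event.get('currentIntent', {}).get('name', 'unknown')
    # Reply with a "Close" dialog action, which ends the conversation
    # with a plain-text message shown to the user.
    return {
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': 'Fulfilled',
            'message': {
                'contentType': 'PlainText',
                'content': 'Handled intent: {}'.format(intent),
            },
        }
    }

sample_event = {'currentIntent': {'name': 'OrderFlowers', 'slots': {}}}
print(handler(sample_event, None)['dialogAction']['message']['content'])
# → Handled intent: OrderFlowers
```

Plugging a function like this into the HTTP server above lets you iterate on the business logic locally while Lex keeps talking to the proxy Lambda.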
You can find the full code on my GitHub page.
Step 5: Connect with Slack
AWS Lex can be used in two scenarios – as a standalone tool called via the AWS SDK, or as a tool integrated with channels like Slack.
In order to link our bot to Slack, first you have to create an application on the Slack side (go to Slack’s API):
Next, you have to create a bot user in the app. Set the Display name and the Default username:
Next, go to the Interactive components tab and enable it. Set the request URL (any valid URL will do for now; you can change it later). With that done, you can retrieve the important information from the Basic Information tab:
Use the above information in the AWS Lex channel tab:
You have to copy the data from Slack directly to this tab and then click the Activate button. The URLs that you have to use in your Slack bot configuration will be displayed below the Activate button:
Go back to the Slack setup. Enter the OAuth & Permissions tab and set the Redirect URLs section. Click Add a new Redirect URL and put the OAuth Url from AWS Lex into that field. Click Save URLs.
Next, go to the Scopes section and select the following permissions from Select Permission Scopes:
- chat:write:bot
- chat:write:user
- im:write
- team:read
After saving changes, you need to set up interactive components by setting the request URL to the Postback URL from AWS Lex.
You also have to set the event subscription. Go to the Event Subscriptions tab, enable it by choosing the On option, set the Request URL to the Postback URL, and subscribe to the message.im event.
Lastly, you have to add your app to a workspace. Go to Basic Information, click Install your app to your workspace. You will be redirected to the page below:
By clicking Authorize, you will install the app in your workspace.
Conclusions
As I showed you in this example, creating a bot is a rather straightforward task.
AWS Lex integrates very nicely with the Amazon ecosystem. This can be considered a drawback, as it ties our product to a single service provider. However, interoperability between different providers is a challenging task not only in the context of chatbots but in the world of cloud-dependent services in general.
If you work inside the Amazon ecosystem, AWS Lex is definitely worth trying out. It offers an easy-to-follow setup process and can be integrated with a variety of external services of your choosing.
remainder, remainderf, remainderl
Type-generic macro: if any argument has type long double, remainderl is called. Otherwise, if any argument has integer type or has type double, remainder is called. Otherwise, remainderf is called.
The IEEE floating-point remainder of the division operation x/y calculated by this function is exactly the value x - n*y, where n is the integral value nearest the exact value x/y. When |n - x/y| = ½, n is chosen to be even.
In contrast to fmod(), the returned value is not guaranteed to have the same sign as x. If the returned value is 0, it will have the same sign as x.
Parameters
x, y - floating-point values
Return value
If successful, returns the IEEE floating-point remainder of the division x/y as defined above.
If a domain error occurs, an implementation-defined value is returned (NaN where supported).
If a range error occurs due to underflow, the correct result is returned.
If y is zero, but the domain error does not occur, zero is returned.
Error handling
Errors are reported as specified in math_errhandling.
A domain error may occur if y is zero.
If the implementation supports IEEE floating-point arithmetic (IEC 60559):
- The current rounding mode has no effect.
- FE_INEXACT is never raised; the result is always exact.
- If x is ±∞ and y is not NaN, NaN is returned and FE_INVALID is raised.
- If y is ±0 and x is not NaN, NaN is returned and FE_INVALID is raised.
- If either argument is NaN, NaN is returned.
Notes
POSIX requires that a domain error occurs if x is infinite or y is zero.
fmod, but not remainder, is useful for doing silent wrapping of floating-point types to unsigned integer types: (0.0 <= (y = fmod(rint(x), 65536.0)) ? y : 65536.0 + y) is in the range [-0.0 .. 65535.0], which corresponds to unsigned short, but remainder(rint(x), 65536.0) is in the range [-32767.0, +32768.0], which is outside of the range of signed short.
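The contrast between the two functions, and the wrapping trick above, can also be sanity-checked in Python, whose math.fmod and math.remainder follow the same IEEE semantics (a quick cross-language check, not part of the C reference):

```python
import math

# fmod keeps the sign of x; remainder uses the nearest multiple of y,
# so its magnitude never exceeds |y| / 2.
print(math.fmod(5.1, 3.0))       # ≈ 2.1
print(math.remainder(5.1, 3.0))  # ≈ -0.9

# Wrapping with fmod stays in [0.0, 65536.0), matching unsigned short:
x = 123456.7
y = math.fmod(round(x), 65536.0)
wrapped = y if y >= 0.0 else 65536.0 + y
print(wrapped)                            # 57921.0

# remainder lands in [-32768.0, +32768.0] instead:
print(math.remainder(round(x), 65536.0))  # -7615.0
```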
Example
#include <stdio.h>
#include <math.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

int main(void)
{
    printf("remainder(+5.1, +3.0) = %.1f\n", remainder(5.1, 3));
    printf("remainder(-5.1, +3.0) = %.1f\n", remainder(-5.1, 3));
    printf("remainder(+5.1, -3.0) = %.1f\n", remainder(5.1, -3));
    printf("remainder(-5.1, -3.0) = %.1f\n", remainder(-5.1, -3));
    // special values
    printf("remainder(+0.0, 1.0) = %.1f\n", remainder(0.0, 1));
    printf("remainder(-0.0, 1.0) = %.1f\n", remainder(-0.0, 1));
    printf("remainder(+5.1, Inf) = %.1f\n", remainder(5.1, INFINITY));
    // error handling
    feclearexcept(FE_ALL_EXCEPT);
    printf("remainder(+5.1, 0) = %.1f\n", remainder(5.1, 0));
    if (fetestexcept(FE_INVALID))
        puts("    FE_INVALID raised");
}
Output:
remainder(+5.1, +3.0) = -0.9
remainder(-5.1, +3.0) = 0.9
remainder(+5.1, -3.0) = -0.9
remainder(-5.1, -3.0) = 0.9
remainder(+0.0, 1.0) = 0.0
remainder(-0.0, 1.0) = -0.0
remainder(+5.1, Inf) = 5.1
remainder(+5.1, 0) = -nan
    FE_INVALID raised
Why Aren't You Using An OODMS?
timothy posted more than 12 years ago | from the because-you-have-another-mantra? dept.
Dare Obasanjo contributed this piece about a subject that probably only a very few people have ever taken the time to consider, or had to. Below he asks the musical question "Why aren't you using an Object Oriented Database Management System?" Update: 05/04 02:11 PM by H: This is also running on K5 - yes, that's on purpose, and yes, Dare, myself and Rusty all know. *grin*
Why Aren't You Using An Object Oriented Database Management System?
In today's world, Client-Server applications that rely on a database on the server as a data store while servicing requests from multiple clients are quite commonplace. Most of these applications use a Relational Database Management System (RDBMS) as their data store while using an object oriented programming language for development. This causes a certain inefficiency, as objects must be mapped to tuples in the database and vice versa instead of the data being stored in a way that is consistent with the programming model. The "impedance mismatch" caused by having to map objects to tables and vice versa has long been accepted as a necessary performance penalty. This paper is aimed at seeking out an alternative that avoids this penalty.
What follows is a condensed version of the following paper; An Exploration of Object Oriented Database Management Systems, which I wrote as part of my independent study project under Dr. Sham Navathe.
Introduction
The purpose of this paper is to provide answers to the following questions
- What is an Object Oriented Database Management System (OODBMS)?
- Is an OODBMS a viable alternative to an RDBMS?
- What are the tradeoffs and benefits of using an OODBMS over an RDBMS?
- What does code that interacts with an OODBMS look like?
An OODBMS is the result of combining object oriented programming principles with database management principles. The Object Oriented Database Manifesto [Atk 89] specifically lists the following features as mandatory for a system to support before it can be called an OODBMS: complex objects, object identity, encapsulation, types and classes, class or type hierarchies, overriding, overloading and late binding, computational completeness, extensibility, persistence, secondary storage management, concurrency, recovery, and an ad hoc query facility.
From the aforementioned description, an OODBMS should be able to store objects that are nearly indistinguishable from the kind of objects supported by the target programming language, with as little limitation as possible. Persistent objects should belong to a class and can have one or more atomic types or other objects as attributes. The normal rules of inheritance should apply with all their benefits, including polymorphism, overriding inherited methods and dynamic binding. Each object has an object identifier (OID) which is used as a way of uniquely identifying a particular object. OIDs are permanent, system generated and not based on any of the member data within the object. OIDs make storing references to other objects in the database simpler but may cause referential integrity problems if an object is deleted while other objects still have references to its OID. An OODBMS is thus a full scale object oriented development environment as well as a database management system. Features that are common in the RDBMS world such as transactions, the ability to handle large amounts of data, indexes, deadlock detection, backup and restoration features and data recovery mechanisms also exist in the OODBMS world.
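As a toy illustration of what the OID property means, the sketch below uses a hypothetical counter-based allocator (real OODBMSs generate OIDs internally): two objects with identical member data still have distinct, permanent identities.

```python
import itertools

# Hypothetical OID allocator: permanent, system-generated,
# and independent of any member data.
_next_oid = itertools.count(1)

class PersistentObject:
    def __init__(self):
        self.oid = next(_next_oid)

class User(PersistentObject):
    def __init__(self, name):
        super().__init__()
        self.name = name

a = User("Carnage4Life")
b = User("Carnage4Life")   # same data, different identity
print(a.oid, b.oid)        # → 1 2
```

Deleting a while some other object still stores a.oid is exactly the referential integrity hazard described above.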
A primary feature of an OODBMS is that accessing objects in the database is done in a transparent manner, such that interaction with persistent objects is no different from interacting with in-memory objects. This is very different from using an RDBMS in that there is no need to interact via a query sub-language like SQL, nor is there a reason to use a Call Level Interface such as ODBC, ADO or JDBC. Database operations typically involve obtaining a database root from the OODBMS, which is usually a data structure like a graph, vector, hash table, or set, and traversing it to obtain objects to create, update or delete from the database. When a client requests an object from the database, the object is transferred from the database into the application's cache, where it can be used either as a transient value that is disconnected from its representation in the database (updates to the cached object do not affect the object in the database) or as a mirror of the version in the database, in that updates to the object are reflected in the database and changes to the object in the database require that the object be refetched from the OODBMS.

Comparisons of OODBMSs to RDBMSs
There are concepts in the relational database model that are similar to those in the object database model. A relation or table in a relational database can be considered to be analogous to a class in an object database. A tuple is similar to an instance of a class but is different in that it has attributes but no behaviors. A column in a tuple is similar to a class attribute except that a column can hold only primitive data types while a class attribute can hold data of any type. Finally classes have methods which are computationally complete (meaning that general purpose control and computational structures are provided [McF 99]) while relational databases typically do not have computationally complete programming capabilities although some stored procedure languages come close.
Below is a list of advantages and disadvantages of using an OODBMS over an RDBMS with an object oriented programming language.

Advantages
- Composite Objects and Relationships: Objects in an OODBMS can store an arbitrary number of atomic types as well as other objects. It is thus possible to have a large class which holds many medium sized classes which themselves hold many smaller classes, ad infinitum. In a relational database this has to be done either by having one huge table with lots of null fields or via a number of smaller, normalized tables which are linked via foreign keys. Having lots of smaller tables is still a problem since a join has to be performed every time one wants to query data based on the "has-a" relationship between the entities. Also, an object is a better model of the real world entity than relational tuples with regard to complex objects.
- Class Hierarchy: Data in the real world usually has hierarchical characteristics. The ever popular Employee example used in most RDBMS texts is easier to describe in an OODBMS than in an RDBMS. An Employee can be a Manager or not; this is usually done in an RDBMS by having a type identifier field or creating another table which uses foreign keys to indicate the relationship between Managers and Employees. In an OODBMS, the Employee class is simply a parent class of the Manager class.
- Circumventing the Need for a Query Language: A query language is not necessary for accessing data from an OODBMS unlike an RDBMS since interaction with the database is done by transparently accessing objects. It is still possible to use queries in an OODBMS however.
- No Impedance Mismatch: In a typical application that uses an object oriented programming language and an RDBMS, a significant amount of time is usually spent mapping tables to objects and back. There are also various problems that can occur when the atomic types in the database do not map cleanly to the atomic types in the programming language and vice versa. This "impedance mismatch" is completely avoided when using an OODB. Thus there is no limitation on the values that can be stored in an object.
- One Data Model: A data model typically should model entities and their relationships, constraints and operations that change the states of the data in the system. With an RDBMS it is not possible to model the dynamic operations or rules that change the state of the data in the system because this is beyond the scope of the database. Thus applications that use RDBMS systems usually have an Entity Relationship diagram to model the static parts of the system and a seperate model for the operations and behaviors of entities in the application. With an OODBMS there is no disconnect between the database model and the application model because the entities are just other objects in the system. An entire application can thus be comprehensively modelled in one UML diagram.
Disadvantages

- Schema Changes: In an RDBMS, modifying the database schema by creating, updating or deleting tables is typically independent of the actual application. In an OODBMS-based application, modifying the schema by creating, updating or modifying a persistent class typically means that changes have to be made to the other classes in the application that interact with instances of that class. This typically means that all schema changes in an OODBMS will involve a system-wide recompile. Also, updating all the instance objects within the database can take an extended period of time depending on the size of the database.
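Two of the advantages above (class hierarchies stored as-is, and no mapping layer between objects and storage) can be tasted with Python's standard-library shelve module. This is only a toy stand-in for an OODBMS, with none of the transactions, concurrency or queries, but it shows objects going in and out whole:

```python
import os
import shelve
import tempfile

class Employee:
    def __init__(self, name):
        self.name = name

class Manager(Employee):           # "is-a" modelled directly, no type-id column
    def __init__(self, name, reports):
        super().__init__(name)
        self.reports = reports

path = os.path.join(tempfile.mkdtemp(), 'company')

with shelve.open(path) as db:      # objects stored whole: no SQL, no mapping
    db['alice'] = Manager('Alice', reports=['bob'])
    db['bob'] = Employee('Bob')

with shelve.open(path) as db:      # and retrieved as real objects again
    alice = db['alice']
    print(isinstance(alice, Employee), alice.reports)  # → True ['bob']
```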
The following information was gleaned from the ODBMS Facts website.
- The Chicago Stock Exchange manages stock trades via a Versant ODBMS.
- Radio Computing Services is the world's largest radio software company. Its product, Selector, automates the needs of the entire radio station -- from the music library, to the newsroom, to the sales department. RCS uses the POET ODBMS because it enabled RCS to integrate and organize various elements, regardless of data types, in a single program environment.
- The Objectivity/DB ODBMS is used as a data repository for system component naming, satellite mission planning data, and orbital management data deployed by Motorola in The Iridium System.
- The ObjectStore ODBMS is used in SouthWest Airline's Home Gate to provide self-service to travelers through the Internet.
- Ajou University Medical Center in South Korea uses InterSystems' Cachè ODBMS to support all hospital functions including mission-critical departments such as pathology, laboratory, blood bank, pharmacy, and X-ray.
- The Large Hadron Collider at CERN in Switzerland uses an Objectivity DB. The database is currently being tested in the hundreds of terabytes at data rates up to 35 MB/second.
- As of November, 2000, the Stanford Linear Accelerator Center (SLAC) stored 169 terabytes of production data using Objectivity/DB. The production data is distributed across several hundred processing nodes and over 30 on-line servers.
Below are Java code samples for accessing a relational database and accessing an object database. Compare the size of the code in both examples. The examples are for an instant messaging application.
- Validating a user.
Java code accessing an ObjectStore(TM) database
import COM.odi.*;
import COM.odi.util.query.*;
import COM.odi.util.*;
import java.util.*;
try {
//start database session
Session session = Session.create(null, null);
session.join();
//open database and start transaction
Database db = Database.open("IMdatabase", ObjectStore.UPDATE);
Transaction tr = Transaction.begin(ObjectStore.READONLY);
//get hashtable of user objects from DB
OSHashMap users = (OSHashMap) db.getRoot("IMusers");
//get password and username from user
String username = getUserNameFromUser();
String passwd = getPasswordFromUser();
//get user object from database and see if it exists and whether password is correct
UserObject user = (UserObject) users.get(username);
if(user == null)
System.out.println("Non-existent user");
else
if(user.getPassword().equals(passwd))
System.out.println("Successful login");
else
System.out.println("Invalid Password");
//end transaction, close database and retain terminate session
tr.commit();
db.close();
session.terminate();
}
//exception handling would go here ...
Java JDBC code accessing an IBM's DB2 Database(TM)
import java.sql.*;
import sun.jdbc.odbc.JdbcOdbcDriver;
import java.util.*;
try {
//Launch instance of database driver.
Class.forName("COM.ibm.db2.jdbc.app.DB2Driver").newInstance();
//create database connection
Connection conn = DriverManager.getConnection("jdbc:db2:IMdatabase");
//get password and username from user
String username = getUserNameFromUser();
String passwd = getPasswordFromUser();
//perform SQL query
Statement sqlQry = conn.createStatement();
ResultSet rset = sqlQry.executeQuery("SELECT password from user_table WHERE username='" + username +"'");
if(rset.next()){
if(rset.getString(1).equals(passwd))
System.out.println("Successful login");
else
System.out.println("Invalid Password");
}else{
System.out.println("Non-existent user");
}
//close database connection
sqlQry.close();
conn.close();
}
//exception handling would go here ...
There isn't much difference in the above examples although it does seem a lot clearer to perform operations on a UserObject instead of a ResultSet when validating the user.
- Getting the user's contact list.
Java code accessing an ObjectStore(TM) database
import COM.odi.*;
import COM.odi.util.query.*;
import COM.odi.util.*;
import java.util.*;
try {
/* start session and open DB, same as in section 1a */
//get hashmap of users from the DB
OSHashMap users = (OSHashMap) db.getRoot("IMusers");
//get user object from database
UserObject c4l = (UserObject) users.get("Carnage4Life");
UserObject[] contactList = c4l.getContactList();
System.out.println("These are the people on Carnage4Life's contact list");
for(int i=0; i <contactList.length; i++)
System.out.println(contactList[i].toString()); //toString() prints fullname, username, online status and webpage URL
/* close session and close DB, same as in section 1a */
}//exception handling code
Java JDBC code accessing an IBM's DB2 Database(TM)
import java.sql.*;
import sun.jdbc.odbc.JdbcOdbcDriver;
import java.util.*;
try {
/* open DB connection, same as in section 1b */
//perform SQL query
Statement sqlQry = conn.createStatement();
ResultSet rset = sqlQry.executeQuery("SELECT fname, lname, user_name, online_status, webpage FROM contact_list, user_table " + "WHERE contact_list.owner_name='Carnage4Life' and contact_list.buddy_name=user_table.user_name");
System.out.println("These are the people on Carnage4Life's contact list");
while(rset.next())
System.out.println("Full Name:" + rset.getString(1) + " " + rset.getString(2) + " User Name:" + rset.getString(3) + " OnlineStatus:" + rset.getString(4) + " HomePage URL:" + rset.getString(5));
/* close DB connection, same as in section 1b*/
}//exception handling code
The benefits of using an OODBMS over an RDBMS in Java slowly becomes obvious. Consider also that if the data from the select needs to be returned to another method then all the data from the result set has to be mapped to another object (UserObject).
- Get all the users that are online.
Java code accessing an ObjectStore(TM) database
import COM.odi.*;
import COM.odi.util.query.*;
import COM.odi.util.*;
import java.util.*;
try{
/* same as above */
//use a OODBMS query to locate all the users whose status is 'online'
Query q = new Query(UserObject.class, "onlineStatus.equals(\"online\")");
Collection users = db.getRoot("IMusers");
Set onlineUsers = q.select(users);
Iterator iter = onlineUsers.iterator();
// iterate over the results
while ( iter.hasNext() )
{
UserObject user = (UserObject) iter.next();
// send each person some announcement
sendAnnouncement(user);
}
/* same as above */
}//exception handling goes here
Java JDBC code accessing an IBM's DB2 Database(TM)
import java.sql.*;
import sun.jdbc.odbc.JdbcOdbcDriver;
import java.util.*;
try{
/* same as above */
//perform SQL query
Statement sqlQry = conn.createStatement();
ResultSet rset = sqlQry.executeQuery("SELECT fname, lname, user_name, online_status, webpage FROM user_table WHERE online_status='online'");
while(rset.next()){
UserObject user = new UserObject(rset.getString(1), rset.getString(2), rset.getString(3), rset.getString(4), rset.getString(5));
sendAnnouncement(user);
}
/* same as above */
}//exception handling goes here
Proprietary
- Object Store
- O2
- Gemstone
- Versant
- Ontos
- DB/Explorer ODBMS
- Poet
- Objectivity/DB
- EyeDB
Open Source
- Ozone
- Zope
- FramerD
- XL2
The gains from using an OODBMS while developing an application using an OO programming language are many. The savings in development time from not having to worry about separate data models, as well as the fact that there is less code to write due to the lack of impedance mismatch, are very attractive. In my opinion, there is little reason to pick an RDBMS over an OODBMS system for new application development unless there are legacy issues that have to be dealt with.
OOP useful, but not a silver bullet (1)
Anonymous Coward | more than 12 years ago | (#245626)
My impression is that the relational model of databases is more natural to most DBAs than the object oriented model. Object oriented software tends to have a large nunber of derived types, and furthermore operator overloading (or function overloading) makes it impossible to read a snippet of code and really know for certain what it does. For certain applications (e.g. management of large data sets and low level systems programming) these features do not provide a sufficient "win" to offset the additional complexity and overhead. The competing paradigms in programming language design that I think are most compelling are:
Because I haven't found a compelling reason to (2)
Anonymous Coward | more than 12 years ago | (#245630)
One way to look at OODBMS is the second coming of IMS, the old IBM hierarchical DBMS.
I do Java programming (I hear boos and hisses from the peanut gallery, but I persist...) and could use a seamless way to store state of my object hierarchies, but OODBMS haven't been it. (The wag will say at this point that I should be using Smalltalk, which has this seamless storage, but I duck this brick and go on my way).
Why aren't you using Oracle? (4)
Anonymous Coward | more than 12 years ago | (#245632)
Oh my god... (3)
pb (1020) | more than 12 years ago | (#245634)
I didn't think I'd see the day when someone got actual content posted on Slashdot.
Or, for that matter, that you'd post a Java article that I thought was somewhat interesting and useful...
Anyhow, wouldn't it be easier to integrate all this with C? Especially considering the huge body of existing code, and the well-known primitives involved.
And are there any less proprietary OODBMSes out there that anyone would recommend?
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Blah blah blah. (2)
defile (1059) | more than 12 years ago | (#245635)

totally not amused by the Objectify-Everything mindset. This is a very difficult perspective to have, FWIW. All academics teach OOP as the "one, true, way." and in practice, people believe them. In practice, they also find that it's not the silver bullet that it's trumped up to be. Regardless, saying that you think it's bullshit leads most people to the conclusion that you're an unwashed uneducated fool.
Oh well. I never accused programmers of being open-minded.
Re:it is worth it! (2)
sql*kitten (1359) | more than 12 years ago | (#245640)
They don't use just Objectstore. Lots of them use Informix [informix.com] too - like 8/10 of the world's traffic if you believe the adverts.
It's more about Mainstream vs. Performance (2)
Juju (1688) | more than 12 years ago | (#245641)
Although I had worked before on Oracle systems.
People have more confidence that they can find developers for Oracle than ObjectStore. But the same is true about COBOL programmers vs Java or C++ ones.
But this is why we have got consultants to help start the projects and get the design right. Once started on the right track, development usually goes a lot faster than in a traditional RDBMS environment.
I think the main reason people stick with Oracle and co. is that they prefer a known and tested solution like an Oracle database to store their business data.
This is the old "no one got fired for choosing IBM" argument.
By the way, there is no dba with ObjectStore (at least not in the way people think of them).
The optimizations are done by the programmers because the db layout is really dependant on your object model.
But the speed factor is something real which is why ObjectStore is doing so well with telcos (where speed is of the essence) with C++ applications.
On the EJB side, we really blow RDBMS out of the water. Oracle is nowhere near Javlin (EJB containers for ObjectStore) in terms of performance...
it is worth it! (3)
Juju (1688) | more than 12 years ago | (#245642)
I agree about the complexity and skill availability arguments, it is still easier (and cheaper) to get several COBOL and VB programmers than Java or C++ ones.
But then you can always get a consultant to help with the design. And as a matter of fact, it will be faster to develop that way than having a bunch of COBOL developers put together some kind of server side app while some VB coders put the client interface together...
Having done both, I can tell you what kind of system scales and which one does not.
Have you done some EJB programming? You would be surprised how much faster and easier it is to go the OODB route.
My opinion on what the biggest problem really is, is mainstream recognition. OODB vendors are vulnerable to FUD from RDBMS vendors as much as Linux was suffering from Microsoft FUD two years ago. Note that for OODB systems (as for Linux 2 years ago) there are some good reasons to stick with the mainstream solution. Going the OODB route is far more risky (from a business decision making point of view).
Re:Ahhh, more FUD (2)
miniver (1839) | more than 12 years ago | (#245643)
Yeah, there's a standard
... but what good is the standard if none of the vendors do more than implement subsets of the standard, and none of the vendors implement the same subset?
Are you moderating this down because you disagree with it,
Oh, OODMS (1)
booch (4157) | more than 12 years ago | (#245646)
RDBMS w/ CORBA layer (3)
johnnyb (4816) | more than 12 years ago | (#245647)
I don't have a lot of experience with OODBMSs - I'd be curious exactly how they work. The closest I've worked with is PostgreSQL which is Object-relational. Are there any intro guides, especially to schema definition and stuff like that.
Is there a free software OODBMS?
Performance? (1)
jbert (5149) | more than 12 years ago | (#245648)
"No Primary Keys" is _not_ an advantage. (1)
armb (5151) | more than 12 years ago | (#245649)
But anyway, I just wanted to pick up on one point.
When you." that's just flat wrong (and dangerously so).
Object identity is important. If you rely on invisible Object IDs to wave a magic wand and handle it for you, you will almost certainly end up with a single real life object having multiple inconsistent representations in the database, so to avoid screwing it up you will need to do explicit defining of keys and checking for duplicates.
On the other hand, if you have a well defined keys in your Relational Database, there isn't a problem - the database won't _allow_ duplicates. In a worst case, you have to define artificial IDs in your RDBMS and you are back to the OID case except that you actually have control over them.
Of course lots of people do make a mess of choosing keys - but OIDs _don't_ solve the problem.
--
I'll tell you why (1)
ajm (9538) | more than 12 years ago | (#245652)
Schema Smema (1)
ArthurDent (11309) | more than 12 years ago | (#245655)
Ben
Re:You didn't read the article, did you? (2)
esper (11644) | more than 12 years ago | (#245656)
So I take it you've only worked on pristine new projects coded in a vacuum, then? While I've never known anyone (well, anyone who knew what they were doing) to start putting data into a new database before designing their app, I've encountered many cases of new apps being written to use existing databases, generally either because the new version needs to be backwards compatible with the old one or because the need has arisen to look at old data in a new way.
If you only found "one" disadvantage, you ... (1)
Augusto (12068) | more than 12 years ago | (#245657)
Object databases are cool, specially from a programing standpoint, but there are a lot many disadvantages than the single one you listed, including : cost, familiarity (most people are not), less vendors (and open source alternatives), legacy databases (migrate or build bridges ?), etc.
Whenever you compare technologies, if you only find 1 disadvantage, you have probably not looked hard enough.
the most pragmatic reason i've encountered... (3)
dohmp (13306) | more than 12 years ago | (#245658)
in all cases that we had gone through rigourous prototypes of products and used ODBMS', it always seemed to come down to the same few things:
1) critical mass (everyone already knew the relational databases very well)
2) tool robustness (there are a wide variety of good tools (most 3rd party supplied) to MANAGE relational instances. i'm referring to more subtle circumstances than managing users & schema here)
3) reporting and data-mining was ALWAYS more difficult (usually by an order of magnitude or more).
now, my last involvement in a prototype is YEARS ago, so i'm absolutely positive things have changed...
the reality remains that people haven't yet gotten by what they learned in their first few experiences and simply haven't re-examined the landscape, just like myself...
a weak excuse, but i'm certain this is a more common answer than we'd all like to admit.
just my 0.02.
Peter
Re:Hmm.. (1)
An Ominous Coward (13324) | more than 12 years ago | (#245659)
You should be, if you're a software engineer. If you're just a code monkey then it doesn't really matter as you don't create the design, just bang out code from a specification.
Re:Why not? I'll tell you why not . . . (2)
funkman (13736) | more than 12 years ago | (#245661)
Adding columns and design changes are a fact of life. Until it is as easy and safe as relational, OODBMS apps will definitely not be widespread.
When OODBMS store their data in an XML document style format is when OODBMS will take off, since the DTD can be written with versions in mind and older objects can still be understood with relative ease.
Re:Why not? I'll tell you why not . . . (2)
Waldmeister (14499) | more than 12 years ago | (#245662)
But I have some additional points:
6. The world is not object oriented. Even if OO is a useful tool, it is no silver bullet.
7. RDBMSs are proven technology and rather well standardised; OODBMSs aren't. Currently there is a proposal for a standard (Java Data Objects), but even that only addresses one platform.
Re:Why not? I'll tell you why not . . . (2)
Tim C (15259) | more than 12 years ago | (#245663)
Well, I don't know about anyone else around here, but I knew OO before I'd even heard of SQL.
I know that C, Perl, etc are still very popular languages, and deservedly so, but C++, Java, etc are also extremely popular. I think OO has been around long enough now for there to be little excuse for people not to know anything about it. They're even teaching it to the Physics students at my old university, fer chris' sake!
(Although not until after I'd been forced to learn Fortran, mind you...)
Cheers,
Tim
Why OODBMSs did not take over the world (5)
geophile (16995) | more than 12 years ago | (#245667)
As someone else has pointed out, OODBMSs require a very different skill set. The problem isn't that your typical SQL developer didn't have these skills. The problem is that the things were ever referred to as database systems.
If you walk into a potential customer selling a "database system", then the database guys come and hear what you have to say. They ask about SQL support and point-and-click development tools. They are going to be looking for very high levels of concurrency, at isolation levels below serializable.
Selling a "database system" meant that once we got past the early adopters, we were selling against Oracle and we hit a wall. What we should have done from day one was to sell persistence for C++. We did start out like this, e.g. trying to convince ECAD vendors to build their products on top of ObjectStore. That had some limited success because the customers knew that they needed persistence, but they were C/C++ hackers at heart, and an RDBMS was a poor compromise. A "database for C++ with no impedance mismatch" sounds great to someone writing a 3d modeler. We then went on to apply the same logic selling to satisfied RDBMS users without changing our strategy, and that's when things stalled.
That strategy was necessary in some ways, because we were venture-funded, and the VCs weren't going to be happy with a small niche. They wanted something that would get into every insurance company and bank. However, by aiming high and failing (by VC standards), we abandoned our natural market too soon and avoided becoming a small success in that market.
Re:why am I not using one? (2)
platypus (18156) | more than 12 years ago | (#245672)
Why use an OODBMS (1)
Phill Hugo (22705) | more than 12 years ago | (#245674)
It keeps things elegant and tidy, and dev time is slashed considerably (perhaps 40% of what similar things take in PHP/RDBMS, in my experience).
If you don't try them, you'll never know.
Re:Practicality vs Performance (3)
Brento (26177) | more than 12 years ago | (#245677)
And that's precisely the same reason it took Linux so long to catch on in the enterprise, and why it still hasn't invaded small to medium businesses with only 1-2 network-savvy people. I'd love to switch to Linux fileservers instead of upgrading our NT boxes to 2k, but since we can't find anybody with the appropriate experience to manage them when I'm not around, we stick with the point-and-click OS's. Don't flame me for the decision, I'm just stating why we don't always switch to things we all know are best. (Reminds me of OS/2 for some reason.)
How about a data storage example? (1)
badmonkey (29600) | more than 12 years ago | (#245678)
Main reason: (1)
Godeke (32895) | more than 12 years ago | (#245680)
GOTO Considered Harmful (4)
hey! (33014) | more than 12 years ago | (#245681)
Relational systems are useful for a wide variety of tasks specifically because they are limited in their expressive power. This limitation in their expressive power means that certain desirable properties are maintained.
The objects that are recognized in the relational programming model are scalars, tuples and tables. Most operations are closed on the set of all tables -- that is to say, they take tables and produce tables. This means that you can compose operations in various ways and still have more raw material for further operations.
To take a more modern view of this: relational databases are about the reuse of facts. The process of designing a database is one of analyzing factual relationships so that eventually each fact is stored in one and only one place. This, along with the closed nature of relational operations, facilitates recombining these facts in various novel ways. I believe this is the source of the relational model's sustained popularity.
The cost is that the resultant model is not ideal for any single application. I believe this is the nature of the "impedance mismatch" -- you are dealing with an off-the-rack, one-size-fits-most-applications representation of data. Naturally, for complex applications with severe performance constraints, a more tailored representation is required.
I've never had the cash to hack around with OO databases, so I'd like to learn more. Do they support the kind of composition of operations that you get with relational systems? Presumably objects can be re-used in different applications, but how well does this work in practice?
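The closure property described above can be seen in miniature with SQLite (the table and data are invented for illustration): a GROUP BY result is itself a table, so it feeds straight into a further join with no special machinery.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [("ann", "eng", 90), ("bob", "eng", 80), ("cyd", "ops", 70)])

# Each relational operation takes tables and produces a table, so the
# aggregate below composes with a join exactly like any base table.
dept_avg = "SELECT dept, AVG(salary) AS avg_sal FROM emp GROUP BY dept"
above_avg = con.execute(
    "SELECT e.name FROM emp e JOIN (" + dept_avg + ") d "
    "ON e.dept = d.dept WHERE e.salary > d.avg_sal").fetchall()
```

Here `above_avg` holds the employees paid above their department's average, built by composing two closed operations.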
Re:why am I not using one? (2)
costas (38724) | more than 12 years ago | (#245686)
ObjectStore PSE Pro (1)
Percible (39773) | more than 12 years ago | (#245687)
Re:You didn't read the article, did you? (5)
Mr. McGibby (41471) | more than 12 years ago | (#245688)
"Aren't you supposed to design an application before implementing it in any way, including putting data in a DB? I've worked at two companies and had a ton of projects in school, and none involved implementing the database before the application was designed."
I'd like to know what world you are living in. In the real world, most databases are legacy databases and FULL of data. I've had to design applications around databases for years now. In my field (programming for engineers) the data is king, and people need to access it in multiple ways. True, if you are designing a system from the ground up, then you will be able to design the DB and make it nice and pretty. This is seldom the case in anything but web development.
'Generic' may be the wrong word here. A better one would be 'simpler'. A lot of applications just don't need all the OO stuff. The reason that RDBMSs are so pervasive is that most data can be represented well, and in an easy-to-understand way, with just tables and keys.
Developers aren't the only ones who have to query the database. In my shop, we have 10-20 people querying the same database, many of whom have spent a lot of time learning SQL. Most of the people who need to look at the data are not able to pick up a new query language quickly enough. SQL is simple enough to learn, and RDBMSs are simple and easy to understand. With an OODBMS, these people would have to be trained on what the heck OO is. That is not an easy concept for a non-programmer. On the other hand, tell someone that the database is a collection of tables, and they can easily understand.
Sure. I'll believe it when I see it. This sounds like marketing hype to me. It sounds like someone who didn't know how to program for an RDBMS wrote some crappy code. Correctly written code for an RDBMS would not show these kinds of gains when converted to an OODBMS. The overhead for the conversion process could be this large, but only if the original code is crap.
Re:Oracle? (1)
Chandon Seldon (43083) | more than 12 years ago | (#245689)
Perhaps.... because Oracle isn't an OODBMS.
Re:Hmm.. (1)
paRcat (50146) | more than 12 years ago | (#245691)
oh well. I should never try to be funny again.
I wish there was one... for PHP (1)
Betcour (50623) | more than 12 years ago | (#245692)
O/R Mapping Layers? (2)
dubl-u (51156) | more than 12 years ago | (#245694)
A couple of interesting open-source ones are Castor [exolab.org] and Osage [sourceforge.net]. I haven't had the chance to use either one in a serious project yet, but as a NeXT refugee I'm looking forward to using a good O/R mapping layer again. Do people have any recommendations?
For those interested in the topic, there is useful information at Scott Ambler's site [ambysoft.com], including his white paper The Design of a Robust Persistence Layer for Relational Databases [ambysoft.com].
OODBMS issues (1)
theMAGE (51991) | more than 12 years ago | (#245695)
2. From the developer perspective: the overhead of the OO layer on top of the database is a known quantity. What if I get an OODBMS with superb C++/Smalltalk integration but poor Java support?
There are certain design/implementation patterns that deal with OO/RDBMS impedance mismatch and the burden of reimplementing them for every project is accepted as a fact of life...
Don't forget Cache... (2)
1010011010 (53039) | more than 12 years ago | (#245696)
- - - - -
Re:Why not? I'll tell you why not . . . (3)
1010011010 (53039) | more than 12 years ago | (#245697)
- - - - -
Re:Cuz it's just a BUZZWORD. (1)
Coy0t3 (62077) | more than 12 years ago | (#245701)
OODBMS have different goals (1)
const (62774) | more than 12 years ago | (#245702)
OODBMSs have a different goal (1)
const (62774) | more than 12 years ago | (#245703)
OODBMS systems usually state a different goal from RDBMSs. The OODBMS goal is persistence for objects. This is a cardinal break from the RDBMS perspective, whose goal is to provide an interface to structured persistent storage.
Let's consider the full typical stack of layers in an RDBMS.
All three layers provide different interfaces to the bits on the disk, but through different concepts. They update bits on the disk and start from bits on the disk.
OODBMSes do not start from bits on the disk; they start from objects and make them "persistent". That goal is unreachable. An object is a transient thing and cannot be made persistent. If you kill a program and start it again, you will have a different object. OODBMS systems usually invest too much effort in maintaining the Illusion of Persistence, but the Illusion is broken at every step. A schizophrenic doublethink is required to program on OODBMSes: an object is persistent and transient at the same time. It is especially hard to think this way in a clustered environment.
OODBMSes will start to conquer the world when they acknowledge the reality of transient objects and persistent bits on the disk. The task then becomes good OO interfaces to those bits.
The only OODBMS innovation worth the trees killed in their name is the OID (and therefore fast navigation), and that could easily be put into an RDBMS. I guess SQL3 (or SQL:1999) has already done it.
Inheritance is really syntactic sugar over linking.
Composition is also easy with OIDs and cascade on delete.
"No impedance mismatch" costs you flexibility in schema changes.
One data model is easily achieved by automatic generation of interfaces from the database; the database schema model just needs to be enriched with binding info. But the database should be the real source of the data model.
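The transient-object argument can be made concrete in a few lines of Python (the Account class is invented for illustration): "persisting" an object really persists bytes, and reloading those bytes yields a new object with equal state but a different identity.

```python
import pickle

class Account:
    def __init__(self, owner, balance):
        self.owner, self.balance = owner, balance

a = Account("ann", 100)

# "Persist" the object, then resurrect it from the stored bytes.
b = pickle.loads(pickle.dumps(a))

same_state = (b.owner, b.balance) == (a.owner, a.balance)  # state survives
same_object = b is a                                       # identity does not
```

Any OODBMS faces the same physics: what survives a restart is bits, from which a fresh object is rebuilt.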
Disadvantage: Lack of mathematical completeness. (1)
ryarger (69279) | more than 12 years ago | (#245705)
When I join two tables in an RDBMS, a precise mathematical formula can be followed to join the tables with great efficiency (especially with the existence of keys).
Let's take an extremely simple example: I have a Company object that contains an array of People objects. The People object contains, among its attributes, a Last Name field.
I want to find all the Companies that employ Andersons.
The join required in an RDBMS to do this is simple and efficient. How is this performed in an OODBMS? Is each Company instantiated and its People array walked, searching for 'Anderson'? Ick.
OODBMSs have their place, particularly in tree-based systems where ad hoc queries are rare (cf. Zope's content management system), but RDBMSs have a fundamental advantage when dealing with large data sets and/or joins.
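For contrast, here is the Company/People example as an indexed relational query (the schema is invented to match the description): the optimizer can go straight through the last-name index instead of instantiating every Company and walking its People array.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE person  (company_id INTEGER, last_name TEXT);
    CREATE INDEX person_last ON person(last_name);
    INSERT INTO company VALUES (1, 'Acme'), (2, 'Initech');
    INSERT INTO person  VALUES (1, 'Anderson'), (1, 'Baker'), (2, 'Chen');
""")

# One declarative join answers "which companies employ Andersons?";
# the index on last_name lets the engine skip the non-matching rows.
employers = con.execute("""
    SELECT DISTINCT c.name
    FROM company c JOIN person p ON p.company_id = c.id
    WHERE p.last_name = 'Anderson'
""").fetchall()
```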
More disadvanatges of OODBMS's (1)
gradbert (80505) | more than 12 years ago | (#245707)
Good, but somewhat misleading (2)
figlet (83424) | more than 12 years ago | (#245709)
Just check out what google(tm) gives you for the search terms: object relational mapping [google.com].
yabba
Vendor lock-in (1)
trcull (83999) | more than 12 years ago | (#245710)
1) If your database isn't performing up to snuff, you can't switch to another vendor without re-writing tons of code.
2) If you need to integrate with another piece of software (ie. your company just bought another company and wants to incorporate that company's product into its own) you're out of luck.
3) If you have a large volume of legacy data you need to roll over into your system, loading it in and querying it will be a painful exercise.
We have a home-grown OODBMS-ish system at my company that we have to build our systems on and consequently it is very painful to integrate with anything from the outside world.
this means nothing to me (1)
Dalroth (85450) | more than 12 years ago | (#245711)
Re:Blah blah blah. (1)
radish (98371) | more than 12 years ago | (#245713)
Not so sure what makes you feel so superior; doesn't everyone do this? You're not getting the difference here. In your model, you may be going against any old DB, but you're still accessing (and thinking of) your data in the same old tabular way. An OODBMS totally changes your way of thinking. If you try to use an OODBMS in exactly the same way as an RDBMS you will most likely hit problems, not least with performance. But if you adjust your thinking to using an OODBMS properly, you will see some amazing things start to happen.
I've never used a real OODBMS, but on a project I was recently on we built an object-relational mapping layer which attempted to do more than the usual. Rather than allowing us to build and populate individual objects from the DB one at a time, we could specify a query at run time. This would be run against the backend DB, and the mapping layer would look at the data and create all the objects it could from that data set, with allowances for mandatory/optional fields etc. So, for instance, you could do something like "select * from customers, orders where order.value>1000" and you would get back a set of order objects where the value was over 1000, plus the appropriate customer objects. All the inter-object references would already be populated, and if you asked for something which didn't exist (maybe order.getProduct()) it would go off and build it for you.
In short this approach (and I'm sure real OODBMS systems do this better) allows you to change how you think about the data, often with great results.
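A minimal sketch of the mapping layer described (the class names, tables, and the 1000 threshold are all invented to mirror the example): one joined query populates both Order and Customer objects, with the inter-object references wired up as rows arrive.

```python
import sqlite3

class Customer:
    def __init__(self, cid, name):
        self.id, self.name, self.orders = cid, name, []

class Order:
    def __init__(self, oid, value, customer):
        self.id, self.value, self.customer = oid, value, customer
        customer.orders.append(self)   # back-reference wired immediately

def orders_over(con, min_value):
    """Build Order objects (and their Customers) from one joined query."""
    customers, result = {}, []
    sql = """SELECT o.id, o.value, c.id, c.name
             FROM orders o JOIN customers c ON o.cust_id = c.id
             WHERE o.value > ?"""
    for oid, value, cid, cname in con.execute(sql, (min_value,)):
        if cid not in customers:
            customers[cid] = Customer(cid, cname)  # one object per key
        result.append(Order(oid, value, customers[cid]))
    return result

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, value INTEGER,
                         cust_id INTEGER);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Initech');
    INSERT INTO orders VALUES (10, 1500, 1), (11, 500, 1), (12, 2000, 2);
""")
big = orders_over(con, 1000)
```

Real OODBMSs and mature O/R mappers add lazy loading on top of this, so `order.getProduct()` can fetch on demand.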
Re:Too Much Integration (1)
Lacutis (100342) | more than 12 years ago | (#245714)
Once you have the hash table of objects, you should be able to print them out and then be able to work with them.
Ahhh, more FUD (3)
Carnage4Life (106069) | more than 12 years ago | (#245717)
Which makes more sense when writing an application in an object-oriented programming language: a database that is consistent with the programming paradigm and performs database operations transparently, or one that requires the developer to jump through additional hoops to get data, is generally slower, and involves writing more code?
"7. RDBMSs are proven technology and rather well standardised; OODBMSs aren't. Currently there is a proposal for a standard (Java Data Objects), but even that only addresses one platform."
Not only is there a standard but the ODMG standard is on version 3 [odmg.org], JDO is merely a Java standard. Please know the facts before flaming.
--
You didn't read the article, did you? (4)
Carnage4Life (106069) | more than 12 years ago | (#245718)
"Complexity. These systems are much more difficult to design than RDBMSs. The application must be designed first, then the data structures must accommodate that. This kind of design is very expensive."
Aren't you supposed to design an application before implementing it in any way, including putting data in a DB? I've worked at two companies and had a ton of projects in school, and none involved implementing the database before the application was designed.
"RDBMSs are generic. Since an OO system is designed for a specific application, it's difficult to use that system for anything else. A well-designed, properly normalized RDBMS can be used for many different applications. When a DB is going to fill many terabytes, you don't want to have multiple copies of it for each distinct reporting application."
"Schema changes. As mentioned in the article, schema changes are a nightmare with an OO system. In a relational system, some changes can be made with no impact on existing applications. Others are relatively uncomplicated compared to similar OO changes."
"Skills availability. Yes, the old management problem. Everyone knows SQL; nobody knows OO."
"It's just not worth it. Given the dramatically higher costs associated with designing and maintaining an OO system, most applications just don't need the incremental performance gains associated with it. Very specialized, very high-performance systems would benefit, but smaller or more general systems would not."
Finally, where the heck are you getting this BS that designing an application with a single data model (i.e. one set of UML diagrams) is more expensive than designing one with two data models (i.e. an ER model for the DB, plus UML for the application)?
--
Re:Try generating reports on an OO database! (1)
BinxBolling (121740) | more than 12 years ago | (#245720)
An RDBMS wouldn't have necessarily made reporting any easier. The sort of highly-normalized RDBMS schema that is appropriate for an OLTP-type application is quite hard to report against - data in such a schema isn't usually very "tabular", either.
Many shops that use an RDBMS for their OLTP systems end up building a separate data warehouse with a flatter, more tabular schema that can be more easily queried for report generation.
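In SQLite terms, that flatter reporting schema is just a derived table rebuilt from the normalized one (table names invented); reports then query the flat copy instead of re-joining the OLTP tables every time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Normalized OLTP side: each fact stored once, in narrow tables.
    CREATE TABLE cust (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE sale (cust_id INTEGER, amount INTEGER);
    INSERT INTO cust VALUES (1, 'east'), (2, 'west');
    INSERT INTO sale VALUES (1, 10), (1, 20), (2, 5);

    -- Flat "warehouse" table, rebuilt periodically for reporting.
    CREATE TABLE sales_by_region AS
        SELECT c.region, SUM(s.amount) AS total
        FROM sale s JOIN cust c ON s.cust_id = c.id
        GROUP BY c.region;
""")
report = con.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region").fetchall()
```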
Speed? Reliability? (1)
Jason Cwik (124849) | more than 12 years ago | (#245724)
On the topic of reliability, what about advanced DB features such as replication? Can I have ADSM log into the database and do a hot backup?
Of course then you have the age old issue of a company saying "We run DB2. Period." Then choosing an OO DB is a moot point...
Price? Tools? (1)
TheLink (130905) | more than 12 years ago | (#245729)
Then, from the "cheap OODBMS" subset, which OODBMSs have interfaces for Python/Ruby/other scripting languages?
If the result is zero, then maybe that's one answer to "Why aren't you using an OODBMS?".
Of course there are other things to "constrain" the query on like docs, examples, mailing lists.
Cheerio,
Link.
Re:More disadvantages of OODBMSs (1)
TheLink (130905) | more than 12 years ago | (#245730)
Just write a function/method so that, given an SQL query, you get back a list/array of stuff.
Of course that uses more RAM, since everything is loaded into the program's memory space. Oh well.
Cheerio,
Link.
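The helper the parent describes is only a few lines with Python's standard sqlite3 module (a sketch; the dict-per-row shape is one possible choice):

```python
import sqlite3

def query(con, sql, params=()):
    """Run an SQL string and return a list of dicts, one per row."""
    cur = con.execute(sql, params)
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

# Tiny demonstration database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])
rows = query(con, "SELECT * FROM t WHERE a > ?", (1,))
```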
No solid foundation (1)
kill -9 $$ (131324) | more than 12 years ago | (#245732)
IMO, extended-relational has the best chance at success, because at its core you have the solid relational mathematics developed by Codd so many years ago (a solid starting point). But has anybody used the OO features of Oracle 8i, for instance? I have, and I thought it sucked. It seemed all they did was put an OO interface on relational tables. You could still access OO data via standard SQL, so you couldn't guarantee encapsulation, etc., and all the other great promises of the OO paradigm.
I've also used O2. Very interesting approach, but hardly solid. (And, as I've mentioned, they've gone out of business.) Each vendor builds the qualities of an OO database they feel are necessary, and there is no consistency among vendors (as in the RDB world). Products are either significantly immature, lack serious qualities required for a production DBMS, or just plain don't work. (Note: I don't necessarily think this is a bad approach to the design of an OODB, in that I felt these types of products preserved a lot of what OO was trying to do. It's just that nobody has been able to define a concise, solid theory (like the relational model) upon which to build these products.)
So to answer the question: why don't I use an OODBMS? Simply because the technology isn't superior to my existing RDB yet. Only in certain niche solutions (engineering, BOMs, etc.) might I actually consider using such a product. I feel it's a straightforward example of the right tool for the right job.
Other OODBs (1)
BMazurek (137285) | more than 12 years ago | (#245735)
SAL [kachinatech.com] has listings of a bunch of OODBs.
Re:Why not? I'll tell you why not . . . (2)
BMazurek (137285) | more than 12 years ago | (#245736)
Don't system designs using UML or any modelling technique used today translate quite simply to an OODB, since they are OO to start with?
The application must be designed first...of course, on successful projects people don't immediately start coding without knowing what they're coding. I don't see how that differs depending on an RDB or OODB world...
Isn't that like saying you can't use the RDB you designed for project A on project B? It seems to me if you can move tables representing objects from project A to project B, you should be able to move the objects from the OODB used in project A to project B. The transportability of the objects OR tables depends on the relationship/similarity between the two projects.
I don't understand what you mean by multiple copies of a multi-terabyte OODB...
If changes can be made to the RDB tables without impacting the system, can't correspondingly similar changes be made to the OODB object models? If code has to change in one, it would seem to me that code would have to change in the other.
Can't argue with that. Reminds me of a quote from a book that went something like this: "Like it or not, SQL is intergalactic interspeak."
As I said, I'm not very familiar with either, and any clarification of your points would be appreciated.
Warn Modem Users (1)
ekrout (139379) | more than 12 years ago | (#245737)
Reasons for NOT going OO (1)
Cestus (141759) | more than 12 years ago | (#245740)
Why not OODBMS? Perhaps immaturity... (1)
jpietrzak (143114) | more than 12 years ago | (#245741)
I went on to work at a job involving a similar product, which stored data persistently in a Poet OODB. Although the basic functionality of Poet seemed to work correctly, my manager had chosen Poet in order to use a special feature that had been advertised, but (as it turns out) wasn't completed at the time the DB was released. Support e-mails turned into intercontinental discussions with the German developers of the product, weeks turned into months, and before long this project was a year behind schedule, due almost solely to a dependence on an OODB. Finally, Poet modified their licensing scheme (forcing potential users of our project to absorb exorbitant sub-licensing fees), and that broke the back of the whole effort.
I moved on to a new job at a large information services company. Here, they've been using Objectivity internally for some years, but we're now in the process of converting over to Oracle. Problems with scalability, bugs, support, and licensing have driven the company to look for alternatives.
In short, I have no arguments against the underlying technology of OODBMSs, but I have yet to find an OODB company that can execute well enough to make their product worthwhile.
--John
OODBMS Hiccoughs (1)
theProf (146375) | more than 12 years ago | (#245745)
Problems with OODBMSs:
1. OQL implementations
2. They don't share data so well, because most are file-based implementations, NOT servers
3. Schema
4. Administration
O2 worked, but the schema issue turned into a problem. Same sort of thing as versioning interfaces in CORBA: every time an object is changed, phoof.
OQL implementations are not particularly well done. Some systems have them, some don't, and the variation among them is wide.
Administrative tools for an application on the OODBMS we used had to be written as part of the project. Reporting out of an OODBMS data set, of course, requires you to write more code; you cannot just use a copy of Access or similar.
You do not have a tidy client-server relationship across the wire in most of the products I checked out. There was one, but it cost $50,000, so you have to host the OODBMS on the same server as the application, unless you care to get into CORBA or another third-party wire protocol.
Oh, and there are few 3rd party tools around for them.
OODBMS really must
My 1/2 penny worth.
Another disadvantage... (1)
jmichaelg (148257) | more than 12 years ago | (#245746)
Way back when, I worked on a new-fangled air-defense system that was going to be 100% high-level language - no assembly language for us, no-siree! The machine was state of the art - 16 MHz, with 24 terminals hung off it - power to burn! Fortunately, my boss had the sense to write a small snippet of code that pretended to be a radar feeding a control program - classic client-server code. No real radar processing was involved, and the control program didn't have to do anything with the fake return except count it. Our design spec called for handling 2000 returns per minute. The simple client-server code he wrote could only manage 500 returns/minute. Fortunately, the test happened early in the coding phase, so we were able to redesign portions of the system to adjust to the physical realities of the hardware.
Abstraction is nice but be sure it's not so abstract that the hardware can't handle it.
standards, compatibility, performance (4)
_|()|\| (159991) | more than 12 years ago | (#245747)
As patchy as the SQL, ODBC, and JDBC standards may be, they have commoditized the DBMS market. Until object databases can do the same (the ODMG standards [odmg.org] don't even come close), they lock you into a proprietary solution. Ultimately, if your database doesn't scale as well as you'd like, that will hurt performance.
Research and Problems (1)
addbo (165128) | more than 12 years ago | (#245748)
Re:Cuz it's just a BUZZWORD. (1)
shippo (166521) | more than 12 years ago | (#245749)
At work I use a system that sends financial transactions over an IP connection, writing them to an RDBMS. Depending upon the configuration, this consists of between 3 and 5 processes per environment. Each process consumes a minimum of 6 MB, but I've seen processes 400 MB in size, caused by handling a modest number of transactions. Everything is coded in C++.
10 years ago, the entire system would probably run as fast on a machine with a total of 6MB of RAM. Progress?
Sorry, but I've never been convinced by OOP. There is too much abstraction between the source code and the processor.
WebObjects and Oracle (1)
TeamSPAM (166583) | more than 12 years ago | (#245750)
At my job we use WebObjects [apple.com] to go the OO route. This is NeXT technology that Apple bought. One of WebObjects' features that is very nice is EnterpriseObjects. I can suck in an existing database schema or create my own in eomodel, and all my tables become objects that I can use. Granted, there is a layer between my code and the database, but I never put a line of SQL in my code. Usually all I do is make calls to accessor methods to get or modify the EnterpriseObject, which is the data from the db.
This is something our DBA thinks is a slightly bad thing. If I have data in 2 separate DBs, I can use eomodel to connect that data up and display it to the user. The really cool part is that they don't even have to be the same kind of db: you can mix data from Oracle with DB2 or SQL Server. With a database abstraction layer this powerful, why should I be using an OODBMS when I've yet to see them match the performance of a powerful RDBMS like Oracle?
Another reason (1)
decesare (167184) | more than 12 years ago | (#245751)
To abuse an old quote: "No one ever got fired for buying a database from Oracle."
I used to work for one of the OODBMS firms mentioned, and a lot of our customers at that time were universities and a few companies doing small pilot projects with the database. But if you're an IT manager doing a serious database project, which vendor are you going to choose: a multi-billion dollar company like Oracle, or a relatively tiny OODBMS vendor that might not be around in a few years?
Sounds more like an ad.. (1)
hask3ll (171878) | more than 12 years ago | (#245753)
This piece sounds more like an ad than a real research paper. There aren't any real numbers, just the assertion that an OODBMS can be faster on complex datatypes than an RDBMS. In what *specific* cases is an object-oriented database faster? Then there's the "satisfied customer" section: "All these people are using object-oriented databases..."
The code size comparison is pretty useless. The code I use with Apple's EOF is more compact than any example here, and it can accommodate any kind of DB. I'm sure that there are times when using an OODBMS is a good choice. This article does nothing to help me determine what those times are.
Re:I concur... (1)
dannywyatt (175432) | more than 12 years ago | (#245756)
While "object" math could get there, it hasn't yet. And, since we're all angelic coders who wouldn't think of deploying software that we hadn't proved computationally correct, how do we prove computational correctness of OO-data?
At least with relational-OO "mapping" we can prove that what we lose in the mapping is inconsequential to the application at hand.
Re:Too Much Integration (1)
Coz (178857) | more than 12 years ago | (#245758)
One of the issues I've argued with OODBMS vendors is how to do ad-hoc queries. Some of them provide SQL interfaces to let you access the OO database, meaning they're mirroring their OO structure with a relational structure - but the query responsiveness always bites. Just don't try to do complex joins - they ain't worth the effort (yet).
As for reusing the database - if you have access to the object model, you can reuse the database - and each vendor has their own ways to make different parts visible to different applications.
I concur... (2)
motek (179836) | more than 12 years ago | (#245760)
Theoretical rudiments for relational databases do exist and are well understood by some. I am not referring to SQL here, but rather to the math.
The term 'relation' is well defined. The math behind it is pretty and simple. Simple is good.
So, on one side we have a well-defined mathematical concept that can be worked on. On the other, the elusive 'art of objects'. Although I personally prefer the latter (it makes me feel good about myself), I always appreciate the ability to define software in more solid terms. A relational database engine (what it does, what it needs to do) can be defined in terms of a quite comprehensible algebra.
There are similar efforts for OODBMSs (evolving algebras, for instance), but these are relatively recent, and most people don't care about them.
Conclusion: relational database engines are simple in construction. OO databases are not. Simple is good. Complex is bad. Long live the RDBMS.
-m-
KISS Principle (2)
Alien54 (180860) | more than 12 years ago | (#245761)
[Insert snide comment here] Take a look at that pinnacle of Object Oriented Programming, Microsoft Office
That cheap shot aside, the ramp-up to a level of truly competent understanding is much longer than anticipated. The problem is that OOP can often give the appearance of competence to those not in the know, but you still have the same problems you had before, and they can be much more difficult to find if you are not an expert.
Re:I'll tell you why (1)
cnkeller (181482) | more than 12 years ago | (#245762)
Versant is fairly well used in the government sector (where, due to budgeting constraints, they are often forced to use other-than-industry-standard solutions). There have been numerous successful implementations of OODBMSs (Versant specifically); you just don't hear about them because they're not high-profile commercial applications. Chicken and the egg, right? Until a big name (Yahoo, IBM, MS) creates a highly publicized project incorporating an OODBMS, you're going to have to do your own searching to find the "proven" successful projects that have done this. The ones I'm most familiar with are all classified applications, so Joe Programmer isn't ever going to hear of those...
Re:Practicality vs Performance (1)
nooekanami (192720) | more than 12 years ago | (#245765)
Thanks... (2)
ChaoticCoyote (195677) | more than 12 years ago | (#245767)
--
Scott Robert Ladd
Master of Complexity
Destroyer of Order and Chaos
Re:I don't have enough Objects (2)
sv0f (197289) | more than 12 years ago | (#245768)
Eiffel? C#? How about the original -- Smalltalk.
(god bless Objective C and Scheme)
I don't know much about Objective C, but Scheme? In no sense are, for example, numbers treated like OOP objects in Scheme. I mean, you can't subclass them. They are objects in a non-OOP sense, I guess -- you can query their type and all. But this sense is irrelevant to the current thread.
Re:Ahhh, more FUD (2)
sv0f (197289) | more than 12 years ago | (#245769)
How about using the appropriate paradigm for the application at hand (which is not always OO), the right paradigm for the data in the database (which may be relational, OO, etc.), and establishing a sensible protocol between the two?
The point of your target article is well-taken, but don't get too religious about OO. Uniformity for uniformity's sake is seldom convincing.
Re:Warn Modem Users (1)
dxfreak (197816) | more than 12 years ago | (#245770)
Re:why am I not using one? (1)
mark_lybarger (199098) | more than 12 years ago | (#245771)
Why not? I'll tell you why not . . . (4)
micromoog (206608) | more than 12 years ago | (#245776)
Re:You didn't read the article, did you? (1)
Placido (209939) | more than 12 years ago | (#245778)
Aren't you supposed to design an application before implementing it in any way, including putting data in a DB?
Yes you are. The data for the application should be analysed and the database designed around the data. Unfortunately in the real world dumb ass managers and accountants are <%do while foresight < average%>very, very, very <loop> reluctant to spend a lot of money on the proper design and development of systems. Not only does this prevent the use of expensive-to-implement and expensive-to-support products but it also means that system designers do not have the time/budget to foresee all the changes that the data will go through. This means that almost every single database is not configured correctly and goes through many schema changes in its lifetime. As with everything in life it is a balance between financial limitations and desired result.
Talking about financial limitations and desired end result, I desired a porsche but financial limitations prevented this... all I got was this lousy t-shirt.
Pinky: "What are we going to do tomorrow night Brain?"
I don't use OODMS (3)
stille (213453) | more than 12 years ago | (#245781)
Re:Why not? I'll tell you why not . . . (2)
WinterSolstice (223271) | more than 12 years ago | (#245783)
Here's another example for your case against OO: We implemented Poet, (at no small cost, I might add) only to find that within 6 months of beginning to use Poet as our major DB, a major 'Organizational Re-Adjustment' and a 'Data Center Consolidation' project wiped the whole dang thing in favor of DB2/OS390.
Now, our DB2 performance is actually much better than Poet ever was. Perhaps this is due to having highly skilled DB2 DBAs, or something. All the same, it is not worth going to a very expensive, single application db. Always use something standard, flexible, and easy to find admins for. It's worth it in the long run.
-WS
Oracle? (2)
NineNine (235196) | more than 12 years ago | (#245788)
And the reason I'm not using it yet is because it simply hasn't been around long enough. Oracle's implementation, from what I understand, is still a bit buggy. The RDBMS version has been around for much, much longer, and when you're dealing with enterprise-class applications, you sure as hell don't want to use anything that's even close to bleeding-edge.
Because, where can we find the guys & the time (1)
ishrat (235467) | more than 12 years ago | (#245790)
This seems reason enough to me.
Relating The Object (2)
grovertime (237798) | more than 12 years ago | (#245791)
what the? [nowjones.com]
Re:RDBMS w/ CORBA layer (1)
yzxbmlf (238008) | more than 12 years ago | (#245792)
Where are all the DBAs here? (1)
CrazyLegs (257161) | more than 12 years ago | (#245797)
PERFORMANCE: They just don't perform for high-volume, on-line, real-time situations. Period.
TOO CLOSE TO THE CODE: The structure and nature of OODBMS stores forces too much thought from the Developer's perspective (consider the impact of object deep copies). We prefer to keep a cleaner line between database design and code access.
VISIBILITY OF DATA: A golden rule of data/function placement is to put the data in a place (logical and physical) where it has the most visibility to its users and potential users. Fact is, it's tough to put an OODBMS on a boring old S/390 box - and I haven't seen a Unix or NT cluster that can take its place (even if they can house an OODBMS).
FUTURE APP USES: If I implement an OO app that uses an OODBMS, I know it works today. But the next app that comes along and needs to access that data may not be OO and may not be able to access the data.
OBJECT FIDELITY: Dumb problem.... but if the objects I store atrophy (i.e. the classes from which they're derived change), then I have a versioning problem. Sure there are ways to mitigate this problem through sound design, careful mgmt, etc. - but it's more crap to worry about.
Bottom-line for this kind of corporate computing environment is that OODBMS is a problematic technology. It forces too much up-front code design and doesn't provide enough long-term flexibility.
Ok... now tell me where I'm wrong (and you're probably right).
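The "object fidelity" objection above is easy to reproduce with any store that persists objects by class shape: when the class evolves, previously stored instances atrophy. A small Python sketch using pickle as a stand-in for an object store (not any particular OODBMS product):

```python
import pickle

class Customer:  # version 1 of the class
    def __init__(self, name):
        self.name = name

blob = pickle.dumps(Customer("alice"))  # the "stored" object

# Later the class evolves: a new required attribute is added.
class Customer:  # version 2 shadows version 1
    def __init__(self, name, region):
        self.name = name
        self.region = region

# Deserializing restores only the v1 attributes, without running
# v2's constructor - the stored object has atrophied.
old = pickle.loads(blob)
print(old.name, hasattr(old, "region"))  # alice False
```

Mitigations (schema-versioning hooks, migration code) exist, but as the post says, they're more to worry about.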
All that code makes my head hurt (2)
typical geek (261980) | more than 12 years ago | (#245798)
Its all marketing hype (1)
boltar (263391) | more than 12 years ago | (#245799)
Re:If you only found "one" disadvantage, you ... (1)
EllisDees (268037) | more than 12 years ago | (#245804)
Re:RDBMS w/ CORBA layer (1)
vulg4r_m0nk (304652) | more than 12 years ago | (#245806)
Well, this is fine, but then you have to build your object-flattening code into your CORBA objects, which is one of the primary reasons to go with an OODBMS in the first place. With an OODBMS you can build CORBA-capable objects that get stored directly in the DB, as long as the CORBA OIDs match or map to the OODB OIDs, and you settle on a policy to correlate activated and deactivated CORBA objects with their representations in the DB.
Only one disadvantage listed ... (2)
vulg4r_m0nk (304652) | more than 12 years ago | (#245807)
But it's a big one. A company I worked for recently employed a RogueWave product to emulate an OODBMS on top of a relational DB, and the result was utter horror. Having to recompile everything as the result of a schema change is a major pain, especially if you have to deal with multiple versions of the codebase. Of course, the situation was worsened in this case because the RW software had to generate all of the mapping code.
I also find that building entity-relation models for relational DBs -- that is, thinking of objects in the form of tables, rows, and columns -- is a very clear way to figure out the problem domain and evaluate different solutions. A successful development process might well include a preliminary stage working with an RDBMS, even if only in working out the conceptual kinks, and then move on to an OODMS.
Finally, a crucial criterion I would employ in which system to go with is the complexity of the data to be stored. The application I worked with was a horrible candidate for an OODBMS because the information itself was simple: names, contact info, and the like, which fits the relational model quite naturally. On the other hand, I'm about to start on a project of my own utilizing highly complex objects, capable of much greater sophistication than in my other example. I will most likely use an OODBMS, for instance db40 [db40.com].
Too Much Integration (2)
Tricolor Paulista (323547) | more than 12 years ago | (#245810)
OODB vs. JDBC (2)
bokmann (323771) | more than 12 years ago | (#245813)
---
Premise:
When Java first started appearing in enterprise-wide systems, there were large existing systems containing the enterprise's data. Early on, for Java to have acceptance as a solution to problems in this domain, there had to be a way to access this data... data was not going to be 'recreated' for a new, unproven language.
By far, this data was stored in relational databases, like Oracle and Microsoft's Sql Server. The shortest path to accessing this data in a way that made sense to Java's cross-platform nature was to take an existing specification, ODBC, and create a java-specification based on it. Thus, JDBC was born.
In the years since then, JDBC has matured into a very usable API for accessing relational databases, and a lot of Java developers have had to learn how to use it. Many developers don't even realize it, but there is a mismatch between storing data as an object graph and storing it in a relational database. As a developer, you have to write a lot of code to map between the object world and the relational world. There is a better way.
JDBC is great for accessing existing relational data... It is and should be considered a bridge to legacy systems. If you are starting a new project using Java, there should be a better way. There should be a way to store your data without thinking about it. You should be able to hand your objects to a service that will store them. You should be able to maintain complex relationships between those objects, and you should be able to query that service for objects that match certain criteria. You should not have to muddy up your domain objects with code for storing themselves, and you shouldn't have to extend any objects that a framework provides. You shouldn't have to complicate your build process with 'enhancement' steps, and your objects should not be bytecode modified, so they are still usable in debugging environments.
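The mapping code described above looks much the same in any language: copy columns into object fields on load, copy fields back into statement parameters on save, and keep the two in sync by hand. A small sketch of that boilerplate using Python's sqlite3 module rather than JDBC (class and table names are illustrative):

```python
import sqlite3

class Person:
    """A plain domain object with no persistence knowledge."""
    def __init__(self, id, name, email):
        self.id, self.name, self.email = id, name, email

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def save(p):
    # Object -> row: every field mapped by hand.
    db.execute("INSERT OR REPLACE INTO person VALUES (?, ?, ?)",
               (p.id, p.name, p.email))

def load(pid):
    # Row -> object: and back again by hand.
    row = db.execute("SELECT id, name, email FROM person WHERE id = ?",
                     (pid,)).fetchone()
    return Person(*row) if row else None

save(Person(1, "Ada", "ada@example.org"))
print(load(1).name)  # Ada
```

Every new field means touching the class, the INSERT, and the SELECT; multiply that by every entity and every relationship and you have the mismatch the post describes.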
Does such an object-oriented database exist? In some respects, yes. There are object-oriented databases such as Object Store and Versant. They operate very closely to the ideals listed above. But for most developers, especially those that already know JDBC, their price tags are a large barrier to entry. Until you know the technology, you won't recommend it. You won't recommend the technology until you use it. You are not going to spend $10,000 of your own money to get to know and use a product like this... there are too many other things happening in the Java community anyway.
Conclusion:
These are the reasons object databases have not become popular: There is a high barrier to entry for their use. The databases themselves are expensive. And Java Developers doing database access know JDBC, which is a 'good enough' solution.
In order to overcome this and make object databases take their position in the java community as a preferred way to store data, we need a good, Free, Open Source implementation with all the benefits of transparency of use. Once the database is Free, it must be evangelized by developers. Other developers need to know of it and learn it.
Practicality vs Performance (3)
MatthewNYC (413607) | more than 12 years ago | (#245814)
I don't have enough Objects (2)
RalphTWaP (447267) | more than 12 years ago | (#245819)
But there's a good reason why it stops my *good* use of an OODB.
If you look well and hard at C++, it's mildly object oriented, you can at least *create* objects, right?
If you look well and hard at Java (god bless it), it's *mostly* object oriented, at least there's a *root* object in the hierarchy, right?... Hmm, what are all those basic components though, is that an integer over there? What kind of object is that again?
Now then, if you managed to find a language whose native types were *all* expressed as objects, where my data were all most naturally expressed as an object (god bless Objective C and Scheme)... well, I'd be much more likely to store my data objectively (come to think of it, I do hate most of my data more than a little).
At any rate, I think the answer is that Object technology is only just coming into its own, and the rigor required of us the programmers to *USE* object orientation to its fullest extent is something that we don't enjoy doing for something as crunchy as DBMS access. Of course, just trying to get the data-creating departments to specify your data in an object oriented fashion might very well bite arse also...
Nietzsche on Diku:
sn; at god ba g
:Backstab >KILLS< god.
Try generating reports on an OO database! (1)
ubeans (449308) | more than 12 years ago | (#245820)
[Data Points]
DDD-Friendlier EF Core 2.0, Part 2
By Julie Lerman | October 2017
In my September column (msdn.com/magazine/mt842503), I laid out the many Entity Framework Core (EF Core) 2.0 features that align nicely with Domain-Driven Design (DDD) principles. In addition to providing great guidance and patterns for software development, DDD principles are also critical if you’re designing microservices. In the examples throughout the article, I used simplistic patterns in order to focus on the particular EF Core feature. Doing this meant that the code didn’t represent well-designed DDD-guided classes, and I promised that in an upcoming column I’d evolve those classes to look more like what you might write for a real-world implementation using DDD. And that’s what I’m going to do in this article. I’ll walk you through these better-architected classes and show you how they continue to work well as I use EF Core 2.0 to map them to my database.
The Original Domain Model
I’ll begin with a quick refresher on my little domain model. Because it’s for a sample, the domain lacks the complex business problems that would generally drive you to lean on DDD, but even without those complicated problems, I can still apply the patterns so you can see them in action, and see how EF Core 2.0 responds to them.
The domain comprises the Samurai characters from the movie “Seven Samurai,” where I keep track of their first appearance in the movie and their secret identities.
In the original article, the Samurai was the root of the aggregate and I had constrained the model to ensure the Samurai was responsible for managing its entrances and its secret identity. I demonstrated some of those constraints as follows:
Samurai and Entrance have a one-to-one relationship. Samurai’s Entrance field is private. Entrance has a foreign key field, SamuraiId. Because Samurai.Entrance is private, I needed to add a Fluent API mapping in the DbContext class to be sure EF Core was able to comprehend the relationship for retrieving and persisting this data. I evolved the Entrance property to be tied to a backing field, and then modified the mappings to let EF Core know about this, as well.
PersonName_ValueObject (named so elaborately for your benefit) is a value object type without its own identity. It can be used as a property in other types. Samurai has a PersonName_ValueObject property called SecretIdentity. I used the new EF Core Owned Entity feature to make SamuraiContext know to treat the SecretIdentity the same as earlier versions of EF would handle a ComplexType, storing the properties of the value object in columns of the same table to which the Samurai type maps.
The Enhanced Domain Model
What follows are the more advanced classes in the aggregate, along with the EF Core 2.0 DbContext I’m using to map to the database, which in my case happens to be SQLite. The diagram in Figure 1 shows the aggregate with its class details. The code listings will start with the non-root entities and finish up with the root, Samurai, which controls the others. Note that I’ve removed namespace references, but you can see them in the download that accompanies this article.
Figure 1 Diagram of the Advanced Aggregate
Figure 2 shows the evolved Entrance class.
public class Entrance
{
    public Entrance(Guid samuraiGuidId, int movieMinute, string sceneName, string description)
    {
        MovieMinute = movieMinute;
        SceneName = sceneName;
        ActionDescription = description;
        SamuraiGuidId = samuraiGuidId;
    }
    private Entrance() { } // Needed by ORM
    public int Id { get; private set; }
    public int MovieMinute { get; private set; }
    public string SceneName { get; private set; }
    public string ActionDescription { get; private set; }
    private int SamuraiFk { get; set; }
    public Guid SamuraiGuidId { get; private set; }
}
So much of DDD code is about protecting your domain from being unintentionally misused or abused. You constrain access to the logic within the classes to ensure they can be used only in the way you intend. My intention for the Entrance class (Figure 2) is that it be immutable. You can define its property values using the overloaded constructor, passing in the values for all of its properties except for SamuraiFk. You’re allowed to read any of the properties—but notice they all have private setters. The constructor is the only way to affect those values. Therefore, if you need to modify it, you’ll need to replace it with a whole new Entrance instance. This class looks like a candidate for a value object, especially because it’s immutable, but I want to use it to demonstrate one-to-one behavior in EF Core.
With EF Core (and earlier iterations of EF), when you query for data, EF is able to materialize results even when properties have private setters because it uses reflection. So EF Core can work with all these properties of Entrance that have private setters.
There’s a public constructor with four parameters to populate the properties of Entrance. (In the previous sample, I used a factory method that added no value to this class, so I’ve removed it in this iteration.) In this domain, an Entrance with any of those properties missing makes no sense, so I’m constraining its design to avoid that. Following that constructor is a private parameterless constructor. Because EF Core and EF use reflection to materialize results, like other APIs that instantiate objects for you (such as JSON.NET), they require that a parameterless constructor be available. Defining the four-parameter constructor means the compiler no longer generates the default parameterless constructor, so you must explicitly add it back in. This is not new behavior in EF Core; it’s something you’ve had to do with EF for a long time. In the context of this article, however, it bears repeating. If you’re new to EF with this version, it’s also notable that when an Entrance is created as a result of a query, EF Core will only use that parameterless constructor to create the object. The public constructor is available for creating new Entrance objects.
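The reflection point generalizes beyond EF: a reflection-based ORM can allocate an object without running any public constructor and write its fields directly, which is why private setters are no obstacle. A sketch of that mechanism, expressed in Python rather than C# (purely illustrative; this is not EF Core's actual materializer):

```python
class Entrance:
    """Domain class whose public constructor the 'ORM' never calls."""
    def __init__(self, minute, scene):
        raise AssertionError("public ctor not used during materialization")

# Materialize from a query row without calling __init__, the way a
# reflection-based ORM populates objects that have private setters.
row = {"minute": 12, "scene": "the well"}
e = object.__new__(Entrance)        # allocate, skipping all constructors
for field, value in row.items():
    object.__setattr__(e, field, value)

print(e.minute, e.scene)  # 12 the well
```

The C# equivalent uses System.Reflection to invoke the private parameterless constructor and set members, but the shape of the trick is the same.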
What about that Guid and int pointing back to Samurai? The Guid is used by the domain to connect the samurai and entrance so that the domain logic has no reliance on the data store for its Ids. The SamuraiFk will only be used for persistence. SamuraiFk is private, but EF Core is able to infer a backing field for it. If it were named SamuraiId, EF Core would recognize it as the foreign key, but because it doesn’t follow convention, there’s a special mapping in the context to let EF Core know that it is, indeed, the foreign key. The reason it’s private is that it’s not relevant to the domain but needed for EF Core to comprehend the relationship in order to store and retrieve the data correctly. This is a concession to avoiding persistence logic in my domain class but, in my opinion, a minor one that doesn’t justify the extra effort of introducing and maintaining a completely separate data model.
There’s a new entity in my aggregate: Quote, shown in Figure 3. In the movie this sample domain honors, the various characters have some notable quotes that I want to keep track of in this domain. It also gives me a chance to demonstrate a one-to-many relationship.
public class Quote
{
    public Quote(Guid samuraiGuidId, string text)
    {
        Text = text;
        SamuraiGuidId = samuraiGuidId;
    }
    private Quote() { } // ORM requires parameterless ctor
    public int Id { get; private set; }
    public string Text { get; private set; }
    private int SamuraiId { get; set; }
    public Guid SamuraiGuidId { get; private set; }
}
Note that the patterns are the same as those I’ve explained for the Entrance entity: the overloaded public constructor and the private parameterless constructor, the private setters, the private foreign key property for persistence, and the Guid. The only difference is that the SamuraiId, used as the persistence FK, follows EF Core convention. When it’s time to look at the DbContext class, there won’t be a special mapping for this property. The reason I’ve named these two properties inconsistently is so you can see the difference in the mappings for the conventional vs. unconventional naming.
Next is the PersonFullName type (renamed from PersonName), shown in Figure 4, which is a value object. I explained in the previous article that EF Core 2.0 now allows you to persist a value object by mapping it as an Owned Entity of any entity that owns it, such as the Samurai class. As a value object, PersonFullName is used as a property in other types and entities. A value object has no identity of its own, is immutable and isn’t an entity. In addition to the previous article, I have also explained value objects in more depth in other articles, as well as in the Pluralsight course, Domain-Driven Design Fundamentals, which I created with Steve Smith (bit.ly/PS-DDD). There are other important facets to a value object and I use a ValueObject base class created by Jimmy Bogard (bit.ly/13SWd9h) to implement them.
public class PersonFullName : ValueObject<PersonFullName>
{
    public static PersonFullName Create(string first, string last)
    {
        return new PersonFullName(first, last);
    }
    public static PersonFullName Empty()
    {
        return new PersonFullName(null, null);
    }
    private PersonFullName() { }
    public bool IsEmpty()
    {
        if (string.IsNullOrEmpty(First) && string.IsNullOrEmpty(Last))
        {
            return true;
        }
        else
        {
            return false;
        }
    }
    private PersonFullName(string first, string last)
    {
        First = first;
        Last = last;
    }
    public string First { get; private set; }
    public string Last { get; private set; }
    public string FullName() => First + " " + Last;
}
PersonFullName is used to encapsulate common rules in my domain for using a person’s name in any other entity or type. There are a number of notable features of this class. Although it hasn’t changed from the earlier version, I didn’t provide the full listing in the previous article. Therefore, there are a few things to explain here, in particular the Empty factory method and the IsEmpty method. Because of the way Owned Entity is implemented in EF Core, it can’t be null in the owning class. In my domain, PersonFullName is used to store a samurai’s secret identity, but there’s no rule that it must be populated. This creates a conflict between my business rules and the EF Core rules. Again, I have a simple enough solution that I don’t feel the need to create and maintain a separate data model, and it doesn’t impact how Samurai is used. I don’t want anyone using my domain API to have to remember the EF Core rule, so I built two factory methods: You use Create if you have the values and Empty if you don’t. And the IsEmpty method can quickly determine the state of a PersonFullName. The entities that use PersonFullName as a property will need to leverage this logic and then anyone using those entities won’t have to know anything about the EF Core rule.
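The contract the ValueObject base class supplies (equality by member values, no identity, immutability) isn't shown in the listing. As a language-neutral sketch of that contract, here is a hypothetical Python analogue, including the Empty/IsEmpty pair; this is not Bogard's code and not part of the article's C# sample:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable; equality compares all fields, not identity
class PersonFullName:
    first: str = None
    last: str = None

    @classmethod
    def empty(cls):
        # Stand-in for the Empty() factory: a valid, non-null "no value" object.
        return cls(None, None)

    def is_empty(self):
        return not self.first and not self.last

a = PersonFullName("Kambei", "Shimada")
b = PersonFullName("Kambei", "Shimada")
print(a == b)                              # True: same values, distinct objects
print(PersonFullName.empty().is_empty())   # True
```

Two value objects with the same member values are interchangeable; that's the property that lets an owned type be stored as columns of its owner's table.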
Tying It All Together with the Aggregate Root
Finally, the Samurai class is listed in Figure 5. Samurai is the root of the aggregate. An aggregate root is a guardian for the entire aggregate, ensuring the validity of its internal objects and keeping them consistent. As the root of this aggregate, the Samurai type is responsible for how its Entrance, Quotes and SecretIdentity properties are created and managed.
public class Samurai
{
    public Samurai(string name) : this()
    {
        Name = name;
        GuidId = Guid.NewGuid();
        IsDirty = true;
    }
    private Samurai()
    {
        _quotes = new List<Quote>();
        SecretIdentity = PersonFullName.Empty();
    }
    public int Id { get; private set; }
    public Guid GuidId { get; private set; }
    public string Name { get; private set; }
    public bool IsDirty { get; private set; }

    private readonly List<Quote> _quotes = new List<Quote>();
    public IEnumerable<Quote> Quotes => _quotes.ToList();
    public void AddQuote(string quoteText)
    {
        // TODO: Ensure this isn't a duplicate of an item already in Quotes collection
        _quotes.Add(new Quote(GuidId, quoteText));
        IsDirty = true;
    }

    private Entrance _entrance;
    private Entrance Entrance
    {
        get { return _entrance; }
    }
    public void CreateEntrance(int minute, string sceneName, string description)
    {
        _entrance = new Entrance(GuidId, minute, sceneName, description);
        IsDirty = true;
    }
    public string EntranceScene => _entrance?.SceneName;

    private PersonFullName SecretIdentity { get; set; }
    public string RevealSecretIdentity()
    {
        if (SecretIdentity.IsEmpty())
        {
            return "It's a secret";
        }
        else
        {
            return SecretIdentity.FullName();
        }
    }
    public void Identify(string first, string last)
    {
        SecretIdentity = PersonFullName.Create(first, last);
        IsDirty = true;
    }
}
Like the other classes, Samurai has an overloaded constructor, which is the only way to instantiate a new Samurai. The only data expected when creating a new samurai is the samurai’s known name. The constructor sets the Name property and also generates a value for the GuidId property. The SamuraiId property will get populated by the database. The GuidId property ensures that my domain isn’t dependent on the data layer to have a unique identity and that’s what’s used to connect the non-root entities (Entrance and Quote) to the Samurai, even if the Samurai hasn’t yet been persisted and honored with a value in the SamuraiId field. The constructor appends “: this()” to call the parameterless constructor in the constructor chain. The parameterless constructor (reminder: it’s also used by EF Core when creating objects from query results) will ensure that the Quotes collection is instantiated and that SecretIdentity is created. This is where I use that Empty factory method. Even if someone writing code with the Samurai never provides values for the SecretIdentity property, EF Core will be satisfied because the property isn’t null.
The full encapsulation of Quotes in Samurai isn’t new. I’m taking advantage of the support for IEnumerable that I discussed in an earlier column on EF Core 1.1 (msdn.com/magazine/mt745093).
The fully encapsulated Entrance property has changed from the previous sample in only two minor ways. First, because I removed the factory method from Entrance, I’m now instantiating it directly. Second, the Entrance constructor now takes additional values so I’m passing those in even though at this time the Samurai class isn’t currently doing anything with these extra values.
There are some enhancements to the SecretIdentity property since the earlier sample. First, the property originally was public, with a public getter and a private setter. This allowed EF Core to persist it in the same way as in earlier versions of EF. Now, however, SecretIdentity is declared as a private property yet I’ve defined no backing field. When it comes time to persist, EF Core is able to infer a backing field so it can store and retrieve this data without any additional mapping on my part. The Identify method, where you can specify a first and last name for the secret identity, was in the earlier sample. But in that case, if you wanted to read that value, you could access it through the public property. Now that it’s hidden, I’ve added a new method, RevealSecretIdentity, which uses the PersonFullName.IsEmpty method to determine whether the property is populated. If it is, the method returns the FullName of the SecretIdentity. But if the person’s true identity wasn’t identified, the method returns the string: “It’s a secret.”
There’s a new property in Samurai, a bool called IsDirty. Any time I modify the Samurai properties, I set IsDirty to true. I’ll use that value elsewhere to determine if I need to call SaveChanges on the Samurai.
So throughout this aggregate, there’s no way to get around the rules I built into the entities and the root, Samurai. The only way to create, modify or read Entrance, Quotes and SecretIdentity is through the constrained logic built into Samurai, which, as the aggregate root, is guarding the entire aggregate.
Mapping to the Data Store with EF Core 2.0
The focus of the previous article was on how EF Core 2.0 is able to persist and retrieve data mapped to these constrained classes. With this enhanced domain model, EF Core is still able to work out most of the mappings even with things so tightly encapsulated in the Samurai class. In a few cases I do have to provide a little help to the DbContext to make sure it comprehends how these classes map to the database, as shown in Figure 6.
public class SamuraiContext : DbContext
{
    public DbSet<Samurai> Samurais { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlite("Filename=DP0917Samurai.db");
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Samurai>()
            .HasOne(typeof(Entrance), "Entrance")
            .WithOne().HasForeignKey(typeof(Entrance), "SamuraiFk");
        foreach (var entityType in modelBuilder.Model.GetEntityTypes())
        {
            modelBuilder.Entity(entityType.Name).Property<DateTime>("LastModified");
            modelBuilder.Entity(entityType.Name).Ignore("IsDirty");
        }
        modelBuilder.Entity<Samurai>().OwnsOne(typeof(PersonFullName), "SecretIdentity");
    }

    public override int SaveChanges()
    {
        foreach (var entry in ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified))
        {
            if (!(entry.Entity is PersonFullName))
                entry.Property("LastModified").CurrentValue = DateTime.Now;
        }
        return base.SaveChanges();
    }
}
Not a lot has changed in the SamuraiContext since the first sample from my first article, but there are a few things to point out as reminders. For example, the OwnsOne mapping lets EF Core know that SecretIdentity is an Owned Entity and that its properties should be persisted as though they were individual properties of Samurai. For the sake of this sample, I’m hardcoding the provider in the OnConfiguring method as opposed to leveraging dependency injection and inversion of control (IoC) services. As mentioned in the first article, EF Core can figure out the one-to-one relationship between Samurai and Entrance, but I have to express the relationship in order to access the HasForeignKey method to inform the context about the non-conventional foreign key property, SamuraiFk. In doing so, because Entrance is private in Samurai, I can’t use a lambda expression and am using an alternate syntax for the HasForeignKey parameters.
LastModified is a shadow property—new to EF Core—and will get persisted into the database even though it’s not a property in the entities. The Ignore mapping is to ensure that the IsDirty property in Samurai isn’t persisted as it’s only for domain-relevant logic.
And that’s it. Given how much of the DDD patterns I’ve applied in my domain classes, there’s very little in the way of special mappings that I have to add to the SamuraiContext class to inform EF Core 2.0 what the database looks like or how to store and retrieve data from that database. And I’m pretty impressed by that.
There’s No Such Thing as a Perfect DDD Sample
This is still a simple example because other than outputting “It’s a secret” when a SecretIdentity hasn’t been given a value, I’m not solving any complex problems in the logic. The subtitle of Eric Evans’ DDD book is “Tackling Complexity in the Heart of Software.” So much of the guidance regarding DDD is about breaking down overwhelmingly complex problems into smaller, solvable problems. The code design patterns are only a piece of that. Everyone has different problems to solve in their domains and, often, readers ask for a sample that can be used as a template for their own software. But all that those of us who share our code and ideas can do is provide examples as learning tools. You can then extrapolate those lessons and apply some of the thinking and decision making to your own problems. I could spend even more time on this tiny bit of code and apply additional logic and patterns from the DDD arsenal, but this sample does go pretty far in leveraging DDD ideas to create a deeper focus on behavior rather than on properties, and further encapsulate and protect the aggregate.
My goal in these two columns was to show how EF Core 2.0 is so much friendlier for mapping your DDD-focused domain model to your database. While I demonstrated that, I hope you were also inspired by the DDD patterns I've included in these classes.
Thanks to the following technical expert for reviewing this article: Cesar de la Torre
Data Points - DDD-Friendlier EF Core 2.0, Part 2
I don't really understand the link in the Entrance and Quote classes between SamuraiGuidId and the private FK to the Samurai table. How is EF able to apply the correct foreign key from Quote to Samurai? By the fact that is is located in the collecti...
Nov 23, 2017
The magazine links seem to be there now (top of the page). Also my GitHub repo is at. The master branch is empty. You'll find the code for the September article in the SimplerPatterns branch and from this ar...
Oct 23, 2017
I am trying to download the code samples for the article, but they cannot be found. Can you check the link?
Oct 9, 2017
Read this a...
Oct 2, 2017
https://msdn.microsoft.com/en-us/magazine/mt826347.aspx
> ThreadLibrary.zip > ThreadPool.h
/**
 * @file
 */
#pragma once

#include "ThreadRequest.h"
#include "ManualEvent.h"
#include "CriticalSection.h"
#include "Semaphore.h"
#include <windows.h>
#include <vector>
#include <queue>
#include <exception>

namespace
{
    const int THREAD_COUNT = 20;    ///< Number of threads to initialize to start the requests
    const int MAX_QUEUE_SIZE = 100; ///< Maximum size of queue before submitJob blocks
};

/**
 * Creates a pool of threads, which will service a queue of functors.
 *
 * The pool of threads is determined by the MAX_THREADS define, or passed in through the constructor.
 * To have a thread execute a request, the user submits a ThreadRequest-derived command
 * object (functor) to the thread queue. (I was going to call it threadJob but that's rude.)
 * The request handler will pull this out of the queue, and run the functor's operator() method.
 * If the queue is full, determined by the MAX_QUEUE_SIZE define, or the constructor parameter,
 * then the submitRequest method will block, preventing new items being added to the queue.
 * These are implemented via a ManualEvent object.
 *
 * @warning Exceptions WILL NOT propagate out of the accept handler. You MUST catch
 * your exceptions in the ThreadRequest-derived functor.
 *
 * @author Peter Hancock
 */
class ThreadPool
{
public:
    ThreadPool(int threadSize = THREAD_COUNT, int queueSize = MAX_QUEUE_SIZE);
    virtual ~ThreadPool();

    int accept();                               ///< Start pool for acceptance
    void submitRequest(ThreadRequest* request); ///< Submits a job to the pool for later attachment
    void shutdown();                            ///< Shuts down the thread pool

protected:
    virtual void onThreadStart(int threadId) throw();  ///< Runs on EACH thread initialization.
    virtual void onThreadFinish(int threadId) throw(); ///< Runs on EACH thread termination.

private:
    bool alive;                          ///< Contains the suicide pill to close down threads
    std::vector<HANDLE> pool;            ///< thread handle pool
    int threads;                         ///< Number of threads in the pool

    static unsigned int __stdcall internalThreadProc(void* lpParam); ///< internal thread proc to run acceptance handlers
    void acceptHandler(unsigned int threadId) throw();               ///< Internal acceptance handler run by the internalThreadProc

    void* threadData;                    ///< Pointer to thread data
    std::queue<ThreadRequest*> jobQueue; ///< Queue of request objects
    Semaphore queueAccess;               ///< Ensure that only queue number or max threads will access queue at once
    CriticalSection queueGuard;          ///< Ensure that the job queue is threadsafe
    ManualEvent notFull;                 ///< Event signalled when queue is not full

    ThreadPool(const ThreadPool&);            // Disable copy and assignment
    ThreadPool& operator=(const ThreadPool&);
};

/**
 * Thrown when the thread pool is shutting down.
 * The submitRequest throws this once the ThreadPool has been requested to terminate. Prevents clients
 * from adding requests to the thread pool queue that won't be handled.
 *
 * @author Peter Hancock
 */
class ThreadPoolShutdownException : public std::exception
{
public:
    ThreadPoolShutdownException() {}
    ThreadPoolShutdownException(const char* mesg) : std::exception(mesg) {}
    ~ThreadPoolShutdownException() {}
};
http://read.pudn.com/downloads46/sourcecode/windows/network/155206/ThreadLibrary/ThreadPool.h__.htm
I'm using RStudio for data mining. I rebuilt the R packages in my machine to use RStudio and installed the package FactoMineR to perform PCA. When I run library('FactoMineR') or library(FactoMineR), I get this error:
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/home/ci/R/x86_64-pc-linux-gnu-library/3.2/quantreg/libs/quantreg.so':
libRlapack.so: cannot open shared object file: No such file or directory
Error: package or namespace load failed for ‘FactoMineR’
Try reinstalling the package quantreg. Hope this works. It's one of the very common errors.
https://www.edureka.co/community/30645/saying-package-namespace-failed-factominer-mining-rstudio
Uncaught TypeError: Cannot call method 'on' of undefined
Hello,
I'm using Ext Designer 1.2.2 with Ext JS 3. I successfully export the project, but when I try to run it I receive:
Code:
Uncaught TypeError: Cannot call method 'on' of undefined
The problem is in the following code:
Code:
store = Ext.StoreMgr.lookup(store);
store.on({
    scope: this,
    beforeload: this.beforeLoad,
    load: this.onLoad,
    exception: this.onLoadError
});
Code:
Ext.StoreMgr.register(this);
In my case I have a PagingToolbar with a store set, and when that component needs the store it is still not created.
So, my question is: can you give me any advice on how to fix the problem, or just any direction on how stores work, i.e. how they are created, initialized, etc.?
However I changed this line of code:
Code:
return Ext.isObject(id) ? (id.events ? id : Ext.create(id, 'store')) : this.get(id);
Code:
if (Ext.isObject(id)) {
    return (id.events ? id : Ext.create(id, 'store'));
} else {
    var store = null;
    if (store = this.get(id)) {
        return store;
    } else {
        return Ext.create(id, 'store');
    }
}
Hope that this helps to someone
https://www.sencha.com/forum/showthread.php?229954-Uncaught-TypeError-Cannot-call-method-on-of-undefined&p=853243&mode=linear
Posted 11 Jan 2016
Hi - running Fiddler Web Debugger (v4.6.2.0) on W7 64
1) Running a java app (q.jar, attached)
2) I set proxy in Java control panel:
Control Panel>Java>General>Network Settings:
Use Proxy Server:
Address: 127.0.0.1 (or localhost)
Port: 8888
3) I launch q.jar (double-click in Windows Explorer) which gives me a java warning message that the app cannot connect to the web server. This is normal (the web server was taken down a few years ago), so I am trying to use fiddler to redirect to my local server.
But I cannot get fiddler to capture the q.app traffic to get the needed config info for redirection.
Could anyone please help me?
Thanks -
Sam
Posted 14 Jan 2016
Posted 14 Jan 2016
in reply to
Tsviatko Yovtchev
Thanks very much for you reply.
>> make sure that Fiddler is running on port 8888
I assume that means Tools>Fiddler Options>Connections>Fiddler listens on port: 8888
>> check out what library is q.jar using to handle http traffic
I wish I knew how to do this... can anyone point me in the right direction?
I extracted the q.jar file and did a search for "http" and these lines seemed relevant, but I'm ignorant about java and about "http libraries":
QoraanServlets.java:
import javax.servlet.http.*;
...
public class QoraanServlets extends HttpServlet {
...
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException{
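One more thing worth trying from the Java side: if the app's HTTP traffic goes through java.net.HttpURLConnection, you can force it through a proxy with the standard JVM proxy system properties (normally passed as -D flags, or set before the first connection is made). A hedged sketch, assuming Fiddler's default 127.0.0.1:8888 endpoint — if the app uses a library that ignores these properties, its traffic still won't appear in Fiddler:

```java
// Force JVM HTTP(S) traffic through a local proxy such as Fiddler.
// http.proxyHost/http.proxyPort are standard java.net networking
// properties; 127.0.0.1:8888 assumes Fiddler's default listening port.
public class ProxyConfig {
    public static void main(String[] args) {
        System.setProperty("http.proxyHost", "127.0.0.1");
        System.setProperty("http.proxyPort", "8888");
        System.setProperty("https.proxyHost", "127.0.0.1");
        System.setProperty("https.proxyPort", "8888");

        // Print what the JVM will now use, to confirm the settings took effect.
        System.out.println("http proxy = "
            + System.getProperty("http.proxyHost") + ":"
            + System.getProperty("http.proxyPort"));
    }
}
```

The equivalent without code changes is launching the app with `java -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888 -jar q.jar`, which is often easier than relying on the Java control panel settings.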
http://www.telerik.com/forums/tried-setting-java-proxy-fiddler-still-not-capturing-traffic
Hello everyone. Soon I will begin creating new web parts in Visual Studio and wanted to get started on the right foot. I was investigating the datalist web part code at C:\inetpub\wwwroot\Kentico82\CMS\CMSWebParts\Viewers\Documents\cmsdatalist.ascx. This web part creates an HTML table structure with a certain number of columns depending on what is selected for that particular field. I cannot find the code responsible for creating these tables within cmsdatalist.ascx. Is it found in one of the "using" namespaces? I'm a novice with C# and .NET development. I would really appreciate it if somebody could point me to the location of the code responsible for generating this HTML table. I feel it's important to understand this before I begin creating web parts or editing existing ones. I have worked through the "hello world" web part tutorial once before, but even that wasn't crystal clear.
C:\inetpub\wwwroot\Kentico82\CMS\CMSWebParts\Viewers\Documents\cmsdatalist.ascx
cmsdatalist.ascx
The majority of the controls referenced in the web parts are compiled Kentico controls. If you need or want to modify the output of a control (repeater, datalist, etc.), this can typically be done with a transformation. Many times you don't need to modify any of the code for the web parts; the majority of your work can be done within the Kentico UI.
It sounds like you want to create some custom web parts. I would recommend going through some tutorials on C# first and then the .NET stack, etc. As Brenden indicated, the majority of existing web controls are compiled and you cannot edit their code unless you have the source code, which is included with one of the license tiers. You can, however, use the existing controls in new .ascx controls (web parts) that you develop. This deals with inheritance, etc. Of course, you can create your own custom controls from scratch. Just pay attention to the inheritance and the different events that each web control needs to subscribe to.
If you want some frank advice, I would suggest that you open up Visual Studio, find some tutorials on creating .ascx controls, and go through them. Then drop them on your own .aspx pages. Once you've got that down, there is not much difference in using them in Kentico.
I thought that particular code might have been something precompiled. Thank you both very much. I truly appreciate the advice and information.
Please, sign in to be able to submit a new answer.
https://devnet.kentico.com/questions/code-locations-general-question
Hi,
I'm having trouble understanding a certain aspect of webware/python.
Here is what I'm trying to do.
Three files are involved: SitePage.py, View.py, View.psp (These aren't
the real names.)
View.py *******************************************
from lib.SitePage import SitePage

class View(SitePage):
    def __init__(self):
        SitePage.__init__(self)
        self.__stuff = self.request().field('sutff')

    def stuff(self):
        return self.__stuff
View.psp ******************************************
<%@page extents="testlet"%>
<%@page method="writeContent"%>
<%=self.test()%>
<div> lots of good html </div>
The error I get looks like this:
AttributeError: _home_randall_pyrocks_MyContext_real_estate_psp_ba instance has no attribute '_request'
Why can't I access the request object this way? Is (def writeContent: )
the only way to access request, etc. ?
I'm still trying to understand how webware works. Answers or references
to docs are appreciated.
Thanks.
Randall
On Sat, 2003-07-19 at 23:00, Randall Smith wrote:
>
What does SitePage inherit from?
Ian
On Sun, 2003-07-20 at 01:11, Randall Smith wrote:
> View.py
> *****************************************
> from WebKit.Page import Page
>
> class View(Page):
>     def awake(self, transaction):
>         Page.awake(self, transaction)
>         id = self.request().field('id')  # actually works now
>         # any initialization desired
> *****************************************
>
> My psp (View.psp) can inherit from this and access 'id' or anything
> created by the initialization code. Seems to work great. Do you see
> any problems with this approach? It would be nice to have the awake
> method in Page call a 'dummy' method, Init, or something that can be
> subclassed and used for initialization. Is this feature already present
> and I just missed it?
Ah, yes, that would cause that problem. No, there's no plan to change
it. However, I often do something like:
class SitePage(Page):
    def respond(self, trans):
        self.setup()
        Page.respond(self, trans)
        self.teardown()

    def setup(self): pass
    def teardown(self): pass
Ian
http://sourceforge.net/p/webware/mailman/message/13275580/
Anypoint Studio 7.4.2 Release Notes
February 6, 2020
Build ID: 202001311805
What’s New
Studio 7.4.2 release includes:
Performance improvements to address memory leaks and minimize background processing.
Upgrade of the Eclipse base version from 4.7 to 4.9.
Chromium Web Browser: migrated the browser integration infrastructure from XULRunner to Chromium, for better rendering based on the operating system's native browser.
Bug fixes.
Upgrading to This Version
If upgrading to this version from Studio's update site fails, you must download this new version from the download site and re-install it.
Fixed Issues
Duplicated namespaces now show as an error in Studio XML configuration file editor.
Fixed NullPointerException when right-clicking a project after opening the context menu in the Configuration XML editor and closing the editor.
Fixed issue initializing the MULE_LIB/org.mule.connectors/mule-sockets-connector/1.1.1 classpath container for projects wt-4.4.13.
Fixed issue on Linux where the Transform Message UI breaks after Edit current target.
Fixed error when importing a project created in 7.3.x in 7.4.x.
Fixed "Unable to resolve reference …" errors in Transform Message component editor in Studio 7.2.1.
Fixed issue initializing the MULE_LIB in Studio 7.3.5 and Mule runtime engine 4.2.2.
Fixed error where Studio shows an error when importing an existing project from filesystem.
Fixed error where Studio detected changes when no change had been made.
Fixed validation errors shown in RAML editor for valid RAML spec.
Fixed error where an M2 repository population was executed in every startup adding an overhead specially under windows.
Fixed error where updating an API only allowed the user to hit the refresh button once.
Fixed error where after updating an API design with a JSON schema and refresh in Studio, the metadata did not work.
Fixed issue where deleting the version of a Mule project from Exchange and then publishing it from Studio using the deleted version threw a 400 status response.
Fixed issue where an exception was thrown for SOAP:Header when using APIkit for SOAP 1.2.3.
Fixed issue where Studio ignored the selected Mule runtime engine version when importing a project from the filesystem.
Fixed issue where APIkit scaffolding failed when there was an existing previous file scaffolded.
Fixed issue where importing an API as a dependency when updating from runtime 4.2.1 to 4.2.2 failed.
Fixed error when resolving metadata when the Maven repository had spaces.
Fixed issue where Studio did not load multi-level metadata unless it was saved and re loaded.
Fixed issue where opening the "edit" button of a version, the "Cancel" and "Ok" buttons were not readable any longer.
Fixed issue where enabling "update snapshots" in studio 7.4 rendered the project unable to run.
Fixed issue where Studio did not enable the "Run" button under Run Configuration for applications in old workspaces.
Fixed issue preventing users to edit the version of a module.
Fixed issue where selecting an API or module in the dependencies view caused the edit button to disappear.
Fixed NullPointerException thrown after adding a configuration to Web Service Consumer module.
Fixed issues where metadata keys could not be retrieved when the configuration was located in a domain.
Fixed issue where a duplicated debug configuration was created when having a domain project.
Fixed issue where projects were copied to the workspace root folder when importing them from a folder inside the workspace.
Fixed issue where Studio could not import an API created in Studio 7.3.5 from Exchange.
Fixed issue where Studio update sites asked for certificates installation.
https://docs.mulesoft.com/release-notes/studio/anypoint-studio-7.4-with-4.2-runtime-update-site-2-release-notes
This is just basic C.
Hi all,
I am a noob in here. I was trying to work with a program that reads one line at a time from a text file, then allocates memory for the name and converts the number into an int. For example, "LIGHT BLUE 123456" is one line in the text file. The job here is to allocate memory for "LIGHT BLUE" and convert "123456" from char to int, then store it in an int array. Well, that is all done. However, I also have to sort the list of names in alphabetical order. Then, whatever name comes first, its number has to move along with it... the bold part is the part that I am stuck on.
Any help is appreciated.
this is my code for sort and compare 2 strings
Code:
void SortString(char *colorName[], int colorVal[], int index)
{
    int curr;      // the index starts at the second element from the left in the array of pointers
    int walk;      // the index of the element we want to compare
    char *hold;    // pointer to the string you want to compare with
    char *temp;    // pointer to the next value in the string to compare to
    int located;   // true or false

    for (curr = 1; curr < MAXMAIN; curr++)
    {
        hold = colorName[curr];  // assign the pointer at curr to hold
        located = 0;
        for (walk = curr - 1; walk >= 0 && !located;)
        {
            temp = colorName[walk];  // assign the first string of colorName into temp, ready for compare
            if (myStricmp(hold, temp) < 0)
            {
                colorName[walk + 1] = colorName[walk];
                //colorVal[walk + 1] = colorVal[walk];
                walk--;  // move 1 index
            }
            else
            {
                located = 1;
            }
        }
        colorName[walk + 1] = hold;
    }
    return;
}

int myStricmp(char str1[], char str2[])
{
    // pointer traversal version:
    while (tolower(*str1) == tolower(*str2) && *str1 != '\0') {
        ++str1;
        ++str2;
    }  // end while
    return (tolower(*str1) - tolower(*str2));
}  // end myStricmp
http://cboard.cprogramming.com/c-programming/138531-noob-need-help-too.html
Microsoft introduced the Zip operator in LINQ with .NET 4.0. I have taken three arrays for that: one is a string array, the second is an integer array with the same length as the string array, and the third is also an integer array but with a different length than the string array. Here is the sample code for that.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
string[] a = { "a", "b", "c", "d" };
int[] num = { 1, 2, 3, 4 };
int[] numdiff = { 1, 2, 3, };
Console.WriteLine("Zip example with same length");
var ZipResult = a.Zip(num,(ae, ne)=>ae + ", " + ne.ToString());
foreach (string s in ZipResult)
{
Console.WriteLine(s);
}
Console.WriteLine("\n\n\nZip example with different length");
ZipResult = a.Zip(numdiff,(ae, ne)=>ae + ", " + ne.ToString());
foreach (string s in ZipResult)
{
Console.WriteLine(s);
}
Console.ReadKey();
}
}
}
Hope this will help you.
http://www.dotnetjalps.com/2010/06/zip-operator-in-linq-with-net-40.html
From: Greg Colvin (gcolvin_at_[hidden])
Date: 2001-10-25 15:26:08
From: Ed Brey <edbrey_at_[hidden]>
> From: "Greg Colvin" <gcolvin_at_[hidden]>
> ...
> > I'd prefer long double constants.
>
> I was thinking double in response to Peter Dimov's point about literals defaulting to double. Pi would do so for consistency.
One argument I can see to the contrary is that with literals, you often don't have enough digits to matter, and if you do it's quite
obvious that you have a whole lot of digits. Pi would silently lose precision when long double is used.
>
> However, besides consistency, a long double Pi has the problem that it will trigger loss-of-precision warnings when used in the
common case manner (if the platform has different representations for double and long double). This seems unacceptable.
>
> I wish I had a great answer. The closest I can think of is to put double, long double, and float constants each in its own
namespaces, although that has its own problems. On the other hand, it also solves a problem that hasn't been explicitly presented
yet:
>
> Suppose someone wrote a non-generic program, but used typedefs because he wasn't sure of the precision he'd need. For example:
>
> typedef double Real;
> using namespace boost::math;
> Real foo = 4./3. * pi * r * r * r; // and a bunch of other neat math stuff
>
> This works fine for a while. Then the user finds a case where he'd prefer to trade off some speed for accuracy, he might want to
change the first two lines to work with long doubles instead of doubles. His literals are small, and so don't have any value to be
long double, but that isn't the case for pi. Having to change this to pi_l would be undesirable.
>
> But wait. The literals really do need to change, since unsigned long(4./3.) is not the same as 4.L/3.L. So you really do need to
write the entire program with a given precision in mind, or wrap everything in casts like "Real(4)". But if one is going to do all
that, wrapping pi as "Real(pi_l)" isn't much by comparison. And this would leave plain old pi available as a warning-free double.
>
I don't see the need for three flavors of pi, so if double is the best
default then that is all we should provide for unadorned use.
If someone is serious about parameterizing their program on floating-point
type, it seems they should do:
typedef double Real;
using namespace boost::math;
Real foo = Real(4.)/Real(3.) * constant<Real>(pi) * r * r * r;
Or if they need to use these constants a lot:
const Real Four_Thirds = Real(4) / Real(3);
const Real Pi = constant<Real>(pi);
> > Of course Beman says "Please do not make any design decisions for math
> > constants based on brain-dead compilers."
>
> Agreed. Of course, the same techniques that break compilers also tend to fight the language. Using the operator double()
approach, gone is the obviousness of parentheses meaning function and no parentheses meaning variable. Instead, what looks like a
variable actually has the restrictions of a function. This may be one of those cases when an exception to the norm is a good thing,
but such exceptions need to be examined closely.
What restrictions of a function do you mean? In my design pi is an
object, not a function. You can pass it by address or by reference.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/10/18891.php
Teppo works for a Finnish company that, among other things, develops a few mobile applications. This company is growing, and as growing companies do, it recently purchased another company.
One of the applications that came with this company had a mongrel past. It started as an in-house project, was shipped off to a vague bunch of contractors in Serbia with no known address, then back to an intern, before being left to grow wild with anyone who had a few minutes trying to fix it.
The resulting code logs in a mixture of Serbian and Finnish. Paths and IP addresses are hard-coded in, and mostly point to third party services that have long since stopped working. It has an internal ad-framework that doesn’t work. The Git repository has dozens of branches, with no indication which one actually builds the production versions of the application. The back-end server runs a
cron script containing lines like this:
* * * * * curl > ~/out.txt
* * * * * echo 'lalala' > ~/out1.txt
It’s a terrible application that doesn’t even “barely” work. The real test, of course, for an unsupportable mess of an application is this: how does it handle dates?
public static String getSratdate_time_date(String date) { String dtStart = date; try { SimpleDateFormat format = new SimpleDateFormat( "yyyy-MM-dd HH:mm:ss"); Date deals_date = format.parse(dtStart); String intMonth = (String) android.text.format.DateFormat.format( "M", deals_date); // Jan String year = (String) android.text.format.DateFormat.format( "yy", deals_date); // 2013 String day = (String) android.text.format.DateFormat.format( "dd", deals_date); // 20 return (intMonth + " / " + day); } catch (Exception e) { e.printStackTrace(); } return ""; }
This takes a string containing a date and converts it into a string containing "M / dd". You may note that I used a date format string to describe what this code does, since the easiest way to write this might have been something like
DateFormat.format("M / dd", deals_date), which doesn't seem to be that much of a leap, since they used the
DateFormat object.
Bonus points for using Hungarian notation, and triple that bonus for using it wrong.
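For comparison, the whole method collapses to one parse and one format call. A sketch using plain java.text.SimpleDateFormat — the android.text.format.DateFormat helper in the original is Android-only, so this stand-in uses the standard library, and DateDemo/startDate are illustrative names:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateDemo {
    // Parse "yyyy-MM-dd HH:mm:ss" and emit "M / dd" in one step,
    // matching what getSratdate_time_date builds by hand.
    static String startDate(String input) {
        try {
            Date d = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(input);
            return new SimpleDateFormat("M / dd").format(d);
        } catch (ParseException e) {
            return "";   // same silent-failure behavior as the original
        }
    }

    public static void main(String[] args) {
        System.out.println(startDate("2013-01-20 10:30:00"));
    }
}
```

(On a modern JVM, java.time's DateTimeFormatter would be the idiomatic choice, but the point stands either way: the unused year variable and the string concatenation in the original buy nothing.)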
http://thedailywtf.com/articles/a-dated-inheritance
Hi,
I am trying to use a stepper motor driver over UART, converted with a MAX485 chip. The driver needs RS485 two-wire half-duplex.
For my script I am using Python3.
However I can’t get any type of connection to my driver. I think It is because of the connection to/from the MAX485 chip.
With python code I can’t even see a connected serial port.
import serial.tools.list_ports print(list(serial.tools.list_ports.comports(include_links=True)))
Output: [ ] #no port detected
My schematic
NOTE: The expansion header shown is from a Raspberry Pi, because it was already in my library and is nearly identical to the Nvidia Nano's.
Best Regards
Chris
https://forums.developer.nvidia.com/t/rs485-with-uart-and-max485-with-nano-tx2/141598
In the first part, we walked through the process of setting up a Strapi application and creating products for our application. In this part, we will concentrate more on how to set up the Gridsome application. Afterward, we will set up some components, pages, and layouts to get the application ready.
Before we get started, the main prerequisite is having Node.js (with yarn or npm) installed on your machine.
With Node installed, you can install the Gridsome CLI by running the command below:
yarn global add @gridsome/cli
OR
npm install --global @gridsome/cli
Having installed the Gridsome CLI, you can use it to create a new Gridsome project by running the following command:
gridsome create mealzers
The above command will create a Gridsome project named mealzers. Navigate into the created project folder with:
cd mealzers
And start the project development server with:
gridsome develop
The project will be available for preview on the browser. Navigate to localhost:8080 to view the app.
There, you have a Gridsome application running on the browser in minutes. Next, let's add Vuetify for styling.
Vuetify is a UI framework built on top of Vue.js. Whenever I'm working on a Vue project, Vuetify is by default my goto UI framework for styling. It is the most popular design framework for Vue.js and it comes packaged with tons of prebuilt components.
Amongst other things, it is mobile-first which ensures that my applications perform responsively irrespective of device or orientation. It is also simple to understand and use. Let's quickly add Vuetify to our Gridsome application by running the installation command below:
# npm
npm install vuetify --save

# OR

# yarn
yarn add vuetify
Now that you've installed Vuetify, you need to register it as a plugin in your Gridsome application. You will also need to add Vuetify CSS file and a link to Google's material design icons to your app. To do that, copy the code snippet below into your project's main.js file and save:
import Vuetify from 'vuetify'
import 'vuetify/dist/vuetify.min.css'
import DefaultLayout from '~/layouts/Default.vue'

export default function (Vue, { appOptions, head }) {
  head.link.push({
    rel: 'stylesheet',
    href: '',
  })
  head.link.push({
    rel: 'stylesheet',
    href: '',
  });
  Vue.use(Vuetify)
  const opts = {}; // opts includes vuetify themes, icons, etc.
  appOptions.vuetify = new Vuetify(opts);
  // Set default layout as a global component
  Vue.component('Layout', DefaultLayout)
}
Next, you need to whitelist Vuetify in Webpack so that your application can build properly. To do that, first, you need to install the webpack-node-externals plugin with the command below:
npm install webpack-node-externals --save-dev
Then replace your project's gridsome.server.js file with the snippet below :
const nodeExternals = require('webpack-node-externals')

module.exports = function (api) {
  api.chainWebpack((config, { isServer }) => {
    if (isServer) {
      config.externals([
        nodeExternals({
          allowlist: [/^vuetify/]
        })
      ])
    }
  })

  api.loadSource(store => {
    // Use the Data store API here:
  })
}
Great! So we've just added Vuetify to our Gridsome project. But don't take my word for it. Let's quickly run the Gridsome app again and see if we observe any changes in the UI:
And we do. The presence of Vuetify in this application has already impacted the existing styles, which is why the app looks all jumbled up. Next, let's create a default layout that will cater to the application's header and footer needs. Let's start with the footer.
Create a src/components/Footer.vue file and add the following snippet to it:
<!-- src/components/Footer.vue -->
<template>
  <v-footer absolute padless>
    <v-card flat tile class="flex text-center">
      <v-card-text>
        <v-btn v-for="icon in icons" :key="icon" class="mx-4" icon>
          <v-icon size="24px">{{ icon }}</v-icon>
        </v-btn>
      </v-card-text>
      <v-card-text>
        …
      </v-card-text>
      <v-divider></v-divider>
      <v-card-text>
        {{ new Date().getFullYear() }} — <strong>Mealzers</strong>
      </v-card-text>
    </v-card>
  </v-footer>
</template>

<script>
export default {
  data: () => ({
    icons: ["mdi-facebook", "mdi-twitter", "mdi-linkedin", "mdi-instagram"],
  }),
};
</script>
Here, we defined a basic Vuetify footer component with a bunch of dummy text and social media icons. The icons are made available through the material design icons link we added in the applications main.js file.
Next, open the src/layouts/Default.vue file and update it with the code snippet below:
<!-- src/layouts/Default.vue -->
<template>
  <v-app>
    <div>
      <v-app-bar absolute>
        <v-app-bar-nav-icon @click.
        </v-app-bar-nav-icon>
        <g-link to="/">Mealzers</g-link>
        <v-spacer></v-spacer>
        <v-btn outlined rounded dense>
          <g-link to="/shop">Shop</g-link>
        </v-btn>
        <v-btn outlined rounded dense>
          <g-link to="/support">Support</g-link>
        </v-btn>
        <v-text-field hide-details single-
        <v-btn icon class="snipcart-checkout">
          <v-icon>mdi-cart</v-icon>
          <span class="snipcart-items-count"></span>
          <span class="snipcart-total-price">{{ this.totalPrice }}</span>
        </v-btn>
      </v-app-bar>
      <v-navigation-drawer v-model="drawer" absolute temporary>
        <v-list nav dense>
          <v-list-item-group v-model="group">
            <v-list-item>
              <v-list-item-title>
                <v-btn text>
                  <g-link to="/shop">Shop</g-link>
                </v-btn>
              </v-list-item-title>
            </v-list-item>
            <v-list-item>
              <v-list-item-title>
                <v-btn text>
                  <g-link to="/support">Support</g-link>
                </v-btn>
              </v-list-item-title>
            </v-list-item>
          </v-list-item-group>
        </v-list>
      </v-navigation-drawer>
      <v-sheet>
        <v-container></v-container>
      </v-sheet>
      <slot />
    </div>
    <Footer />
  </v-app>
</template>
In the template section above, we've defined a basic Vuetify header that features a navigation drawer and a few buttons to switch pages. We are using the custom Gridsome g-link component, which is Gridsome's built-in router link, for internal navigation instead of regular anchor tags.
At the moment, we have a shop page, and a support page, which we'll create later in the next part of this series. So don't bother about them just yet. Next, let set up the script section of the layout file above using the code snippet below:
<!-- src/layouts/Default.vue -->
<script>
import Footer from "@/components/Footer.vue";

export default {
  components: {
    Footer,
  },
  data() {
    const drawer = false;
    const group = null;
    return { drawer, group };
  },
  watch: {
    group() {
      this.drawer = false;
    },
  },
};
</script>

<static-query>
query {
  metadata {
    siteName
  }
}
</static-query>
If you save the changes and run the application again, you should get the following output on the browser where the app is running:
At the moment, the buttons and navigation on the app will lead to a 404 page. This is because we haven't created the /shop and /support pages yet. We'll get to that eventually in the next part of this series, but in the meantime, let's add some Hero images to make this app look even more like an e-commerce application.
Next, update your src/pages/index.vue file with the snippet below:
<template>
  <Layout>
    <template>
      <v-carousel cycle height="400" hide-delimiter-background :show-arrows="false">
        <v-carousel-item v-for="(image, i) in images" :key="i">
          <v-sheet height="100%">
            <v-row class="fill-height" align="center" justify="center">
              <div class="display-3">
                <img :src="image" />
              </div>
            </v-row>
          </v-sheet>
        </v-carousel-item>
      </v-carousel>
      <div class="separator"></div>
    </template>
  </Layout>
</template>

<script>
export default {
  metaInfo: {
    title: "Mealzers",
  },
  data() {
    return {
      images: [
        "",
        "",
        "",
        "",
      ],
    };
  },
};
</script>
Here, we define an array of image URLs and loop through them to display each one in the Vuetify v-carousel UI component we specified in the component template. If you save this snippet and check back on the browser, you should get an updated view like so:
We've come to the end of this part. Before we jump on to the next part, here's a quick recap of what we've done. In the first part, we set up a Strapi application and created a bunch of products we'll sell in our store. In this part, we set up a Gridsome application and modified the UI to get it ready. In the next part, we'll connect this application to display the products we created with Strapi using Gridsome's GraphQL data layer. See you in the next part.
The Complete React Native Guide to User Authentication with the Amplify Framework
Nader Dabit
In my previous post, The Complete Guide to User Authentication with the Amplify Framework, I walked through how to add username / password based authentication as well as OAuth with Facebook, Google or Amazon.
In this tutorial, I will be covering mobile authentication using React Native and AWS Amplify. This guide will cover both React Native and Expo. I will cover how to implement the following use cases:
- OAuth with Google & Facebook
- OAuth with Apple
- Hosted UI (Google + Apple + Facebook + Username & Password in one UI)
- Username & password authentication
- Protected routes
- Listening to authentication events
- Basic authentication with the withAuthenticator HOC
Getting Started
AWS Amplify provides Authentication APIs and building blocks for developers who want to create apps with real-world production-ready user authentication.
With Amplify you can incorporate username / password based authentication as well as OAuth with Facebook, Google, Amazon, or any third party OAuth provider such as Auth0 or Okta via OIDC.
We also provide a pre-built “Hosted UI” that provides a full OAuth + username / password flow with a single function call.
Introduction to Amazon Cognito
The Amplify Framework uses Amazon Cognito as the main authentication provider. Amazon Cognito User Pools is a managed user directory service that handles user registration, authentication, account recovery & other operations.
Amplify interfaces with Cognito to store user data, including federation with other OpenID providers like Facebook, and Google.
The Amplify CLI automates the access control policies for these AWS resources as well as provides fine grained access controls via GraphQL for protecting data in your APIs.
Most modern applications require multiple authentication options, i.e. Facebook login + Username / password login. Amazon Cognito makes this process easy by allowing you to use a single user registry to authenticate users across multiple authentication types.
In this post, you'll learn how to add authentication to your application using both OAuth as well as username & password login.
OAuth with Apple, Google, & Facebook
Installing the Amplify CLI
To build authentication into your application with Amplify, you first need to install the AWS Amplify CLI; installation instructions are available here.
Creating the React Native project
Next, we'll create the React Native application we'll be working with.
If using Expo
$ npx expo init rnamplify

> Choose a template: blank

$ cd rnamplify
$ npm install aws-amplify aws-amplify-react-native
If using the React Native CLI
$ npx react-native init rnamplify
$ cd rnamplify
$ npm install aws-amplify aws-amplify-react-native amazon-cognito-identity-js
$ cd ios
$ pod install --repo-update
$ cd ..
Creating the Amplify project
Now we can now initialize a new Amplify project from within the root of our React Native application:
$ amplify init
Here we'll be guided through a series of steps:
- Enter a name for the project: amplifyauth (or your preferred project name)
- Enter a name for the environment: local (or your preferred environment name)
Creating our App IDs
In our app we'll be having four types of authentication:
- Facebook (OAuth)
- Google (OAuth)
- Apple (OAuth)
- Cognito (username + password)
Next we'll need to create Apple, Facebook, & Google Apps in order to get an App ID & App Secret for each of them. For each provider that you'd like to enable, create App IDs by following the following instructions.
To see instructions for the Facebook setup click here.
To see instructions for the Google setup click here.
To see instructions for the Apple setup see the tutorial here. You only need to create the App ID, the Services ID, and the Private Key. You do not need to create a Client Secret. For the Services ID web domain and Return URL, leave it blank for now.
After you've created the Apple, Facebook, & Google OAuth credentials move on to the next step.
Creating & configuring the authentication service
Now that our Amplify project has been initialized & we have our App IDs & secrets from Apple, Facebook & Google we can add the authentication service.
As of this blog post, the Amplify CLI has not yet added support for Apple so we will need to do that in a separate step by enabling Apple directly in the dashboard. For now, we will only enable Google and Facebook from the CLI.
To add the authentication service, we can run the following command:
$ amplify add auth

# If you already have a project configured & want to now add Social login, run amplify update auth instead
This will walk you through a series of steps:
- Do you want to use the default authentication and security configuration? Default configuration with Social Provider (Federation)
- How do you want users to be able to sign in when using your Cognito User Pool? Username
- What attributes are required for signing up? Email
- What domain name prefix you want us to create for you? amplifyauthXXXXXXXXX (use default or create custom prefix)
- Enter your redirect signin URI: If you are using Expo: exp://127.0.0.1:19000/--/, if you are using the React Native CLI: myapp:// (this can be updated later for production environments)
- Do you want to add another redirect signin URI: N
- Enter your redirect signout URI: If you are using Expo: exp://127.0.0.1:19000/--/, if you are using the React Native CLI: myapp://
- Do you want to add another redirect signout URI: N
- Select the social providers you want to configure for your user pool: Choose Facebook & Google
In the above step we chose Default configuration with Social Provider (Federation). This will allow a combination of Username / Password signin with OAuth. If you only want Username / Password, you could choose Default configuration or Manual Configuration.
We also set the redirect URI. This is important and we will be updating the Expo or Xcode and Android Studio projects later on with this URI.
Finally, you'll be prompted for your App IDs & Secrets for both Facebook & Google, enter them & press enter to continue.
Now that the authentication service has successfully been configured, we can deploy the service by running the following command:
$ amplify push
After running amplify push you should see a success message & the OAuth Endpoint should also be logged out to the console:
The OAuth endpoint should look something like this:
This OAuth endpoint is also available for reference in src/aws-exports.js if you need it at any point, under the oauth -> domain key.
You will need to use this endpoint to finish configuring your Apple, Facebook, & Google OAuth providers.
Configuring Facebook
Next, open the Facebook app we created earlier & click on Basic in the left hand menu.
Scroll to the bottom & click Add Platform, then choose Website:
For the Site URL, input the OAuth Endpoint URL with /oauth2/idpresponse appended:
Save changes.
Next, type your OAuth Endpoint into App Domains:
Save changes.
Next, from the navigation bar choose Products and then Set up from Facebook Login & choose Web.
For the Valid OAuth Redirect URIs use the OAuth Endpoint + /oauth2/idpresponse. If you're prompted for the site URL, also use this endpoint (i.e.):
Save changes.
Make sure your app is Live by clicking the On switch at the top of the page.
Configuring Google
Now that Facebook has been configured we can now configure Google. To do so, let's go to the Google Developer Console & update our OAuth client.
Click on the client ID to update the settings.
Under Authorized JavaScript origins, add the OAuth Endpoint.
For the Authorized redirect URIs, add the OAuth Endpoint with /oauth2/idpresponse appended to the URL:
Save changes.
Configuring Apple
Open the Apple developer console, click on Certificates, IDs, & Profiles in the left hand menu, then click on Identifiers.
In the App IDs dropdown menu, choose Service IDs.
Click on the Service ID you created earlier, then click Configure next to Sign In with Apple.
Here, enter the Domain and Return URLs. The Domain should be the oauth domain value located in the aws-exports.js file. The Return URL will be a variation of the domain that looks like this: https://<domain>/oauth2/idpresponse
So, your return url could look something like:
NOTE You do not have to verify the domain because the verification is only required for a transaction method that Amazon Cognito does not use.
Adding Apple Sign In to the Cognito Service
The Amplify CLI integrated the Google and Facebook OAuth services with Amazon Cognito, but to enable Sign In with Apple we must go into the console and do it manually until the Amplify CLI adds this feature. To open the Cognito project, run the following command:
$ amplify console auth
? Which console: User Pool
In the left hand menu, choose Identity Providers. In this section, click on Sign in with Apple and enter the Apple Services ID, the Team ID, the Key ID, and upload the Private Key given to you from the Apple Developer Console.
Next, click on App client settings and enable Sign in with Apple for each app client.
Finally, click on Attribute mapping and be sure to enable email and map it to email.
Configuring local redirect URIs
In the amplify configuration step we set redirect URIs for the app to open back up after the user has been authenticated. Now, we need to enable these redirect URIs on our mobile project. These steps will differ depending on whether you are building with Expo or with the React Native CLI.
Expo - redirect URIs
If you are using expo, open the app.json file and add the following key value pair to the "expo" property:
{ "expo": { "scheme": "myapp", // other values } }
React Native CLI - redirect URIs
If you're using the React Native CLI and working with the native projects, you will need to configure both the Xcode project as well as the Android Studio project.
iOS - Xcode configuration
For iOS, open the Xcode project (in the ios folder, rnamplify.xcworkspace). Here, open info.plist as source code, and add the following:
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>myapp</string>
    </array>
  </dict>
</array>
For a full example, click here.
Android - Android Studio configuration
In Android Studio, open android/app/main/AndroidManifest.xml. In this file, add the following intent-filter:
<intent-filter>
  <action android:name="android.intent.action.VIEW" />
  <category android:name="android.intent.category.DEFAULT" />
  <category android:name="android.intent.category.BROWSABLE" />
  <data android:scheme="myapp" />
</intent-filter>
For a full example, click here.
Trying it out
Now, the project and the services are configured and we can start writing some JavaScript.
The first thing we need to do is configure the React Native project to use the Amplify credentials. To do so, open App.js (your app's entry point) and add the following code:

import Amplify from 'aws-amplify'
import config from './aws-exports'

Amplify.configure(config)
Now, we can test out the authentication APIs. To do so, we will be using the Auth.federatedSignIn methods. These methods allow us to launch either Federated Sign In with a single provider, or launch the Hosted UI for signing in with any provider.
// import the Auth class
import { Auth } from 'aws-amplify'

<Button
  title="Sign in with Google"
  onPress={() => Auth.federatedSignIn({ provider: "Google" })}
/>
<Button
  title="Sign in with Facebook"
  onPress={() => Auth.federatedSignIn({ provider: "Facebook" })}
/>
<Button
  title="Sign in with Apple"
  onPress={() => Auth.federatedSignIn({ provider: "SignInWithApple" })}
/>
<Button
  title="Launch Hosted UI"
  onPress={() => Auth.federatedSignIn()}
/>
Next, run the app to test everything out:
# If using Expo
$ expo start

# If not using Expo
$ npx react-native run-ios
# or
$ npx react-native run-android
After signing in, we can test out a couple of other things.
To check the currently signed-in user's credentials:
const user = await Auth.currentAuthenticatedUser().catch(err => console.log(err))
Using the above method, we can also check if the current user is signed in at any time. If they are not signed in, the error message will tell us that no user is signed in.
If they are signed in, the user object will be populated with all of the metadata of the signed in user.
To sign out the current user:
await Auth.signOut()
Username + Password authentication
With our current setup, we can also sign up and sign out users with a username and password.
To do so, we need to capture their info in a form. Using the Auth class, we can handle many different scenarios, including but not limited to:
- Confirming sign up (MFA)
- Confirming sign in (MFA)
- Resetting password
Let's take a look how to sign a user up. This is a very basic example that does not take into account switching form state between sign up, sign in, and confirming sign up or sign in. A more detailed example is linked below.
// import the Auth class
import { Auth } from 'aws-amplify'

// store the form state
state = { username: '', email: '', password: '' }

// sign the user up
signUp = async () => {
  const { username, email, password } = this.state
  await Auth.signUp({ username, password, attributes: { email }})
  console.log('user successfully signed up')
}
To view all of the methods available on the Auth class, check out the documentation here.
If you're interested in how to create a custom authentication flow, check out the components in this example, or just check out the entire React Native Authentication Starter here.
Protected / Private Routes
When creating a custom authentication flow, the one thing you need to deal with is protected or private routes.
Protected routes are routes or views that you do not want accessible to certain users. In our example, we will implement protected routes to redirect users who are not signed in and allow users who are signed in to proceed.
The example I will show is assuming you are using React Navigation, but if you are using a different navigation library the idea will still be the same.
Essentially what you need to do is have a listener for a route change, or if you are using React Navigation you can hook directly into the route change using the onNavigationStateChange prop.
In this function, we can get the user's credentials and check if the user is signed in. If they are signed in, we allow them to continue to the next route.
If they are not signed in, we redirect them to the Authentication screen:
import React from 'react'
import {
  createSwitchNavigator, createAppContainer, NavigationActions
} from 'react-navigation'
import Auth from './nav/auth/Auth'
import MainNav from './nav/main/MainNav'
import { Auth as AmplifyAuth } from 'aws-amplify'

const SwitchNav = createSwitchNavigator({
  Auth: { screen: Auth },
  MainNav: { screen: MainNav }
})

const Nav = createAppContainer(SwitchNav)

class App extends React.Component {
  checkAuth = async () => {
    try {
      await AmplifyAuth.currentAuthenticatedUser()
    } catch (err) {
      this.navigator.dispatch(
        NavigationActions.navigate({ routeName: 'Auth' })
      )
    }
  }
  render() {
    return (
      <Nav
        ref={nav => this.navigator = nav}
        onNavigationStateChange={this.checkAuth}
      />
    )
  }
}

export default App
For a complete implementation, check out the example project here.
Listening to authentication events
There is a listener we can initialize that will listen to changes in our authentication state and allow us to have access to the type of authentication event that happened and update the application state based on that data.
With Amplify, the Hub module allows us to do this pretty easily:
import { Hub } from 'aws-amplify';

Hub.listen('auth', (data) => {
  const { payload } = data;
  console.log('A new auth event has happened: ', payload.data.username + ' has ' + payload.event);
})
Basic authentication with the withAuthenticator HOC
If you're just looking to get up and running with basic username + password authentication, you can use the withAuthenticator HOC.
This component will put authentication in front of any component in your app with just a couple of lines of code:
import { withAuthenticator } from "aws-amplify-react-native";

class App extends React.Component {
  /* your code */
}

export default withAuthenticator(App)
@dabit3 are these deprecated?
Hey, no you can still use these fields directly in the AppSync schema itself, without the GraphQL Transform library, to set authorization rules.
Oh! Thanks. Now I get it.
Thanks @dabit3 for the article, I have a question tho.
Do you need to eject from expo to use this flow?
Because on one of my application I am using AsyncStorage to keep the user token.
But facebook/react-native's AsyncStorage is deprecated in favor of @react-native-community/async-storage
I just wanted to know your opinion on this 😉
Awesome article, i have been struggling to get a decent reactnative app with authentication.
But after reading this, im going to try again using this guide in combination with FusionAuth.io as oauth provider.
Thx!
Great article
Thanks!!
Amaizing, really thanks!!
Terrific article Nader! How to configure Sign in with Apple through withAuthenticator HOC? Not clear what exactly to pass into the federated attribute...
Nader, Hey!! is it possible that the next guide is about how to make push notifications with amplify from 0 or connected it with this guide? Thanks again!!!
Print last n lines of file
A lot of times, when we are debugging production systems, we go through the logs being generated by systems. To see the logs which are most recent, we commonly use tail -n functionality of Unix.
Tail -n functionality prints the last n lines of each FILE to standard output
After going through many interview experiences at Microsoft, I found that this question regularly features in the majority of interviews. Let’s take an example and see what to expect out of the functionality.
The first thing we notice about this problem is that we have to print the last n lines. It means we have to maintain some kind of order. If we want the last line first, this is typical LIFO, which is implemented using the stack data structure.
However, another constraint is that we have to print at most n lines. In that case, if the number of lines on the stack goes beyond n, we will remove some lines from it. Which lines should be removed? We will remove the lines which came first: unstack all the lines from the stack, remove the first line, and then put all remaining lines back onto the stack.
To read, we simply pop lines from the top of the stack until it is empty, which gives us the last n lines of the file.
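To make the stack bookkeeping concrete, here is a minimal Python sketch of the approach just described (the function names are mine, not from the article); it returns the last n lines newest first, as popping a stack would:

```python
def push_line(stack, line, n):
    # Keep at most n lines on the stack; evict the oldest when full.
    if len(stack) < n:
        stack.append(line)
        return
    # Unstack everything, drop the oldest (bottom) line, restack the
    # rest, then push the new line -- this eviction costs O(n).
    tmp = []
    while stack:
        tmp.append(stack.pop())
    tmp.pop()            # the last item popped into tmp is the oldest line
    while tmp:
        stack.append(tmp.pop())
    stack.append(line)

def last_n_lines(lines, n):
    stack = []
    for line in lines:
        push_line(stack, line, n)
    # Reading top-down yields the last n lines, newest first (LIFO).
    return stack[::-1]
```

The O(n) eviction on every push once the stack is full is exactly why the queue-based variant below is preferable for a file that keeps growing.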
Also, the tail functionality of Unix prints the lines in forward order rather than reverse order. If we are implementing true tail functionality, the order will be FIFO rather than LIFO. But make sure that you clarify this with the interviewer.
The complexity of reading n lines is O(n), and putting a new line also takes O(n). If the stack is implemented using a linked list, we do not require additional memory.
What if the file is continuously written to, and tail happens only occasionally? As mentioned above, the stack solution has O(n) complexity to put every line, which is not ideal here. tail -f actually requires that the output grows as things are added to the file.
-f, --follow[={name|descriptor}] output appended data as the file grows;
What if we optimize the writing part using a queue to store the last n lines of the file? Imagine that at some point the queue holds the last n lines of the file. Now if a new line comes, we can add it at the tail of the queue and remove the oldest line from the front. If we keep track of the tail of the queue, the insertion and removal operations both become O(1).
To read the lines in reverse, we have to traverse the queue from tail to front. This suggests that a doubly linked list should be used to implement the queue: with a tail pointer, we can always traverse the queue in reverse order. The complexity of reading n lines will still be O(n). Real tail does not require reverse order; you can print the entire queue in FIFO manner. However, it is good to mention in the interview why you chose a DLL over a singly linked list to implement the queue.
Print last n lines of a file: Algorithm
- For every line being added to the file, do the following:
- If size of queue is less than n, we simply enqueue the line in queue.
- If size of queue is greater than n, dequeue line from front and enqueue new line at the end.
If you are tailing an existing file, then read the whole file line by line and do the last two operations in the algorithm.
Print last n lines: implementation
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SIZE 500
#define true 1
#define false 0

typedef struct queue_l {
    char data[MAX_SIZE];
    struct queue_l *next;
    struct queue_l *prev;
} Queue;

typedef struct dummyNode {
    int size;
    struct queue_l *front;
    struct queue_l *tail;
} dummyNode;

/* Below are the routine functions: init queue, enqueue, dequeue, queue empty etc. */
void initializeQueue(dummyNode **q) {
    *q = (dummyNode *)malloc(sizeof(dummyNode));
    if (*q) {
        (*q)->front = NULL;
        (*q)->tail = NULL;
        (*q)->size = 0;
    }
}

int isEmpty(dummyNode *q) {
    if (!(q->size))
        return true;
    return false;
}

Queue *enqueue(dummyNode *q, char *elem) {
    Queue *newNode = (Queue *)malloc(sizeof(Queue));
    if (newNode) {
        strcpy(newNode->data, elem);
        newNode->next = NULL;
        newNode->prev = q->tail;
        if (q->tail) {
            q->tail->next = newNode;
        }
        q->tail = newNode;
        if (!q->front)
            q->front = newNode;
        q->size++;
    }
    return newNode;
}

char *dequeue(dummyNode *d) {
    /* Copy the line into a static buffer so the returned string
       stays valid after the node is freed. */
    static char deleted[MAX_SIZE];

    if (isEmpty(d)) {
        printf("\n Queue is empty");
        return NULL;
    }
    Queue *q = d->front;
    d->front = q->next;
    if (q->next)
        q->next->prev = NULL;
    else
        d->tail = NULL;
    strcpy(deleted, q->data);
    free(q);
    d->size--;
    return deleted;
}

/* Keep only the last n lines: enqueue until full, then drop the
   oldest line from the front before enqueueing the new one. */
void update_lines(dummyNode *d, char *s, int n) {
    if (d->size < n) {
        enqueue(d, s);
    } else {
        dequeue(d);
        enqueue(d, s);
    }
}

/* Print the stored lines front to tail (oldest first), like tail does. */
void print_queue(dummyNode *d) {
    Queue *cur;
    for (cur = d->front; cur != NULL; cur = cur->next) {
        printf("%s", cur->data);
    }
}

int main() {
    dummyNode *d = NULL;
    int n = 10;
    initializeQueue(&d);

    char line[MAX_SIZE], *result;
    FILE *stream;

    /* Open the file */
    stream = fopen("problems.txt", "rb");
    if (!stream) {
        printf("Unable to open file\n");
        return 1;
    }

    /* Read lines one by one */
    while ((result = fgets(line, MAX_SIZE, stream)) != NULL) {
        update_lines(d, line, n);
    }
    fclose(stream);

    print_queue(d);
    return 0;
}
Please share if there is something wrong or missing. If you are preparing for an interview and want coaching session to prepare you fast, please book a free session with us.
|
Difference between revisions of "Blow your mind"
From HaskellWiki
Revision as of 13:39, 1 March 2006
Useful, Cool, Magical Idioms
this collection is supposed to be comprised of short, useful, cool, magical examples, which incite curiosity in the reader and (hopefully) lead him to a deeper understanding of advanced haskell concepts. at a later time i might add explanations to the more obscure solutions. i've also started providing several alternatives to give more insight into the interrelations of solutions.
whoever has any more ideas, please feel free to just add them; if you see mistakes or simpler solutions please correct my chaotic collection. i'm very interested in more "obscure" solutions, which showcase the applicability of haskell's (unique) features (i.e. monad magic, folds and unfolds, fix points, ...)
-- splitting in twos (alternating) -- "1234567" -> ("1357", "246") foldr (\a (x,y) -> (a:y,x)) ([],[])
-- splitting in N -- 2 -> "1234567" -> ["12", "34", "56", "7"] List.unfoldr (\a -> if null a then Nothing else Just $ splitAt 2 a) "1234567"
fst . until (null . snd) (\(a,b) -> let (x,y) = splitAt 2 b in (a++[x],y)) $ ([], "1234567")
-- split at whitespace -- "hello world" -> ["hello","world"] words
fst . until (null . snd) (\(a,b) -> let (x,y) = break (==' ') b in (a++[x], drop 1 y)) $ ([], "hello world")
-- combinations -- "12" -> "45" -> ["14", "15", "24", "25"] sequence ["12", "45"] [x:[y] | x <- "12", y <- "45"] "12" >>= \a -> "45" >>= \b -> return (a:[b])
-- factorial -- 6 -> 720 product [1..6] foldl1 (*) [1..6] (!!6) $ unfoldr (\(n,f) -> Just (f, (n+1,f*n))) (1,1) fix (\f (n,g) -> if n <= 0 then g else f (n-1,g*n)) (6,1)
-- interspersing with newlines
-- ["hello","world"] -> "hello\nworld"
unlines
concat . intersperse "\n"
-- sorting by a custom function
-- length -> ["abc", "ab", "a"] -> ["a", "ab", "abc"]
sortBy (comparing length)
map snd . sortBy (comparing fst) . map (length &&& id)

-- powers of two
iterate (*2) 1
unfoldr (\z -> Just (z,2*z)) 1
-- simulating lisp's cond case () of () | 1 > 2 -> True | 3 < 4 -> False | otherwise -> True
-- add indices to list for later use
-- [3,3,3] -> [(0,3),(1,3),(2,3)]
zip [0..]

-- fibonacci series
unfoldr (\(f1,f2) -> Just (f1,(f2,f1+f2))) (0,1)
fibs = 0:1:zipWith (+) fibs (tail fibs)
fib = 0:scanl (+) 1 fib
-- unjust'ify list of Maybe's -- [Just 4, Nothing, Just 3] -> [4,3] catMaybes
-- find substring -- "ell" -> "hello" -> True substr a b = any (a `elem`) $ map inits (tails b)
--" -- putStrLn "hello" >> return "hello"
-- match a constructor -- this is better than applying all the arguments, because this way the data type can be changed without touching the code (ideally). case a of Just{} -> True _ -> False
-- prime numbers -- example of a memoising caf (??) primes = sieve [2..] where sieve (p:x) = p : sieve [ n | n <- x, n `mod` p > 0 ]
{- either maybe group fun with monad, monadPlus liftM list monad vs comprehensions -}
Inmates sit in crowded conditions at the California Institute for Men in Chino, Calif.
By Navid, June 20, 2011 at 3:11 am Link to this comment
(Unregistered commenter)
Kudos to you! I hadn’t thgohut of that!
By LocalHero, May 29, 2011 at 10:10 am Link to this comment
As far as I’m concerned, it’s a badge of honor to be put in prison by this boot-licking, Fascist state.
As a percentage of the population, there are more sociopaths and psychopaths in government and the military than there are in prison.
By MarthaA, May 28, 2011 at 4:44 pm Link to this comment
TruthOut, May 28 at 1:49 pm,
“Why am I supposed to feel sorry for people who COMMIT CRIMES,
then are sentenced to JAIL? Why are any of you?” —
TruthOut, May 28 at 1:49 pm (unregistered commenter)
You tell me, because sorry is NOT a remedy.
But first, tell me what a crime is -
Is a crime an act of patriotism on behalf of the best interest of
yourself and the community?
Is a crime an act of civil disobedience against an oppressive and
tyrannical government?
Is a crime withdrawal from participation in and support of
tyrannical and oppressive governance?
What is it that constitutes a crime?
Is it a crime to want to be free?
Is it a crime not to have life and a standard of life worth living?
Is it a crime to want to have an equivalent standard of happiness
as others in a society?
Is it a crime to want to have an equivalent standard of justice as
others in a society?
Is it a crime to not want to be hungry?
Is it a crime to not want to be uneducated?
Is it a crime to want equal opportunity and benefit from society?
Is it a crime to criminalize others for wanting the same thing that
those who are advantaged in society have?
What is a crime?
Because this nation has more people in jail than China and more
people in jail than all other nations combined on the face of the
earth, we as a nation, the United States, must start to look inward
and determine “what is a crime and who is the criminal?”
Is the government of the United States of America a criminal
regime that is oppressing and tyrannizing its people by harsh and
wrongful use of power and authority that criminalizes the general
population as a means of political control? ———
And, if this is not the case, what is the reason that so many more
people are criminalized in the United States than the combined
total of all other nations on the face of the earth?
By TruthOut, May 28, 2011 at 12:49 pm Link to this comment
(Unregistered commenter)
Why am I supposed to feel sorry for people who COMMIT CRIMES, then are
sentenced to JAIL? Why are any of you?
By MarthaA, May 28, 2011 at 7:29 am Link to this comment
RAE, May 28 at 5:12 am,
“Why is it that so many fail to understand that humans WILL
NEVER COOPERATE with those who mistreat them?” —RAE,
May 28 at 5:12 am
They may not cooperate, but they accept the frame of that
mistreatment as legitimate..
Cows will never be patriots to the Farmer.
By MarthaA, May 28, 2011 at 7:25 am Link to this comment
“Why is it that so many fail to understand that humans WILL NEVER
COOPERATE with those who mistreat them?” —RAE, May 28 at
5:12 am.
By RAE, May 28, 2011 at 4:12 am Link to this comment
So much rhetoric. So little understanding. Pathetic.
I wonder how many decades must pass before it dawns on “authority” that ANY system that chooses the “lock-em-up-and-throw-away-the-key” model of justice, which is little else than brainless revenge, only serves to exacerbate the situation.
Admittedly, it is difficult and expensive to “deal with” miscreants in society in a manner that seeks to understand and resolve the underlying reasons for their offensive behavior, and to provide appropriate and effective programs to correct the situation. It is difficult to remember that when you’re up to your ass in aligators that the original objective was to drain the swamp thereby PREVENTING much of the problem.
It is far easier and FAR MORE EXPENSIVE to just build more prisons (aka lucrative businesses), pass racially biased draconian laws, and create a thriving criminal justice system which profits the elite at the expense of everyone else.
Why is it that so many fail to understand that humans WILL NEVER COOPERATE with those who mistreat them?
By zonth_zonth, May 27, 2011 at 11:22 pm Link to this comment
“you have no empathy for your fellow man”
Spoken like someone who is not trafficking with fellow man. Insular, Insulated, typical upper echelon American ignorant of human nature.
By MarthaA, May 27, 2011 at 2:40 pm Link to this comment
Russell Smith, May 27 at 12:45 pm,
A sick system, as you say, reflects a sick society.
The American Populace needs to start questioning how it is that a
nation that claims to be the world’s leading democracy, can have
more people in prison than China, more people in prison than the
rest of the world combined. Is this exemplary of life, liberty, and
the pursuit of happiness with freedom and justice for all
as the world’s leading democracy?
Or, is this exemplary?
My evaluation is that it is exemplary of the latter, rather than the
former; that.
This is What a Police State Looks Like:
America the Great .... Police State:
U.S. Has Highest Income Inequality Rate in the World:
By Russell Smith, May 27, 2011 at 11:45 am Link to this comment
(Unregistered commenter)
I’m glad Kennedy went with the Lefties this time around. Prison conditions in
California and across these United States are deplorable. People die in prison all
the time from lack of medical attention or murder. The state has a responsibility
to provide a safe harbor for these people. I agree that the problem goes deeper
than the 3 strikes law, but that law did not help the situation at all. Black men are
looking at 50% unemployment. That is unconscionable. Many kids would rather
flip burgers than sell dope on the street, but those jobs have been taken by men
and women who are trying to feed their families. I lived in Wheeling, WV in the
Eighties, and grown men were delivering papers because jobs were so scarce. Is
West Virginia the future of America? Also, privatization of prisons creates an
incentive to imprison people. Simple economics shows that these private prisons
must grow to increase profit margins. It’s a sick system and should be done away
with. As usual, in California, legislators pass the buck systematically.
By Steve E, May 26, 2011 at 4:55 pm Link to this comment
The “law industry” has always been very lucrative.
By rollzone, May 26, 2011 at 4:51 pm Link to this comment
hello. in the photo, as is the case in the Chino men’s
homeless shelter, and many others across the state:
there are no disproportionate “African Americans”. it
is mostly “Hispanics”, since racial is key, and 3
strikes is not the cause. unemployment, and the lack of
jobs is the cause; and all the chasing of business out
of the state. it can all be traced to the EPA- that’s
the cause of prison overcrowding: regulations and
taxes. the system profits once they are in it, but very
few have a better choice than crime, or a better way to
survive than three hots and a cot. they came for jobs.
By MarthaA, May 26, 2011 at 3:02 pm Link to this comment
PatrickHenry, May 26 at 3:13 pm,
Where is your mercy for your fellow man?
By PatrickHenry, May 26, 2011 at 2:13 pm Link to this comment
Build a big prison over in Afghanistan and send them there.
We outsource everything else.
By MarthaA, May 26, 2011 at 12:06 pm Link to this comment
The prison systems in all states all over the United States should
never have been privatized for profit. Usury in all forms has been the
downfall of our nation.
By politicky, May 26, 2011 at 10:53 am Link to this comment
The immigrant bashers who don’t know my state might
continue to read, please.
WHY ARE CALIFORNIA’S PRISONS SO CROWDED?
A lot of California’s overcrowding problem is the result of the
state’s punitive “three strikes law,” which puts third time
offenders in jail for life, regardless of their crime.
Read more:-
prison-disaster-heres-what-you-need-to-know-2011-
5#ixzz1NUCXHL00
By gerard, May 26, 2011 at 10:51 am Link to this comment
“... a mandatory sentence of 25 years to life for anyone convicted of a felony if the person has two previous felonies”
That right there is the heart of prison problems. It frankly admits that two previous sentences served under the degrading prison/private profit “system” has not done anything to prevent recidivism. So the “criminal” commits to “criminality” again—naturally, since that is probably all the “criminal” knows, plus he’s the victim of a ruinous label.
So ... time to try the Alternatives to Violence Project—people who are trained in helping violent people (which means almost anybody, but particularly “criminals”) think about alternatives to the past behaviors that have landed them in jail in the first place. It offers multiple chances to learn new ways of thinking, look for new ways to live that are more rewarding than the crime/punishment stalemate.
Some enlightened wardens have instituted these programs, but they are all too few. Some prison guards and police are trained in these methods, but again, not many. But in places where Alternatives to
Violence are conscientiously adopted, taught and followed, great improvements have occurred.
There was a time fifty years or so ago when prisons were not so overcrowded and where rehabilitation was looked upon more carefully. But since we are into fighting “wars on this or that” here and there, progress on alternatives to violence have almost dropped from sight in the public mind. They are still there, waiting to be promoted and used. (Google Alternatives to Violence Project and hook yourself in. Help is always needed.)
By John M, May 26, 2011 at 6:57 am Link to this comment
Kennedy was never a conservative - and since California
has 27000 illegal alien felons in it’s prisons they
should just turn them over to the feds or dump them
back over the border, problem solved. But they continue
to be a sanctuary state.. glad i don’t live there.
By drbhelthi, May 26, 2011 at 12:10 am Link to this comment
Violating immigration laws for two terms, only to imprison too many
immigrants for pittance offenses, further demonstrates the tendency of
the “Terminator-Adulterator,” Arnold Schwarzenegger. A product of NAZI
lineage, Schwarzenegger made a speech in support of “Junior Bush” at
the Republican national convention, 2007. With his Austrian village
accent, A. Schwarzenegger repeated in American, numerous “victory”
statements of A. Hitler. His performance was a remarkably accurate
monkey-see, monkey-do replication. The report of senior journalist,
John Buchanan, reveals details:
It is logical that a few million parents would like to see this NAZI
import return to his home village, and take all the socially-destructive
movies, with their destructive social influence, with him. Such a
healthy, social action might weaken the furtive nazification of the
U.S.A. by the NAZI-GHWBushSr entourage. It is sad that such examples of
asocial deportment, basically mental derangement, receive wide
publication, while upstanding Austrians in the U.S. never receive
mention. Is this simply further influence of nazi-ism in the media ?
Which nazi-ism has spread to the legal system of the U.S.A., not just
California.
Now, let us see which of the NAZI shills, Terminator-Adulterator-type
sympathizers and wanna-bees, jump in to call me “racist,” “Jew-hater,”
“bat-shit-crazy,” and other scapegoat-terms they typically apply in
their refutations. While it has been variously attempted, refutation of
“George Bush: The Unauthorized Biography,” by Webster G. Tarpley & Anton
Chaitkin, is attempted only by rash idiots. The asocial illogic of
which idiots is sometimes discernable in Truthdig blogs.
mprotect - set protection on a region of memory
Synopsis
Description
Errors
Notes
Example
Program source
Colophon
#include <sys/mman.h>
int mprotect(void *addr, size_t len, int prot);
mprotect() changes the access protections for the calling process's memory pages containing any part of the address range in the interval [addr, addr+len-1]. addr must be aligned to a page boundary.
On success, mprotect() returns zero. On error, -1 is returned, and errno is set appropriately.
SVr4, POSIX.1-2001. POSIX says that the behavior of mprotect() is unspecified if it is applied to a region of memory that was not obtained via mmap(2).
On Linux it is always permissible to call mprotect() on any address in a process's address space.
The full man page includes an example program demonstrating the use of mprotect().
mmap(2), sysconf(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
CodeGuru Forums > Visual C++ & C++ Programming > C++ (Non Visual C++ Issues)
Simple question (involving rand() i think)
jaeger
August 14th, 2002, 01:17 PM
I'm making a black jack program as an excersize (mostly in structures and such) and i'm having a problem with the deal() method.
the problem seems to be that a) the random number seems to be 13 all the time ('k' is the output twice) and after i enter anything for the cin, it gives me a bus error and quits. any ideas?
(the player struct has a few ints (money, bet) and the name and hand (a 2-element array) for the player.)
this is the code for that method:
void deal(person player)
{
int i;
char askhit[3];
cout<<"You got: ";
for(i = 0; i < 3; i++) {
player.hand[i] = (rand()%13 + 1);
if (player.hand[i] == 11) player.hand[i] = 'j';
if (player.hand[i] == 12) player.hand[i] = 'q';
if (player.hand[i] == 13) player.hand[i] = 'k';
cout<<player.hand[i]<<" ";
}
cout<<endl;
do {
cout<<"Would you like to hit?"<<endl;
cin>>askhit>>endl;
if (askhit[0] == 'y' || askhit[0] == 'Y') hit(player);
} while (askhit[0] == 'y' || askhit[0] == 'Y');
}
kuphryn
August 14th, 2002, 01:30 PM
What kind of data type is hand? You are assigning hand an integer and then assigning it a character.
Kuphryn
jaeger
August 14th, 2002, 01:37 PM
char hand[2];
can't chars hold ints and characters?
katy
kuphryn
August 14th, 2002, 04:37 PM
No, a character cannot be an integer. An array of characters can be "13," but you will need a function like atoi() to convert it to an integer 13.
In your algorithm, use a temp integer to hold the random number.
Kuphryn
jaeger
August 14th, 2002, 07:29 PM
oh, ok... so what should i do to convert only the digits 11-13 to the chars 'j, q, k' respectively (or 'jack, queen, king' if it doesn't make much difference)?
kuphryn
August 14th, 2002, 07:49 PM
I made some changes to your algorithm.
-----
void deal(person player)
{
int randNum = 0;
char askhit[3];
cout<<"You got: ";
for(int i = 0; i < 3; ++i)
{
// I assume you want random [1, 13]
randNum = static_cast<int>(((static_cast<double>(rand())) / (static_cast<double>(RAND_MAX + 1))) * 13 + 1);
if (randNum == 11)
player.hand[i] = 'j';
if (randNum == 12)
player.hand[i] = 'q';
if (randNum == 13)
player.hand[i] = 'k';
cout<<player.hand[i]<<" ";
}
-----
Kuphryn
Graham
August 15th, 2002, 04:33 AM
A character can hold an integer: a signed char can hold a value in the range -128..+127 and an unsigned char can hold a value in the range 0..255. Where your code goes wrong is that you use 'j', 'q' and 'k' for the court cards, but you don't use '1', '2', etc for the others. In ASCII coding, the character '1' has the value 49, '2' has the value 50 and so on. So, for consistency your code should read:
player.hand[i] = (rand()%13 + 1);
if (player.hand[i] == 11)
player.hand[i] = 'j';
else if (player.hand[i] == 12)
player.hand[i] = 'q';
else if (player.hand[i] == 13)
player.hand[i] = 'k';
else if (player.hand[i] == 10)
player.hand[i] = '0'; // Needs a single char representation for "10"
else
player.hand[i] += '0'; // Get character representation of value.
Characters whose values lie between 0 and 31 (in ASCII) are generally not printable characters, so you won't see many of them in your printout (you might get the odd tab character coming out).
The more serious problem is that you say that "hand" is declared as char hand[2], yet you are accessing it in a loop that goes from 0 to 2 - i.e. you're going out of bounds in the array access.
A secondary problem is that you are not modelling a deck of cards: since each card you "deal" is simply a random number between 1 and thirteen, there is no account taken of the loss of a card once it has been "dealt" - e.g. if you deal the king of hearts, then you can't deal it again until the pack has been restored. A better approach may be to model a full deck, then randomise it and just keep picking cards off the top to deal them.
jaeger
August 15th, 2002, 09:03 AM
ok... that code makes sense. and yeah, the deck problem is something i planned to tackle maybe in my second edition of the program. but since you brought it up, how would you suggest i do it? i just started working in C++ so i don't know much about the possibilities for stacks or things like that that would model a real deck.
thanks kuphryn and Graham for your help!
katy
jfaust
August 15th, 2002, 11:09 AM
One way to do it, which conceptually maps well to the real world, is to use std::vector of Cards, which is basically what a deck is.
enum Suit
{
Hearts,
Diamonds,
Clubs,
Spades
};
struct Card
{
Suit suit;
unsigned int value;
};
typedef std::vector<Card> Deck;
....
Deck deck;
Initialize the deck by filling it with all the cards.
When dealing, pick a random number from the total remaining cards. When it's dealt, remove it from the deck.
Sounds like a fun problem to work on.
Jeff
jfaust
August 15th, 2002, 11:11 AM
Or, even better, initialize the deck and then "shuffle" it, which will randomly reorder it. Then deal off the top.
The closer a solution models the real world the better. Now, you just need free drinks and good buffets...
Jeff
Paul McKenzie
August 15th, 2002, 01:43 PM
To add to Jeff's suggestion, use the random_shuffle() algorithm function to shuffle the deck.
#include <algorithm>
#include <vector>
//...
typedef std::vector<Card> Deck;
using namespace std;
int main()
{
Deck deck(52);
//...Assume it is initialized
random_shuffle(deck.begin(), deck.end()); // Shuffle the cards
}
Regards,
Paul McKenzie
AnthonyMai
August 15th, 2002, 03:42 PM
<B>It is an extremely bad idea to use Paul's random_shuffle() function to shuffle the deck as the OP requested.</B> Actually it is so bad that you are going to lose big money on the bets.
Why? The OP's intent was clearly to shuffle the deck in a <B>random and un-predictable</B> way so that no one can guess the cards, for the purpose of gaming or gambling. When you use random_shuffle(), the result is quite predictable, causing your opponent to always be able to guess your card and win your money.
There was precedent: a couple of years ago an on-line gambling site so trusted the security of their deck-shuffling algorithm that they published the code publicly; the result was that it was cracked without a hassle and every card was completely predictable. It was a true story. The problem was not with the algorithm itself, but with the random number generator.
The rand() function may be useful for statistics purposes. But it is totally useless when it comes to cryptographically secure applications. Its sequence is completely predictable, especially when it has not been seeded. random_shuffle() relies on rand() internally, and hence it is unsafe.
Designing a cryptographically secure application is very difficult. It all boils down to introducing un-predictable elements into the system, which in physics is called entropy.
It is very hard to obtain entropy on a computer because all computers are deterministic; virtually all methods of collecting entropy on a computer involve some sort of highly platform dependent component.
For example, on a PC, you can use the RDTSC to get the current clock count, you can scan the whole memory space to get a checksum, you can measure the accurate time interval between two key strikes, etc. and a good thing to do is hash your data a couple of thousand times before you seed your random number generator. And above all, never use rand() or any thing that relies on rand().
To sum up, I do not see how random_shuffle() in STL has any real usage, except for as some homework assignment for students.
AnthonyMai
August 15th, 2002, 03:48 PM
I suggest people read Counterpane's CryptoGram and other security/cryptography related literature. Very few people know anything at all about concepts in the cryptography field; that is a big problem today when IT is more and more vulnerable to information technology security flaws.
jfaust
August 15th, 2002, 03:53 PM
Most uses of random numbers don't need to be secure. Usually you just want a decent distribution of results
Case in point is the OP, which is a game. This is a perfectly good use for rand(). Unless this is going to be used for actually gambling for cash, it satisfies all the needs of a game.
But true, coming up with good random numbers is difficult, which can be seen by the large range of available pseudo-random number generators. Even with these, it is very important (and difficult) to pick a good starting value.
But in this case, rand() is completely suitable, as is random_shuffle<>.
Jeff
jfaust
August 15th, 2002, 04:02 PM
Furthermore, random_shuffle can take a predicate argument, so you don't need to use rand().
Jeff
PaulWendt
August 15th, 2002, 04:13 PM
Originally posted by AnthonyMai
<B>It is an extremely bad idea to use Paul's random_shuffle() function to shuffle the deck as the OP requested.</B>
On the contrary, I think it is an extremely good idea to use Paul's
random_shuffle() function. The original poster stated:
"I'm making a black jack program as an excersize [sic]...."
This is clearly not a complicated gambling simulation; nor is it
going to be a security/cryptography piece of software.
I don't understand why you'd bother to make this post, given the
fact that you stated:
"To sum up, I do not see how random_shuffle() in STL has any real usage, except for as some homework assignment for students."
This is exactly what the original poster is doing: doing an exercise
so that [s]he can learn about C/C++. My guess is that you're
still going about trying to make Paul McKenzie look bad.
Unfortunately, you've failed at this attempt too; better luck next
time.
--Paul
Paul McKenzie
August 15th, 2002, 04:58 PM
Originally posted by jfaust
Furthermore, random_shuffle can take a predicate argument, so you don't need to use rand().
Jeff Hello Jeff,
Anyone who is versed in standard C++ would have known this or would have taken just a few seconds to look it up. Here is the definition, for those who wish to criticize before they research:
Note the second definition which takes a predicate.
Regards,
Paul McKenzie
Graham
August 15th, 2002, 05:01 PM
One of the guarantees given in the standard for random_shuffle is that any given random sequence from N objects has a probability of appearing of exactly 1/N! - i.e there is no "clumpiness" to the result: it's evenly distributed over all possibilities.
BTW, I wouldn't use a vector<card>, I'd use a deque<card> :D .
Graham
August 15th, 2002, 05:02 PM
Paul got his reply in as I was typing mine - look at the link under "Notes" for the behaviour guarantee.
jfaust
August 15th, 2002, 05:13 PM
Graham,
Do you recommend deque because you simply can't resist a pun, or is it because a certain someone says that nobody ever uses deque? ;)
Actually a deque would work better here since you are only taking off the end. vector would be preferrable if I didn't improve my first example by shuffling, as random access in a deque is done in linear time, not constant time.
Although I fear shuffling a deque might be slower.
Jeff
Bob Davis
August 15th, 2002, 05:24 PM
Instead of using if/else statements or switch/case statements to determine the card, why not create an enum type for the card's value? I remember in a C++ class I took, it was the professor's example for creating an enum.
enum Value
{
One = 1,
Two,
Three,
Four,
Five,
Six,
Seven,
Eight,
Nine,
Ten,
Jack,
Queen,
King,
Ace
};
Makes it a bit prettier to output that way. Ignore it if you like, I just always love a good enum.
jfaust
August 15th, 2002, 05:36 PM
I agree. An enum works well here. Then when you have a method to, say, get the numeric value of the card, the switch statement can reference Ten, Jack, Queen and King for value of 10 instead of integers.
Jeff
Graham
August 16th, 2002, 03:31 AM
Jeff: I could argue that I foresaw a member function called cheat() that dealt off the bottom, but really I just liked the pun.
jaeger
August 16th, 2002, 09:22 AM
This got into quite the discussion... no, I don't need a hardcore, secure randomizer. I just want to shuffle a deck a little.
I've never used enum (I'm really a beginner, so I apologize for asking really basic question), so how does it work? do I put it outside all the functions and then it is accessible from deal() and it just treats it like a constant?
(and out of curiosity: what's a deque?)
Thanks everyone!
katy
Graham
August 16th, 2002, 10:03 AM
a deque is a Double-Ended QUEue (pronounced "deck", hence the pun).
You can think of an enum as a way of declaring lots of constants in one go. there's a bit more to it than that but, at this stage, it'll do. I'm not sure that an enum provides too much value for money in this case (apart from what Jeff said about referring to Jack, etc, in the code), but it's not a bad idea to learn about them.
With a properly object-oriented design, you'll often find enums declared inside classes (it's also a work around for one of VC++'s many failings).
class some_class
{
public:
enum { relevant_constant1 = 1, relevant_constant2 = 17 };
// etc;
};
// then refer to the values in code as:
if (some_value == some_class::relevant_constant1)
{
// whatever
}
AnthonyMai
August 16th, 2002, 02:08 PM
Do you recommend deque because you simply can't resist a pun, or is it because a certain someone says that nobody ever uses deque?
I guess you referred to me as that someone. Once again you are putting words into my mouth. I am not God; I do not have the capability of surveying every walk of life on this planet to find out who uses what and who doesn't use what. Therefore I never make such assertions like "no body uses so and so" or "no body does so and so".
Read my lips! What I said was:
There is NOT a single thing in this world that any one can think of that can NOT be done WITHOUT using deque.
So there is this strange two-headed snake called deque. So what? Two-headed snakes do exist. Is that a big deal? What about three-headed snake?
Deque is a fancy invention. But I don't see it any more useful than a tri-que, quad-que, quant-que, or even deca-que, hexa-que. You can invent any thing you want. But it remains that anything that you do using deque CAN BE DONE WITHOUT it, with NO sacrifice of convenience or code efficiency or readability whatsoever.
Graham
August 16th, 2002, 03:19 PM
Yeah. And all languages are equally powerful. There's no program that you can write in C++ that can't be written in FORTRAN or COBOL or PL1 or whatever. We don't need classes, recursion, bitwise operators, switch statements or any other paraphernalia of modern languages. All you need is a conditional statement, a loop construct that can be potentially infinite, a goto statement, somewhere to put your data and some basic manipulation methods. There is NOT a single thing in this world that anyone can think of that can NOT be done WITHOUT using C++. Now, anyone know where I can get a copy of masm?
AnthonyMai
August 16th, 2002, 03:36 PM
Graham:
You know I talked bout deque in the context of C++ programming. I am not talking about lawyers or doctors using deque. I am talking about programmers doing C++ programming.
Can you tell me just one thing for which using deque is superior to other methods? The mere fact that a lot of C++ programmers would ask "just what is deque" says how rarely deque is ever used at all.
Talking about assembly, it is a fact that this IT world can live without Java, or C++.net or VBScript or any other high level languages. But IT simply can NOT live without assembly language. It is as true as saying programming languages are useless without the existence of CPUs.
Yves M
August 16th, 2002, 04:17 PM
Well, if you're using a stack, would you rather use a vector instead of a deque?
The point of deque is that when you use it, you know that you're only supposed to take stuff off the end (or beginning) and add stuff there. It is a self-imposed restriction that makes for nicer and better readable code.
Why do people write comments in their code? They are totally useless, they are even ignored by the compiler! Hum, weird, isn't it?
jfaust
August 16th, 2002, 08:02 PM
I used deque recently for creating an undo/redo stack. It had everything I needed.
And Anthony, you do use definitives such as "always". It's one of the things I dislike most about your posts.
Therefore I never make such assertions
Which is an assertion as well as false.
Jeff
cup
August 17th, 2002, 01:36 AM
Haven't written card games since Uni. Anyway, this is how I used to do it.
// Declare a deck of cards
const int VALMAX = 13; // 14 for some European sets
const int SUITMAX = 4;
const int DECKMAX = VALMAX * SUITMAX;
int deck[DECKMAX];
// Initialize the deck
remain = DECKMAX;
for (int i = 0; i < remain; i++)
deck[i] = i;
// Deal the next one
int choice = rand () % remain;
card = deck[choice];
// replace the chosen card with the last card in the remaining deck
remain--;
deck[choice] = deck[remain];
// Decoding
int value = card % VALMAX;
int suit = card / VALMAX;
It is sort of inline shuffling.
jaeger
August 19th, 2002, 12:07 PM
i'm sorry again, but what's a vector? i'm familiar with it in terms of math (as in, a magnitude and direction), but is that what a vector in C++ is?
thanks,
katy
cup
August 19th, 2002, 12:18 PM
In STL, you can think of it as an array where you don't need to worry about the size. For instance, you could start with an array of integers with no elements
vector<int> victor;
To add an element, say 200, into the vector
victor.push_back (200);
It will grow to the appropriate size. To retrieve the 10th element
int xxx = victor[9];
To modify the values
victor[9] = something;
That is about as simple as I can make it.
Disclaimer: I presume we have all written one or more APIs at some point in our careers, otherwise you would not have bumped into this article. The article does not describe what a REST API is; rather, you should have some basic knowledge of REST APIs before going through the article.
Introduction:
Generally, when we write a REST API, we focus a lot on implementation & very little on designing a proper request/response schema and API resource models. We jot down all the necessary request & response parameters (in most cases only the HTTP 200-series responses) in a document, get it reviewed quickly, create some resource models accordingly & jump into implementation. This strategy works in small to mid-size companies or startups. But once you start designing APIs for a larger audience in an enterprise company or a consumer internet company, you can't so easily get away with such minimalist design: a lot of stakeholders / teams are involved in the process, a lot of systems might get involved, so a homogeneous & consistent design strategy has to be developed which every stakeholder in your organization (even outside organizations as well) can relate & contribute to. The OpenAPI specification (henceforth called OAS) solves that problem. It's a standard for describing an API comprehensively in a machine- & human-understandable format. The idea is that the API should be self-sufficient enough to describe itself in granular detail.
Major Advantages of having API design standard:
- It forces you to get a ‘Design First’ attitude. Don't delve into implementation quickly; spend enough time on designing the API & rectify all possible mistakes first.
- It promotes collaboration. Organizations should have a common standard so that all developers understand the API descriptions & documentation without gaining specialised knowledge for each & every API; API design review becomes better when everyone follows the same standard. A tools ecosystem can be built around this standard which can auto-validate the API design, auto-generate code & documentation, and enhance security for all APIs.
- API consumption becomes easier if both the API creator & consumer (who might be in different organizations) express the API in a common design standard. So you can standardize API design across teams, hence cutting down the effort developers spend understanding each API in a different way.
A Brief About OpenAPI Standard:
In 2009, a guy called Tony Tam started working on an API specification standard that would become the Swagger API specification. In August 2012, he published version 1.1 of the Swagger spec while working at Reverb Technologies. In March 2014, version 1.2 was published, which was more inclined towards the JSON Schema Draft 4 standard. This was the first version to gain widespread adoption across the API industry, and it is still in use by some API providers today. Swagger specification 2.0 was published in September 2014 with a lot of changes & improvements. In March 2015, SmartBear Technologies acquired interest in the Swagger intellectual property & other open source technologies from Reverb Technologies. In December 2015, the Swagger specification was donated by SmartBear to a new open-governance organization set up under the Linux Foundation: the OpenAPI Initiative. These ten companies were the founding members of the OpenAPI initiative: 3Scale, Apigee, CapitalOne, Google, IBM, Intuit, Microsoft, PayPal, Restlet and SmartBear. Currently around 30 companies are active members of the OpenAPI initiative. In July 2017, the OpenAPI initiative announced the release of OAS v3.0.0; this specification conforms to JSON Schema Draft 5 & introduced some changes around the schema and new concepts like links, callbacks etc.
A Brief About Swagger:
Swagger is a tools ecosystem built around OpenAPI specification. Following are some capabilities of different Swagger tools:
- Swagger Editor: Swagger provides both online (called SwaggerHub) & offline (downloadable UI) interfaces where developers can write API specifications in YAML format; the editor validates the design in real time, checks compatibility with the OAS standard, detects errors on the fly & shows them visually. So it simplifies API development & helps developers accurately model resources & APIs.
- Swagger Codegen: This tool can generate API boilerplate code & API models (server stubs) etc from the specification in 20+ languages. This greatly reduces developers’ effort to manually write those code. Swagger Codegen can generate client side code to consume the APIs. This tool can generate client side SDK as well in-case developers want to use SDK at the client app.
- Mocking API: SwaggerHub provides the capability to mock the APIs defined in the specification on the fly. This facility greatly helps to test APIs in a cleaner way without spending extra money & time to create mock servers during the development life cycle. The offline Swagger UI does not provide this capability.
- API Documentation: Maintaining API documentation manually is hard, as the API keeps evolving & you don't create a new API version for every minor change. Swagger has out of the box capability to create & sync documentation from the OAS. In case you want to generate documentation for an already existing API, you can use Swagger Inflector to create documentation at run time using annotations, or, using Swagger Inspector, you can hit an API endpoint, generate an OpenAPI specification from this interface & generate documentation from that specification as well.
- API Testing: Using Swagger Inspector, you can hit end point with proper request & check out the response.
In many cases, your company might not allow you to use SwaggerHub to maintain APIs, because SwaggerHub is not free (it is free only for very limited use) & your organization might not trust it. So in order to facilitate the development process, you might need to install Swagger UI on your local machine. Follow the instructions below to create a local Swagger environment where you can create & maintain APIs without restrictions:
- We will install Swagger UI & Swagger Editor using docker. So install docker first. Download docker here, follow the instructions according to your operating system.
- Install Swagger UI, run:
docker pull swaggerapi/swagger-ui
- Run Swagger UI:
docker run -p 8000:8080 swaggerapi/swagger-ui
This will run the UI on host port 8000. Go to your browser & open http://localhost:8000. You will see Swagger UI running.
- Install Swagger Editor:
docker pull swaggerapi/swagger-editor
- Run Swagger Editor:
docker run -p 81:8080 swaggerapi/swagger-editor. This will run the editor on port 81.
- Create a project folder in any favourable location. I have created mine under the
/home/kousik/gitrepo folder (I have an Ubuntu machine). My project name is
DummyApiSpec, so the complete directory path of the project is
/home/kousik/gitrepo/DummyApiSpec. You can clone my project & see the code:
- Swagger UI needs a URL to fetch the API specification. Since we are hosting Swagger locally, we have to serve the specification file from a
localhost server. Hence we will install a simple HTTP file server in Python & use it to serve any file which resides in
/home/kousik/gitrepo or any of its child directories. Go to the folder
/home/kousik/gitrepo, create a file called
server.py & paste the following code:
#!/usr/bin/env python
try:
    # Python 3
    from http.server import HTTPServer, SimpleHTTPRequestHandler, test as test_orig
    import sys
    def test(*args):
        test_orig(*args, port=int(sys.argv[1]) if len(sys.argv) > 1 else 8000)
except ImportError:
    # Python 2
    from BaseHTTPServer import HTTPServer, test
    from SimpleHTTPServer import SimpleHTTPRequestHandler

if __name__ == '__main__':
    test(SimpleHTTPRequestHandler, HTTPServer)
You can go to the folder containing
server.py file & run this simple server issuing the following command:
python server.py
This will by default run the server at port
8000. In order to run the server at any particular port, you can run it like:
python server.py 8100
The above command will run the server at port 8100. This creates a local HTTP file server that can serve any file residing under its parent directory or any nested sub-directory. The parent directory acts as the base directory of the web server; other paths must be relative to this base directory. Since my project
DummyApiSpec &
server.py reside under
/home/kousik/gitrepo, I can access the API spec from localhost via its path relative to this base directory. Arrange the directories accordingly & make sure that you are able to access the API spec file from localhost.
Hands-On with OpenAPI specification:
We will create an API specification to better understand different aspects of OAS v2.0 & v3.0.
Let’s say you want to create a User service (micro service) which owns all user data. Say this service has functionality to create a random user, which only works in a sandbox / testing environment. You can also get specific user details by querying the service with the corresponding user id. User details contain the id, name, date of birth, gender, contact details, user home location, user device preferences etc. We will create the OpenAPI specification for creating & getting the user details using Swagger.
Go to the location where you created the project. In my case the project name is
DummyApiSpec & location is:
/home/kousik/gitrepo/DummyApiSpec.
The directory structure should look as shown in the image below:
I am using IntelliJ IDEA; any IDE should show the same structure. The
schema folder under
openapi-3.0 contains the specification file
spec.json defined in accordance with OAS v3.0. The file
spec.json inside
swagger-2.0 is defined according to the Swagger 2.0 specification; remember, OAS v3.0 is derived from & an improved version of Swagger specification v2.0. The
components folder contains all reusable API resource models in separate JSON files. For the time being you can just create the folders as shown in the picture. You can ignore the
gradle-related folders & files.
We will see how to create OAS specification in both v2.0 & v3.0 below & compare them.
Creating Random User Generation API with OAS v2.0:
OAS v2.0 is the most popular OAS version used today. It has the following schema structure. All the colourful rectangular blocks represent the different components at the global / root level of the specification.
The following gist is the JSON representation of the OAS v2.0 spec of our random user generation API. Let’s decode its different components.
The first section in the JSON file is
'swagger', which declares which specification version the file follows.
The key
info maps to an object that contains basic information about the API like the API version, title, basic description, developer contact details etc. Put your own details accordingly.
The keys
schemes,
host &
basePath together represent the API server URL where the API is supposed to be hosted; according to the above spec, these combine to form the full API server URL. When you use Swagger UI or SwaggerHub to test any API mentioned in the specification file, they internally use this API server URL & all API requests hit this address to get data.
The
securityDefinitions section declares all security schemes which our API supports. It does not apply any of these schemes to any API; it only defines the available schemes (several are declared here purely for example purposes).
The
securityDefinitions object contains several keys like
BasicAuth,
ApiKey,
AppId,
SessionKey &
OAuth2. These are arbitrary names; you can use any name to represent a security scheme. What matters is the object each key maps to:
BasicAuth represents Basic Authorization through the built-in
HTTP mechanism.
HTTP supports sending an
Authorization header that contains the word ‘Basic’ followed by a space & a base64-encoded
username:password string. Example
Authorization header:
Authorization: Basic 63jYu7uu38uqt356q=. This mechanism is not secure by itself, since base64-encoded strings can be trivially decoded; use
HTTPS to make it more secure.
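As an illustration of the encoding step (the credentials below are made up), the header can be built in a few lines of Python:

```python
import base64

def basic_auth_header(username, password):
    # "username:password" is base64-encoded and prefixed with "Basic ".
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Anyone can reverse this encoding, which is why Basic auth
# should only ever travel over HTTPS.
print(basic_auth_header("tony", "stark123"))  # Basic dG9ueTpzdGFyazEyMw==
```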
ApiKey /
AppId /
SessionKey each map to a JSON object that represents an authentication key-value pair passed through a header or query key:
"ApiKey": {
"type": "apiKey",
"in": "header",
"name": "X-API-KEY"
},
"AppId": {
"type": "apiKey",
"in": "header",
"name": "X-APP-ID"
},
"SessionKey": {
"type": "apiKey",
"in": "query",
"name": "SESSION-ID"
}
Here, the key
type has the value
apiKey (a Swagger / OAS defined type), the key
in tells where the key has to be passed (either the
header or the
query parameter section), &
name gives the name of the key.
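A small sketch (with made-up credential values) of how a client could attach such a key based on the scheme’s in & name fields:

```python
def apply_api_key(scheme, headers, query, key_value):
    # Attach the key to the request location declared by the scheme.
    if scheme["in"] == "header":
        headers[scheme["name"]] = key_value
    elif scheme["in"] == "query":
        query[scheme["name"]] = key_value

headers, query = {}, {}
apply_api_key({"type": "apiKey", "in": "header", "name": "X-API-KEY"},
              headers, query, "supersecret")
apply_api_key({"type": "apiKey", "in": "query", "name": "SESSION-ID"},
              headers, query, "abc123")
print(headers)  # {'X-API-KEY': 'supersecret'}
print(query)    # {'SESSION-ID': 'abc123'}
```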
OAuth2 represents OAuth2 standard authorization scheme:
"OAuth2": {
"type": "oauth2",
"flow": "accessCode",
"authorizationUrl": "",
"tokenUrl": "",
"scopes": {
"read": "Grants read access to user resources",
"write": "Grants write access to user resources",
"admin": "Grants read and write access to administrative information"
}
}
OAuth2 requires some
scopes; based on these scopes, clients are granted permission to access the corresponding resources.
The
security section defines which authentication / authorization scheme is imposed on which API. We have defined a global
security section at the root level that is inherited by every API defined in the spec; we will see later how the authorization mechanism can be overridden at the individual API level.
security is a list of schemes wrapped in JSON objects like:
"security": [
{
"OAuth2": ["read"]
}
]
When multiple JSON objects are present inside the list, the schemes represented by any one of the objects will work: multiple schemes represented as separate JSON objects are in a
logical OR relationship. Example: look at the
security section of the
GET /users/{user_id} API:
"security": [
{
"OAuth2": ["read"]
},
{
"ApiKey": [],
"AppId": []
},
{
"BasicAuth": []
},
{
"SessionKey": []
}
]
In the above list, four authentication schemes are applicable for this API (for demonstration purposes). All these schemes are in a
logical OR relationship, so providing proper data for any one of them will pass the authorization check. For a
logical AND relationship, you put multiple schemes inside the same JSON object. Consider the following portion of the above snippet:
{
"ApiKey": [],
"AppId": []
}
You need to provide both the API key & the App Id to pass this combined authorization, since both are part of the same JSON object: this is a
logical AND relationship.
In each JSON object, the key name should exactly match one of the scheme names defined in the
securityDefinitions section; in this example the key name is
OAuth2, which is registered inside
securityDefinitions, & the value is a list of scopes (here the
read scope is specified).
More on Swagger / OAS v2.0 authentication here.
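The OR-between-objects / AND-within-an-object rule can be sketched as the check a gateway might perform (the function itself is illustrative, not part of Swagger):

```python
def is_authorized(security, satisfied_schemes):
    # `security` is a list of requirement objects. The request passes if ALL
    # scheme names inside ANY single object are satisfied: objects are OR-ed,
    # schemes within one object are AND-ed.
    return any(all(name in satisfied_schemes for name in req) for req in security)

security = [
    {"OAuth2": ["read"]},
    {"ApiKey": [], "AppId": []},
]
print(is_authorized(security, {"ApiKey", "AppId"}))  # True: AND inside one object
print(is_authorized(security, {"ApiKey"}))           # False: AppId is missing
print(is_authorized(security, {"OAuth2"}))           # True: the first object matches
```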
The keys
consumes &
produces map to lists of MIME types that the APIs can consume in a request body (not applicable to GET requests, obviously) & produce as a response. The top-level MIME types are inherited by all defined APIs, although an individual API can override them.
More on Swagger / OAS v2.0 MIME Types here.
The
paths section describes all API end points & operations. Let’s explore the
/users API to understand all aspects.
"/users": {
"description": "User provides some basic details & this api creates a random user with those details",
"summary": "POST api to create a random user",
"operationId": "generate_user.post",
"deprecated": false,
"produces": [
"application/json"
],
"security": [
{
"OAuth2": ["read", "write"]
}
],
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"description": "The user provided basic details will be used to create a random user",
"schema": {
"$ref": "#/definitions/UserBasicDetails"
}
},
{
"$ref": "../../components/header_request_id.json"
},
{
"$ref": "../../components/header_client_id.json"
}
],
"responses": {
"201": {
....
},
"400": {
....
},
"409": {
....
}
},
"tags": [
"users"
],
"externalDocs": {
"url": "",
"description": "See more on user operations."
}
}
}
In OAS terminology, an API end point like
/users is called a path. All paths are relative to the API server URL as mentioned earlier, & the associated HTTP verbs are called operations. Since we want to create a random user, our
/users end point is associated with the
post operation (the HTTP verb POST).
Swagger 2.0 supports get, post, put, patch, delete, head, and options.
Swagger defines a unique operation as a combination of a path and an HTTP method. This means that two GET or two POST methods for the same path are not allowed — even if they have different parameters (parameters have no effect on uniqueness).
The keyword
operationId is optional, but if provided, it should be unique for each operation.
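A toy registry (names are made up) showing why a second POST on the same path is rejected:

```python
class OperationRegistry:
    """A unique Swagger operation is identified by (path, HTTP method)."""
    def __init__(self):
        self.ops = {}

    def register(self, path, method, operation_id):
        key = (path, method.lower())
        if key in self.ops:
            raise ValueError(f"duplicate operation for {key}")
        self.ops[key] = operation_id

reg = OperationRegistry()
reg.register("/users", "post", "generate_user.post")
reg.register("/users/{user_id}", "get", "users.get")
try:
    reg.register("/users", "POST", "another.post")  # same path + method
except ValueError as e:
    print(e)  # duplicate operation for ('/users', 'post')
```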
deprecated indicates whether the API is going to be decommissioned soon; it’s useful if the API is already being used by some client but you have a new version or an alternative API.
We have already used
produces to specify a list of MIME types at the global / root level; we specify it again in the
post operation of the
/users API just to show how to override properties inherited from the root level. Once overridden, the root-level value has no impact on this operation. You can override
consumes in a similar way.
We have also overridden the
security schemes inside this operation: since we are creating a user here, we have used the
OAuth2 security scheme with the
read &
write scopes only. Once overridden, the global-level
security schemes are no longer imposed on this operation.
The
parameters section is a list of parameters represented as JSON objects. The
in key of a parameter object indicates the location where the parameter is expected. Based on location, these are the types of parameters:
Body Parameters: This kind of parameter is expected as part of the request body; the location is signified by
"in": "body" inside the parameter object. Not applicable for HTTP GET requests.
Query Parameter: If you want to expose parameters in URL like:
/api/users?attributes=location,devices, here
attributes is a query parameter, it’s signified by
"in": "query" inside the parameter object.
Path Parameter: If you want to represent API path as a URL like this:
/api/user/{user_id}, here
user_id wrapped inside
{} is a mandatory path parameter. It’s signified by
"in": "path" in the parameter object.
More on Swagger v2.0 parameters can be found here.
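To see where each parameter kind ends up in a concrete request, here is an illustrative sketch (the host name is made up):

```python
from urllib.parse import urlencode

def build_request(base, path_template, path_params, query_params):
    path = path_template.format(**path_params)  # path parameters fill the {} slots
    query = urlencode(query_params)             # query parameters go after '?'
    return f"{base}{path}?{query}" if query else f"{base}{path}"

url = build_request("https://userservice.com/api",
                    "/users/{user_id}",
                    {"user_id": "42"},
                    {"minimal": "true"})
print(url)  # https://userservice.com/api/users/42?minimal=true
```

Body parameters would travel in the HTTP payload, and header parameters in the request headers.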
required specifies whether the parameter is mandatory; the parameter is mandatory when this key’s value is set to
true.
schema is used to describe primitive values, as well as simple arrays and objects serialized into a string.
$ref lets you refer to content defined in another location: another JSON file, the global
definitions section of the API specification, a file hosted on another server, etc. In our specification, the following snippet says that the request body of this particular API operation is defined under the global
definitions section as a JSON object mapped to the key
UserBasicDetails.
"schema": {
"$ref": "#/definitions/UserBasicDetails"
}
$ref can also refer to any JSON file lying under any directory, you just need to provide the relative path of that JSON file with respect to the current JSON file. Example:
{
"$ref": "../../components/header_request_id.json"
}
The above snippet means there exists a file called
header_request_id.json in the directory called
components, which lives two levels above our specification file; refer to the directory structure image to understand properly.
More details on
$ref here.
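A local $ref like #/definitions/UserBasicDetails is essentially a JSON pointer into the same document; a minimal illustrative resolver:

```python
def resolve_local_ref(spec, ref):
    # Walk the spec following each segment of a local '#/...' reference.
    node = spec
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

spec = {"definitions": {"UserBasicDetails": {"type": "object"}}}
print(resolve_local_ref(spec, "#/definitions/UserBasicDetails"))  # {'type': 'object'}
```

File-based references such as ../../components/user.json would instead be loaded from disk relative to the current file.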
The
responses section under any API operation defines all possible response schemas for that operation. It’s a JSON object where the key names are HTTP status codes like
200,
400 etc & the mapped values define the schema & metadata of the corresponding response. Your API response for that particular HTTP status code should follow exactly this structure. Let’s see the response structure for HTTP status 201 when the user object gets created:
"201": {
"description": "",
"headers": {
"X-RateLimit-Limit": {
"description": "Number of requests per time window (hour / 5 min / 15 min)",
"type": "integer"
},
"X-RateLimit-Remaining": {
"description": "Number of requests remaining per time window (hour / 5 min / 15 min)",
"type": "integer"
}
},
"schema": {
"$ref": "../../components/user.json"
}
}
The response structure has a general
description of the response, the
headers & the actual response
schema. In our API, the
X-RateLimit-Limit &
X-RateLimit-Remaining headers are defined with their own description & type; you can put any suitable header keys & types in a similar way.
Again, the actual response body model resides in the
user.json file in the
components folder & we use
$ref to refer to that file via a path relative to the current file (our
spec.json).
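As a sketch of the assumed semantics of these rate-limit headers (the windowing logic is simplified), a server might compute them like this:

```python
def rate_limit_headers(limit, used):
    # Values a server could attach for the current time window.
    return {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(limit - used, 0)),
    }

print(rate_limit_headers(100, 30))
# {'X-RateLimit-Limit': '100', 'X-RateLimit-Remaining': '70'}
```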
The
tags section inside an API operation contains a list of tag names which are used by tools like Swagger UI to logically group operations in the UI. It’s nothing but a grouping of operations for better clarity, & it’s an optional field.
At the end of the specification file, there is a global
tags section, defined as an array of JSON objects:
"tags": [
{
"name": "users",
"description": "User related operations exposed by this service",
"externalDocs": {
"url": ""
}
}
]
This section is also optional; it describes all tags used at the individual API operation level as we saw earlier. We used the
users tag in individual API operations as shown already; here we describe those tags with a
name,
description &
externalDocs for further information, if any. The tag names in these objects should exactly match the tags of the individual API operations. If any tag name mismatches, an unnecessary group will be created & shown in the UI.
The
externalDocs section defines extra documentation for API operations & tags. This is an optional field.
Representation of reusable Objects / API components:
In real life, many of our APIs which relate to the same business problem or domain end up using many common components or resource models. So it does not make sense to write those components again & again for each specification; rather, you can place them in a common module or a git repository & reuse them. In our directory structure, the
components folder holds all reusable components. Reusable components can be defined in the following ways:
1. “definitions” section: Swagger / OAS 2.0 defines a global
definitions section where you can define all resource models. Example: in our specification, we defined
UserBasicDetails model just for demonstration. You can define any number of models as per your need.
"definitions": {
"UserBasicDetails": {
"title": "UserBasicDetails",
"type": "object",
"properties": {
"name": {
"example": "Tony Stark",
"type": "string"
},
"gender": {
"example": "Male",
"type": "string"
}
},
"required": [
"name",
"gender"
]
}
}
We referred to this section from the
parameters section of
POST /users api like following:
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"description": "The user provided basic details will be used to create a random user",
"schema": {
"$ref": "#/definitions/UserBasicDetails"
},
......
]
#/definitions/YourModelName refers to
YourModelName under the global definitions section.
2. Accessing models residing in a different directory: We have already talked about placing models in any directory like
components in our case & accessing JSON files residing in those locations through relative path like:
"schema": {
"$ref": "../../components/user.json"
}
This method is particularly useful when you don’t want to put all models in the same specification file rather maintain a modular directory structure.
Some Important Data Types:
Let’s see the content of the file
components/dob.json which represents date of birth of a person:
{
"required" : [ "age", "date" ],
"type" : "object",
"properties" : {
"date" : {
"maxLength" : 50,
"minLength" : 1,
"type" : "string",
"format" : "date-time"
},
"age" : {
"type" : "number",
"format" : "integer",
"minimum": 10,
"maximum": 60
}
}
}
We have two properties here. The
date property represents date of birth of the person. Type of this field is
string , we have specified
minLength &
maxLength attributes of this field, those are optional, but it’s good to have boundary. The
format field is a hint how to show the data in the UI or treat the data.
The
age is of type
number with the
integer format hint. Note that the standard OAS formats are
float &
double for type
number, and
int32 &
int64 for type
integer. The attributes
minimum &
maximum define the range within which
age must lie.
Defining array:
Our
components/devices.json file looks like below:
{
"type" : "array",
"items" : {
"$ref" : "./device.json"
},
"minItems": 0,
"maxItems": 20
}
The type of the object is
array & the array’s element schema is placed under the
items field. You can define objects inline inside
items or refer to an existing JSON file using
$ref as shown above.
minItems &
maxItems define the limits on the total number of items in the array; these are optional fields, but it’s good to declare limits.
More on Swagger (v2.0 & v3.0 both) data types can be found here.
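A toy validator showing how the minimum / maximum and minItems / maxItems constraints above could be enforced; this is purely illustrative, real validators (e.g. code generated by Swagger Codegen) do much more:

```python
def validate_number(value, schema):
    # Toy check of OAS-style numeric bounds.
    if "minimum" in schema and value < schema["minimum"]:
        return False
    if "maximum" in schema and value > schema["maximum"]:
        return False
    return True

def validate_array(items, schema):
    # Toy check of OAS-style array size bounds.
    if "minItems" in schema and len(items) < schema["minItems"]:
        return False
    if "maxItems" in schema and len(items) > schema["maxItems"]:
        return False
    return True

age_schema = {"type": "number", "minimum": 10, "maximum": 60}
devices_schema = {"type": "array", "minItems": 0, "maxItems": 20}
print(validate_number(25, age_schema))          # True
print(validate_number(9, age_schema))           # False: below minimum
print(validate_array(["phone"], devices_schema))  # True
```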
Defining enum:
Have a look at the
GET /users/{user_id} API. In the parameters section, we have defined a
minimal parameter:
{
"name": "minimal",
"in": "query",
"description": "When set to true, returned user object will have only minimum necessary fields, otherwise verbose object will be returned",
"required": false,
"type": "boolean",
"enum": [true, false]
}
The parameter identifies whether the object representation in the API response should be minimal (with only the strictly necessary data) or a complete representation. It’s a boolean field, treated as false by default. The allowed boolean values are defined as an
enum list; the values inside the
enum field must match the declared type of the parameter, which is
boolean in this case.
There is another way to define an enum: using a vendor extension. Take a look at the
components/state.json file:
{
"maxLength" : 50,
"minLength" : 1,
"type" : "string",
"example" : "Karnataka",
"x-enum" : [ {
"value" : "KA",
"description" : "Karnataka"
}, {
"value" : "AP",
"description" : "Andhra Pradesh"
}, {
"value" : "TN",
"description" : "Tamilnadu"
}, {
"value" : "MH",
"description" : "Maharasthra"
} ]
}
Since our API supports only a limited number of states, for demonstration purposes we have made state an enum.
x-enum is a vendor extension supported by Swagger. It defines enum values as JSON objects with a value & description, making the meaning of the enum constants clear. The enum’s value is of type
string here, with a maximum length of 50.
More on enum can be found here.
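Both enum styles boil down to a set of allowed values; an illustrative helper that extracts them (the x-enum entries mirror state.json above):

```python
state_schema = {
    "type": "string",
    "x-enum": [
        {"value": "KA", "description": "Karnataka"},
        {"value": "AP", "description": "Andhra Pradesh"},
    ],
}

def allowed_values(schema):
    # A plain "enum" is a bare list; the x-enum vendor extension wraps each
    # value with a description, so only the "value" field is extracted.
    if "enum" in schema:
        return set(schema["enum"])
    return {entry["value"] for entry in schema.get("x-enum", [])}

print(allowed_values(state_schema))             # a set like {'KA', 'AP'}
print(allowed_values({"enum": [True, False]}))  # {True, False}
```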
Now let’s see how Swagger UI shows the API documentation:
Go to the browser & hit the URL where Swagger UI is running (on my machine, the localhost port mapped in the docker run command). Once the page loads, put the URL of the Swagger specification file served by the local file server into the top bar text box & click on the ‘Explore’ button.
In the above figure,
HTTPS appears as the scheme because in our API spec we defined
schemes as
["https"]. The
users section lists all APIs tagged with
users; since we used the same
users tag for both APIs, they appear in the same section. The
Models section shows all the API models / resources defined under the global
definitions section; since we defined only
UserBasicDetails in the
definitions section, that is what appears here.
Once you click the green-bordered
Authorize button, it will list all available authorization schemes that we defined under the
securityDefinitions section.
Let’s click on the green colour
POST /users API section, it will expand & look like:
The green section shows the
HTTP verb / operation at the top (here it’s
POST); in the next section, the
description &
externalDocs data is rendered. In the
Parameters section of the page, all
body,
header &
query parameters are shown; a red asterisk (*) means the parameter is required. The supported content type of the request body is shown as a drop-down just below the
body section.
Let’s click on the
Try It Out button on the right side. We see the following UI:
The UI opens an editable text area for the
body & text fields for other applicable parameters like
header &
query (not shown here since it’s a POST API). You can insert proper data & click on the blue
Execute button; the UI will hit the server described in the specification file & show the response. This UI can be used to test APIs during development. Click on the red
Cancel button to go back to the previous UI (image 2).
Now in image 2, click on the lock symbol (shown at the top right corner of image 2); it will show you the authorization schemes applicable for this API. The applicable schemes come from the
security section, either defined inside the API operation or inherited from the global / root level. We overrode the
security section inside this API to use
OAuth2, so the authorization UI shows only the OAuth-related text boxes. Put proper data if you want to test the API, & make sure the
OAuth URLs described in the specification file work.
Close the UI & come back to image 2. Scroll down to the
Responses section, which shows all responses defined in the specification file with the example values provided in the spec. If no example value is provided for a field, the type of the field (like
string or
integer) becomes its placeholder value.
In the above image, the left column corresponds to the
HTTP status code as described in the
responses section of the API spec; the right-side drop-down shows the corresponding content type of the response object, which comes from the
produces section of the spec file. Just above the black section,
Example Value appears in bold, meaning that the black response body section shows a sample response; click the
Model link next to it & the models associated with the response will be shown.
After the
body section, all
headers are shown with their
description &
type. In our API spec, we are returning the
X-RateLimit-Limit &
X-RateLimit-Remaining headers; you can return anything.
The
400 error section has the following model & example value:
In a similar way, the
Try It Out UI of the
GET /users/{user_id} looks like:
Since this is a
GET API, there is no
body (request body) parameter in the
Parameters section, only
path,
query &
header parameters are there. You can see a drop-down containing
true /
false values for the boolean field
minimal, these values come from the enum defined for the
minimal query parameter in the specification file.
You now have a fair idea of how Swagger documentation looks from a UI perspective; once you install Swagger locally or use SwaggerHub, you can play around with the UI & explore more.
OAS v3.0 specification of the same API:
Basic structure of OAS v3.0 specification:
This is what the OpenAPI Specification v3.0 version of our API specification looks like:
OAS v3.0 has made some changes to the structure of the file.
No
schemes,
basePath or
host parameters are used to describe the server address or API base URL in OAS v3.0; instead, the following is used:
"servers": [
{
"url": ""
}
]
So instead of one, you can accommodate multiple servers.
OAS v3.0 emphasises re-usability, because multiple APIs / API operations can share the same parameters, request & response bodies & other metadata. So OAS v3.0 defines a global
components section & puts the following re-usable optional sub-sections inside it:
components:
# Reusable schemas (data models)
schemas:
...
# Reusable path, query, header and cookie parameters
parameters:
...
# Security scheme definitions (see Authentication)
securitySchemes:
...
# Reusable request bodies
requestBodies:
...
# Reusable responses, such as 401 Unauthorized or 400 Bad Request
responses:
...
# Reusable response headers
headers:
...
# Reusable examples
examples:
...
# Reusable links
links:
...
Any model residing in this section can be accessed like:
"$ref": "#/components/requestBodies/YourComponent"
or
"$ref": "#/components/examples/YourComponent"
If you plan to use multiple files describing the models, then you can access as shown already earlier like:
"$ref": "relative/path/to/your/component_file"
OAS v3.0 uses the
parameters section inside an API operation to describe
path,
query,
header &
cookie parameters. The cookie parameter is newly introduced in OAS v3.0. The
body type parameter is no longer supported in this section; for the request body, a new section called
requestBody is introduced inside the API operation section, & this new section can take text-based request body data as well as form data.
More on request body here.
There is no
securityDefinitions in OAS v3.0; it has been renamed to
securitySchemes & moved under the global
components section. A new security scheme is introduced as well: a cookie-based scheme. It’s described as below:
"CookieAuth": {
"type": "apiKey",
"in": "cookie",
"name": "SESSION_ID"
}
Like all other security schemes, it also can be used in global or API operation level like below:
"security": [
{
"CookieAuth": []
}
]
Here the name inside the
security objects should exactly match the name of the security scheme described inside
securitySchemes.
There have been some changes in how the basic auth scheme is defined:
type: basic has been replaced with
type: http, scheme: basic. All
HTTP-based security mechanisms, like basic or bearer-token auth, have been moved to type
http. This is how the
basic HTTP mechanism is described now:
"BasicAuth": {
"type": "http",
"scheme": "basic"
}
More on authentication here.
Let’s see the OAS v3.0-specific changes in the
POST /login API:
The request body section is defined quite differently in OAS v3.0:
"requestBody": {
"content": {
"application/json": {
"schema": {
"oneOf": [
{
"$ref": "#/components/requestBodies/EmailLoginRequest"
},
{
"$ref": "#/components/requestBodies/MobileLoginRequest"
}
]
}
}
}
}
The actual request body structure is defined inside the corresponding content type like
application/json,
text/plain or similar; all these content types are contained inside
content. This makes content negotiation very clear to describe: there is no need for the
consumes array any more; you are free to choose content types & their corresponding bodies & describe all of them as above. OpenAPI v3.0 provides support for validating the request body against a collection of schemas; the keywords
oneOf,
allOf &
anyOf are built for this purpose. These keywords take a list of schemas & check whether the request body matches them accordingly.
oneOf checks whether the given request body matches exactly one of the schemas in the list,
anyOf checks whether it matches any of them, &
allOf checks whether it matches all of them. So in our example, the request body of the
/login API must match exactly one of the provided schemas: either
EmailLoginRequest or
MobileLoginRequest.
More on
oneOf,
allOf,
anyOf support here.
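A deliberately simplistic sketch of the oneOf semantics (real validation compares full JSON Schemas; here a body "matches" a schema when its keys equal the schema's required keys):

```python
def matches(body, schema):
    # Toy structural check: the body's keys must equal the schema's required keys.
    return set(body) == set(schema["required"])

email_schema = {"required": ["email", "password"]}
mobile_schema = {"required": ["mobile", "otp"]}

def one_of(body, schemas):
    # Valid only if exactly one schema in the list matches.
    return sum(matches(body, s) for s in schemas) == 1

print(one_of({"email": "a@b.c", "password": "x"}, [email_schema, mobile_schema]))  # True
print(one_of({"email": "a@b.c"}, [email_schema, mobile_schema]))                   # False
```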
OAS v3.0 introduced callback support. The
callbacks section for the POST /login API is defined as below:
"callbacks": {
"loginEvent": {
"{$request.body#/callback/url}": {
"post": {
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"message": {
"type": "string",
"example": "Login happened, please process the event."
}
}
}
}
}
}
}
}
}
}
Our
/login API takes a callback URL; when some user logs in, we call that callback URL with appropriate data to inform about the login event (this example is for demonstration purposes). Inside the
callbacks section, we define an event name called
loginEvent; you can choose any name. The event defines which callback URL will be called & with what data. The expression
{$request.body#/callback/url} signifies that our request body has a section called
callback & under that section a
url key exists. This expression is evaluated at run time & retrieves the actual callback URL passed in the request body through that key. The
post section under the callback URL describes how to form the request body when invoking the callback URL.
More on callback & run time expression evaluation here.
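The run-time lookup behind {$request.body#/callback/url} can be sketched as a tiny JSON-pointer walk (a simplified, illustrative evaluator, not the full OAS expression grammar):

```python
def eval_runtime_expr(expr, request_body):
    # '{$request.body#/callback/url}' -> follow '/callback/url' into the body.
    pointer = expr.strip("{}").split("#", 1)[1]
    node = request_body
    for part in pointer.strip("/").split("/"):
        node = node[part]
    return node

body = {"callback": {"url": "https://client.example.com/hooks/login"}}
print(eval_runtime_expr("{$request.body#/callback/url}", body))
# https://client.example.com/hooks/login
```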
Let’s consider the
responses section for
/login API:
"responses": {
"200": {
"headers": {
"Set-Cookie": {
"description": "Session key is set in cookie named SESSION_ID",
"schema": {
"type": "string",
"example": "SESSION_ID=hfuy8747b7gb4dgy466t46; Path=/; HttpOnly"
}
}
},
"description": "Successful login",
"content": {
"application/json": {
"schema": {
"$ref": "../../components/user.json"
},
"examples": {
"ex1": {
"value": {
"id": "90d640ab-548a-4a72-a89e-b86cdf4f1887",
"gender": "Male",
"name": {
"title": "Mr.",
"first": "Kousik",
"last": "Nath"
},
"phone": "140000",
"cell": "9090909090"
}
},
"ex2": {
"value": {
"id": "4f7386f6-3c89-11e9-b210-d663bd873d93",
"gender": "Female",
"name": {
"title": "Mrs.",
"first": "XYZ",
"last": "ABC"
},
"phone": "111200",
"cell": "9090909090"
}
}
}
}
},
"links": {
"GetUserById": {
"$ref": "#/components/links/GetUserById"
}
}
},
...
...
}
In the above code snippet, we return a
"Set-Cookie" header so that, for cookie-based authentication, this cookie can be used until it expires.
There is no
produces section in this specification file; instead, content negotiation is made simpler by putting the response schema under a particular content type like
application/json, inside
content, mapped to an individual response code like
200 in the above snippet. You can use as many content types as you need, along with their content descriptions.
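A minimal sketch of what this content negotiation could look like on the server side (the media-type handling is simplified):

```python
import json

def render(body, accept):
    # Pick a response representation based on the request's Accept header.
    if accept == "application/json":
        return json.dumps(body)
    if accept == "text/plain":
        return str(body)
    raise ValueError("415 Unsupported Media Type")

print(render({"id": 1}, "application/json"))  # {"id": 1}
print(render("pong", "text/plain"))           # pong
```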
Examples can be added to parameters, objects, properties etc to make the API specification clear, as examples describe what values a field can take. In OAS v3.0, the
example support is enhanced: you can now use
examples, a JSON object of named examples. All keys in this object are distinct, & each maps to an Example Object holding an example value. You can define example schemas under the global
components/examples section & re-use them by referring to them with
$ref.
Links are one of the new features of OpenAPI 3.0. Using links, you can describe how various values returned by one operation can be used as input for other operations. This way, links provide a known relationship and traversal mechanism between the operations. The concept of links is somewhat similar to
hypermedia, but OpenAPI links do not require the link information to be present in the actual responses.
We have defined a links section under the global components section:
"links": {
"GetUserById": {
"description": "Retrieve the user with GET /users/{user_id} API using `id` from the response body",
"operationId": "users.get",
"parameters": {
"user_id": "$response.body#/id"
}
}
}
We have also defined links in POST /users. So once a new user is created through POST /users, or a user logs in through the POST /login API, we expose the operation to retrieve the created / logged-in user through the GET /users/{user_id} API. We have defined a GetUserById field inside the links section which maps to a JSON object declaring which operation we are going to expose as a hypermedia link. The operationId here, users.get, points to the GET /users/{user_id} API. Since both APIs are defined in the same specification file, we can use operationId; if the APIs were defined in different specification files, we would have to use operationRef instead (more details in the links below). The parameters section inside the GetUserById object describes what parameters need to be sent while calling the exposed API; in this section, we basically compute any parameter or request body that has to be sent to the exposed API. The expression $response.body#/id retrieves the id field from the current API response body at runtime.
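To make the runtime expression concrete, here is a small Python sketch of how a client might resolve $response.body#/id against a login response before calling the linked GET /users/{user_id} operation. This helper is purely illustrative; it is not part of the OpenAPI specification or any particular library:

```python
def resolve_link_parameter(expression, response_body):
    """Resolve a minimal runtime expression such as
    '$response.body#/id' against a parsed JSON response body."""
    prefix = "$response.body#"
    if not expression.startswith(prefix):
        raise ValueError("unsupported runtime expression: %r" % expression)
    value = response_body
    for part in expression[len(prefix):].strip("/").split("/"):
        value = value[part]   # walk the JSON pointer one segment at a time
    return value


login_response = {"id": "90d640ab-548a-4a72-a89e-b86cdf4f1887", "gender": "Male"}
user_id = resolve_link_parameter("$response.body#/id", login_response)
print(user_id)  # the value a client would substitute into GET /users/{user_id}
```

A real client would apply this to the parameters map of the GetUserById link to build the follow-up request.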
OpenAPI v2.0 is quite a widespread standard, and organizations are moving to OAS v3.0 slowly. The good part of using such a specification is that it scales the API design process, which is absolutely necessary for big organizations. This post describes API design in both OAS 2.0 and OAS 3.0; the rest is up to you. If you can relate to the benefits of such a design, just go for it.
References:
-
-
-
Python – Backtracking
Backtracking is a form of recursion, but it involves choosing only one option out of the available possibilities. We begin by choosing an option and backtrack from it if we reach a state where we conclude that this specific option does not give the required solution. We repeat these steps by going across each available option until we get the desired solution.
Below is an example of finding all possible orders of arrangement of a given set of letters. When we choose a pair, we apply backtracking to verify whether that exact pair has already been created. If it has not, the pair is added to the answer list; otherwise it is ignored.
def permute(n, s):
    # n is the length of each arrangement; s is the list of letters.
    if n == 1:
        return s
    else:
        return [y + x
                for y in permute(1, s)
                for x in permute(n - 1, s)]

print(permute(1, ["a", "b", "c"]))
print(permute(2, ["a", "b", "c"]))
When the above code is executed, it produces the following result −
['a', 'b', 'c']
['aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
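The comprehension above enumerates every combination by brute force. The sketch below (illustrative, not part of the original tutorial) shows an explicit backtracking version, where each arrangement is built by choosing a letter, recursing, and then un-choosing it; it assumes the letters are distinct:

```python
def permutations(letters):
    """Generate all orderings of distinct `letters` via backtracking."""
    result, current = [], []

    def backtrack():
        if len(current) == len(letters):
            result.append("".join(current))
            return
        for letter in letters:
            if letter in current:      # already used on this path: skip
                continue
            current.append(letter)     # choose
            backtrack()                # explore
            current.pop()              # un-choose (backtrack)

    backtrack()
    return result

print(permutations(["a", "b", "c"]))
# ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
```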
Forum:ED for feature
From Uncyclopedia, the content-free encyclopedia
Can we feature ED's page on us? FreeMorpheme 13:10, 3 July 2006 (UTC)
- No. We've already featured Uncyclopedia is the worst. ~ 15:06, 3 July 2006 (UTC)
- I think we should try, it is a great article. - Sir Real Hamster {talk} {contribs} 15:13, 3 July 2006 (UTC)
- It's a bludy tribute. Plus it is the only funny article ever to roam ED's DB. I'd be tempted to call it ironic but , alas, alike Alanis I seem to be unable to use that word within proper context so i'll just call it superaffentitengeil. - Vosnul 17:11, 3 July 2006 (UTC)
- Just goes to show that both of our sites think the same of each other... ~ 17:21, 3 July 2006 (UTC)
- I do not see how thinking how ED uses raw blatant vulgarity as a substitute for humour compared to Uncyc's ..... well .. ehh... sophisticated vulgar humour is in any way the same . - Vosnul 17:28, 3 July 2006 (UTC)
- Nah, we were gonna do a reskin once, but chron didn't want one. I doubt he'd like featuring the article:28, 3 July 2006 (UTC)
- I move that we continue to follow the blanket policy of "If we ignore other wiki's eventually they will all go away." --Brigadier General Sir Zombiebaron 00:15, 4 July 2006 (UTC)
- I think this article however does show how we are better people than the people at ED, I mean, comparing Uncyclopedia users to Terry Schiavo, do they have any sort of decency! Though over all, the article is hilarious, I guess ED is only funny when they are talking about Uncyclopedia. - Sir Real Hamster {talk} {contribs} 00:54, 4 July 2006 (UTC)
- ..what..the..fuck..ED stole my template! as you can see here Template:NSFWArticle--Maj Sir Insertwackynamehere CUN VFH VFP Bur. CMInsertwackynamehere | Talk | Rate 00:55, 4 July 2006 (UTC)
No, lets not feature that. Infact, QVFDing ED would be a MUCH better idea. --Uncyclon - Do we still link to BENSON? 05:07, 4 July 2006 (UTC)
Check out the pic - I made it yesterday as part of my userpage experiment (which failed miserably as there was simply no viable way to replace Sophia with the new pic). If you guys want to do a ED reskin, it might help. -- Colonel Swordman 13:06, 4 July 2006 (UTC)
Now listen in. There is no point in attempting to do this kinda thing when both the founder AND the chief bureaucrat say NO. I already tried. If you wanta have a look, it's located here. -. 17:10, 4 July 2006 (UTC)
I say completely delete it and act like it doesnt exist. Sortof like the cabal. --! 20:22, 1 August 2006 (UTC)
- I'm hoping it's not featured, as my work IP will probably block the main page for "Pornography" (just like every page under ED's namespace.) Wouldn't want to ruin our rep, now would we? --The King In Yellow (Talk to the Dalek.) 18:22, 2 August 2006 (UTC)
naughty naughty, work is called work for a reason, you dont get paid to piss about and look at ED, Foreshame!.- 19:41, 2 August 2006 (UTC)
- Fortunately, I'm not looking at ED (hence the reference to it being blocked.) --The King In Yellow (Talk to the Dalek.) 19:57, 2 August 2006 (UTC)
You are forgiven. still means you tried though.- 20:07, 2 August 2006 (UTC)
- Not necessarily. He might of accidentaly clicked a link which he didn't realise goes to ED. ~ 12:55, 8 August 2006 (UTC)
They are up to 4200 pages...sure that's nothing compared to our 150,019,509, but if the rest of their pages are as good as their page on us...aw screw it, I can't finish this...--Sir Modusoperandi Boinc! 01:56, 27 November 2006 (UTC)
An ED parody
There ought to be something that could be done with - but what it's desperately missing is a parody of DeadJournal, the web blog for whiny suicidal emo teens. That and someone to drive around in a Waahmbulance, run people over and put them out of their misery once and for all. Oh well.. --Carlb 01:54, 4 August 2006 (UTC)
- Feature the crap out of that. Or else, have the ED article redirect there instead of UN:CVP, Goatse, or even my suggestion of Talk:Euroipods. Crazyswordsman 06:20, 6 August 2006 (UTC)
ED has proven to me that they're jealous of us. Read their Grue article. It proves to me that they're not funny so they just take a jab at us that doesn't work. Crazyswordsman...With SAVINGS!!!! (T/C) 16:06, 8 August 2006 (UTC)
- No, they just think Uncyclopedia is unfunny typically. Not enough shock images and whatnot. AlexJohnc3 Complain F@H Fx2 22:31, 26 November 2006 (UTC)
- Which shows they have no sense of humor. Crazyswordsman...With SAVINGS!!!! (T/C) 01:16, 27 November 2006 (UTC)
Does anyone think things would be a lot more pleasant if we stopped this mutual enmity between us and ED? I mean, we're just different types of humour ultimately, and while we prefer our kind we can accept that their main criticism of us (i.e. that we have one shitload of random superfluous articles which even an entire forest fire year wouldn't get rid of) is valid, can't we?... No, just me? Okay, I'll get back to work... --Sir Jam 08:35, 27 November 2006 (UTC)
Can anyone explain why neither us or Wikipedia has a proper article for ED? Seems somewhat stupid that they have an article on us but not vice versa... --KWild 05:12, 28 November 2006 (UTC)
- I heard somewhere that an uncyc admin wrote the main part of their article on us... Not sure though. Any confirmations? --
13:10, 29 November 2006 (UTC)
- Original article was this, created as a placeholder after one of our admins had created a set of two templates: one there linking to same-topic articles here, one here linking back. The article about Uncyc there was originally a copy of the one about ED here, with everything reversed by user:Elvis. All created long before any sign of any rivalry or conflict. All before ED pulled the links to uncyc as "spam" and made some bizarre claims to Wikia (under the US DMCA) that the "ED has an article on (pagename)" template here infringed their copyright and logo. Ancient history... --66.102.73.165 17:28, 29 November 2006 (UTC)
"public" specifier does not give any restrictions to any class from any package. But "protected" gives a small restriction over public.
Restrictions of protected specifier
Note: Variables and methods of a class are known as members of the class. Constructors are not members of a class. Why? A member can be called on an object, whereas a constructor cannot be. That is, you can access a constructor but you cannot call it like a method; you access a constructor only by creating an object.

In the following program, the Test class contains a protected display() method and a public show() method. The subclass Demo can use both.
class Test {
    protected void display() {
        System.out.println("Hello 1");
    }

    public void show() {
        System.out.println("Hello 2");
    }
}

public class Demo extends Test {
    public static void main(String args[]) {
        Demo d1 = new Demo();
        d1.display();   // works fine
        d1.show();      // works fine
    }
}
In the above code, both Test and Demo happen to be in the same package (the default package). The program works fine even if Demo belongs to another package (try it once).
================ Would you like to dig more into? ================

1. Java Private variable accessibility (Private access specifier)
2. What is Java private and private variable?
3. Can you override private method?
4. Access Specifiers & Access Modifiers
sounds promising...
i'll give it a try and see if it all works well !
thanks
You could create a subclass of PyStringMap that you
supply to PythonInterpreter for the locals dictionary.
Your subclass could put special handling for __setitem__
that detects rebinding of a class reference--or of
"reserved" items that shouldn't be rebound. That said,
I've never tried it and there could be some good
reasons not to do this that I'm not aware of. I
have seen this strategy mentioned before on the list though.
-Clark
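The strategy Clark describes, intercepting __setitem__ on the locals dictionary, can be sketched in modern CPython rather than against Jython's PyStringMap (the class name and API below are illustrative only):

```python
class ProtectedNamespace(dict):
    """A locals mapping that refuses to rebind "reserved" names.

    Illustrative only: a real Jython solution would subclass
    PyStringMap and supply it to PythonInterpreter as the locals dict.
    """

    def __init__(self, reserved):
        super().__init__()
        self._reserved = set(reserved)

    def __setitem__(self, key, value):
        # Allow the first binding, reject any later rebinding.
        if key in self._reserved and key in self:
            raise TypeError("cannot rebind reserved name %r" % key)
        super().__setitem__(key, value)


ns = ProtectedNamespace(reserved=["MO", "LD"])
ns["MO"] = object()       # first binding succeeds
try:
    ns["MO"] = "oops"     # rebinding is rejected
except TypeError as e:
    print(e)
ns["x"] = 1               # ordinary names are unaffected
```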
________________________________
From: On Behalf Of Ruchir Talwar
Sent: Thursday, October 05, 2006 10:44 AM
To: jython-dev@lists.sourceforge.net
Subject: [Jython-dev] Hi
> Hi,
> Im an avid Jython developer and have been following
> this dev list for some time. I have a question
> about Jython which has bothered me for some
> time and I don't know who to ask. It does not belong
> to the users list because I know it is not possible
> at the moment.
>
>
> class A(object): pass
> A=A() # Ive lost class A
> b=A()
> Traceback (innermost last):
> File "<console>", line 1, in ?
> TypeError: call of non-function ('__main__.A' object)
>
> How do I protect 'A' ?
> Can we not provide a hook when binding a variable
> to an object in the locals(), globals()
>
> something like __localbind__(), __globalbind__()
>
> I saw some post somewhere which said it is not
> possible in Python and that it really isn't
> something to be concerned about?
>
> Im currently developing a framework in Jython
> using PyServlet to load different web servlets.
> In those servlets I want to be able to make
> available certain of my own datatypes
> e.g. MO (money), LD (Large decimal) etc
> The problem is that these types can be easily
> overwritten in the current namespace.
>
> I don't think this is a small issue and im definitely
> concerned about failing servlets etc. because a
> programmer has typed it wrong!
>
> I would like to propose that a hook be given to the
> programmer so that this is not overwritten.
>
> Im sorry if this question does not belong here.
> Please bear with me and do tell me where i should
> post it if not here?
>
>
> Thanks
> Ruchir
workflow variables not updating in will_close() handler
- roosterboy197
I have a custom UI I'm presenting in a Python script action. My custom view class has a
will_close() handler where I want to save off some values to two workflow variables I've set up in previous steps. Here's my handler:
def will_close(self):
    clist = u'\n'.join(u'{} :: {}'.format(_, self.wordlist[_]) for _ in self.correct_words)
    wlist = u'\n'.join(u'{} :: {}'.format(_, self.wordlist[_]) for _ in self.wrong_words)
    workflow.set_variable('correct_words', clist)
    workflow.set_variable('wrong_words', wlist)
The values of correct_words and wrong_words are output using Set File Contents actions to two files in Dropbox, creating them if they don't already exist. The files are being created as they should, but they are empty!
If I add some logging to the will_close() handler, I can see in the console that clist and wlist contain exactly what they should contain.
If I pause the workflow at the first Set File Contents action, I can see that the variable correct_words contains no value.
Any suggestions?
- roosterboy197
OK, well, I was able to save off my data by using editor.set_file_contents() to do it directly, instead of setting the filename with workflow.set_variable() and then using a Set File Contents action, but I'm still curious why the latter doesn't work.
The other articles in the series are
Reusable code can be more or less anything from site-wide functions, application constants or snippets of HTML and server-side mark-up that are used as part of a site's UI template, such as headers, menus and footers. I will start by looking at application constants, then I will cover how to manage common functions before reviewing the templating features offered by Razor.
Site Constants
In the last article, I illustrated how to store the connection string for the database in web.config, which is an XML-based configuration file. The web.config file is the ideal place to store other arbitrary application level constants. The connection string was stored in a node called connectionStrings which you had to create. The name is important because the ConfigurationManager class, the API for accessing web.config contents, looks for it when you reference the ConfigurationManager.ConnectionStrings property. Application level settings are stored in a node called appSettings, and they are accessed via the ConfigurationManager.AppSettings property. The following shows an email address being stored in the existing web.config file:
<?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="siteContact" value="contact@domain.com" /> </appSettings> <connectionStrings> <add name="classified" connectionString="Provider=Microsoft.Jet.OleDb.4.0;Data Source=|DataDirectory|/CLassified_2000.mdb" providerName="System.Data.OleDb" /> </connectionStrings> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration>
Individual appSettings are stored as a key/value pair. If you want to reference the value in code, you do so like this:
@Imports System.Configuration
@Code
    Dim contact = ConfigurationManager.AppSettings("siteContact")
End Code
Notice the Imports statement at the top of the file - this saves you from having to use the fully qualified namespace to reference the ConfigurationManager class as discussed in my previous migration article.
Site Wide Procedures
Any experienced classic ASP developer soon accumulates a set of common custom utility procedures. These may be generic to all sites or they may only be relevant to the current project. Some of them might perform operations and return values (functions) while others might be responsible for tasks that don't return values like rendering snippets of HTML, e.g binding values to dropdown lists and rendering them (subs). There are a number of options available to cater for these.
The App_Code Folder
Sites built using the ASP.NET Web Pages framework are based on the Web Site project template - as opposed to the Web Application project template. Web Site projects can be deployed in exactly the same way as classic ASP sites - you simply copy the site files containing source code to a location that the web server can "see". The site is compiled when the first request is made. Web Application projects on the other hand need to be compiled prior to deployment. Web Site projects can include a special ASP.NET folder called App_Code. Any source code that you place in App_Code is compiled at runtime and is made accessible to any other code in the site. You can place classes and modules in here and they will act in the same way as if you have included them in every page of your classic ASP site. You can also place files containing special Razor constructs - Functions and Helpers in App_Code to achieve the same effect.
Razor Functions
When migrating from a scripting language like VBScript (or PHP for that matter) developers need as much help as they can get. The designers of the Razor engine felt that the change from being able to stick a procedure anywhere in a .asp file to having to declare procedures within classes or modules was one burden too many for developers making the transition from scripting to a strongly typed language. So they came up with a couple of tricks to apparently remove this obligation.
The first uses the Functions keyword which is preceded by the Razor @ symbol. It is terminated with the End Functions statement, and the enclosed code is a standard VB Shared function. The following example is one I use to determine whether a checkbox was checked when a containing form was posted and then return a value that a database bit field will accept:
@Functions
    Public Shared Function OnAsBit(ByVal s As String) As Integer
        Return If(String.IsNullOrEmpty(s), 0, 1)
    End Function
End Functions
You can place this in a vbhtml file in App_Code and then call it in your code by prefixing it with the name of the file. For example, if you name the file MyFunctions, you would call the method as follows:
Dim result = MyFunctions.OnAsBit(Request("check"))
You could add the function to the page that contains the form processing logic. However, if you did that, the function would only be available to that page.
Razor Helpers
The second feature introduced by the ASP.NET Web Pages framework to help with code reuse is called a Helper. It is denoted by the Helpers keyword, prefixed again by the Razor @ sign. Helpers don't return values like Functions. However, they can include intermixed Razor syntax and HTML. Their role is to provide a way of dynamically generating snippets of HTML. They are kind of like classic ASP Subs. Here's a really simple example for illustration. It takes a collection of strings and outputs them to an ordered list:
@Helper OrderedList(items As IEnumerable(Of String))
    @<ol>
        @For Each item In items
            @<li>@item</li>
        Next
    </ol>
End Helper
Again, these should be placed in their own file in App_Code so that they will be available across the whole site. If you named the file that contains this helper Helpers.vbhtml, you would call it within your page in the following manner:
@Helpers.OrderedList(New String(){"Apple", "Banana", "Cherry", "Damson"})
Notice that the name of the file, which at runtime is compiled to the name of a class containing the method, is prefixed with an @ sign. This is because the helper is intended to render output to the browser.
Class Files in App_Code
Razor Functions are useful and serve a purpose in that they ease the scripting developer gently into an Object Oriented (class-based) development framework. However, they are a bit like stabilisers on a bicycle. Sooner or later, you have to consider the grown-up alternative. As I mentioned previously, all procedures and subs have to live in a class or a module (or structure) within VB.NET. When you create a Razor function, the Razor Engine takes the name of the file you put it in and uses that to generate a class to house your function. But there is nothing to stop you creating your own class file to shortcut the process.
Here's an example that features the same method from earlier:
Imports Microsoft.VisualBasic

Public Class FormExtensions
    Public Shared Function OnAsBit(ByVal s As String) As Integer
        Return If(String.IsNullOrEmpty(s), 0, 1)
    End Function
End Class
The file type is a VB Class file. It is available in WebMatrix by selecting All in the Choose A File Type dialog. I have named this one FormExtensions.vb. The resulting method is called as follows:
Dim result = FormExtensions.OnAsBit(Request("check"))
This is the way with Shared methods in VB. You do not need to create an instance of the class that owns the method in order to call it.
Extension methods provide an alternative approach which offers the benefit of reduced typing when calling the method. Extension methods are declared in Modules rather than classes and they are marked with the <Extension()> attribute. Here's the OnAsBit method defined as an extension method:
Imports Microsoft.VisualBasic
Imports System.Runtime.CompilerServices

Public Module StringExtensions
    <Extension()>
    Public Function OnAsBit(ByVal s As String) As Integer
        Return If(String.IsNullOrEmpty(s), 0, 1)
    End Function
End Module
Note that the Function is no longer 'Shared' and that System.Runtime.CompilerServices has been imported. Extension methods are called as if they are native methods of the type that they extend. In this case, the type is String (the type specified as the first parameter of the method):
Dim result = Request("check").OnAsBit()
Class Library
If your procedures are generic and likely to be used across multiple sites, you can create a class library. You won't be able to do this using WebMatrix, but Visual Studio Express offers the facility to generate class libraries. The advantage of creating class libraries is that every time you update them, all the applications that depend on them will be able to take advantage of any new features or bug fixes. The mechanics behind creating a class library and making it available to your site are exactly the same for both C# and VB. I have covered the basics in a previous article: Creating Reusable Components For ASP.NET Razor Web Pages.
Executing Code For All Pages
The original classic ASP site had one include file that was added to all pages. It contained code for setting the connection string and opening a connection, but it also contained code that established whether the current user was logged in (authenticated) or not. Now, there is a membership API that has been developed to work specifically with ASP.NET Web Pages that means you just need one line of code to check if the current user is authenticated. However, for the sake of comparison with the classic ASP site, I have chosen not to implement it in this migration. I have instead replaced the code in the include file with a line-for-line translation to ASP.NET. Here is the original classic ASP code followed by the .NET version:
[clssfd.asp]
<!-- METADATA TYPE="typelib" FILE="C:\Program Files\Common Files\System\ado\msado15.dll" -->
<%
Dim objConn
Set objConn = Server.CreateObject("ADODB.Connection")
objConn.Open "Provider=Microsoft.Jet.OLEDB.4.0; " & _
    "Data Source= C:\Users\MIKE\Desktop\Ch15\classified_2000.mdb"

If Session("blnValidUser") = True and Session("PersonID") = "" Then
    Dim rsPersonIDCheck
    Set rsPersonIDCheck = Server.CreateObject("ADODB.Recordset")
    Dim strSQL
    strSQL = "SELECT PersonID FROM Person " & _
        "WHERE EMailAddress = '" & Session("EMailAddress") & "';"
    rsPersonIDCheck.Open strSQL, objConn
    If rsPersonIDCheck.EOF Then
        Session("blnValidUser") = False
    Else
        Session("PersonID") = rsPersonIDCheck("PersonID")
    End If
    rsPersonIDCheck.Close
    Set rsPersonIDCheck = Nothing
End If
%>
[_pagestart.vbhtml]
@Code
    Dim objConn = Database.Open("classified")
    If Session("blnValidUser") And Session("PersonID") = 0 Then
        Dim strSQL = "SELECT PersonID FROM Person WHERE EMailAddress = @0"
        Dim rsPersonIDCheck = objConn.QuerySingle(strSQL, Session("EmailAddress"))
        If rsPersonIDCheck Is Nothing Then
            Session("blnValidUser") = False
        Else
            Session("PersonID") = rsPersonIDCheck.PersonID
        End If
    End If
End Code
The key to the ASP.NET version is the file name: _pagestart.vbhtml. When a request comes in to your site, the Web Pages framework looks for any files named _pagestart.vbhtml or _pagestart.cshtml anywhere on the route through the directory structure to the page that has been requested. If any are found, their contents are executed. In the migration, the _pagestart file is placed in the root directory, which means that it will be executed for all requests. If it had been placed in a subfolder, only requests for pages in that subfolder (or subfolders of that subfolder) would cause the pagestart file to be executed. Consequently, it is a good place to check that a user is authenticated, for example, when they access restricted pages which have been located in a particular folder.
HTML Templating
I have already looked at Helpers, which offer one way of producing reusable HTML template blocks. The role of the helper is more that of a macro for snippets of HTML. Your existing Include files are more likely to consist of self-contained blocks of reusable code such as headers, footers, menus and so on. The Razor way to deal with these is in two parts: layout pages and partials. A layout page acts as the base template for the HTML of the site. It can be anything from a bare-bones wireframe HTML structure containing placeholders for pluggable content to a more or less complete page with just one area for dynamic content to be displayed.
The layout page is defined by the fact that it has a call to RenderBody() in it. Here is the most simple layout page possible:
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title></title>
    </head>
    <body>
        @RenderBody()
    </body>
</html>
This file will be saved with a name like _layout.vbhtml anywhere in the application folder structure. The leading underscore in the file name prevents it from being browsed directly. The layout page is merged at runtime with any file that has it set as the Layout property. Here's a Hello World page that does exactly that:
@Code
    Layout = "~/_layout.vbhtml"
End Code
<div>Hello World</div>
When HelloWorld.vbhtml is requested, its content is merged with that of _layout.vbhtml and injected into the spot where the RenderBody() call is placed. Note the tilde (~) sign: that resolves at runtime to point to the root of the site folder structure. If you want to specify the same layout page for all pages in your site (or a folder), you can use the _pagestart.vbhtml file to achieve this.
Partial pages are used primarily for pluggable widgets or stand-alone sections of reusable UI. You could use one for a navigation system like this:
<ul>
    <li><a href="~/">Home</a></li>
    <li><a href="~/About">About</a></li>
    <li><a href="~/Contact">Contact</a></li>
</ul>
This can be plugged in to your layout or child page by calling the RenderPage() method:
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title></title>
    </head>
    <body>
        @RenderPage("~/_nav.vbhtml")
        @RenderBody()
    </body>
</html>
You can nest layout pages and even call RenderPage within partial pages, which permits some complex levels of traversal and code reuse. It is worth noting the order in which each file will have its content executed. Code in the page which is requested will be executed first, followed by any code in the layout page. This means that you cannot set variables in the layout page which the host page relies on. They will not have been set at the point that the host page is executed.
6 Comments
- Byron Smith
- Steve
You indicated: "This means that you cannot set variables in the layout page which the host page relies on".
How can I have a vb/cshtml main page that sets a variable, and access that variable from a shared @RenderPage?
- Mike
The RenderPage method also accepts an object that represents data to be passed to it. That data is available in the PageData or Page property. For example, you can do this: @RenderPage("~/SomePage.cshtml", new { colour = "Red" }). In SomePage.cshtml, you can access the value from PageData["colour"] or Page.colour.
- Evan Flink
Great site! I come here often for great articles on Web Pages (and I bought your book). My immediate comment is 'How about links to Parts 1 & 2 of this article?'
Second, 'Are Web Pages directly supported in Visual Studio?' The articles I've found talk about migrating from Web PAges to MVC Views, but not about simply migrating/editing/maintaining a working project under Visual Studio.
Why do this? I developed a browser-based app privately using Web Matrix. Now I'm in a for-hire situation where the house uses Visual Studio. Don't need MVC. Want to just make simple mods to what exists & move on to the next job. VS is causing me pain. Any advice?
- Mike
Yes - I should add some links to the other articles in this short series. Thanks for the suggestion.
I use Visual Studio for developing Web Pages apps all the time. So long as you have VS 2010 SP1 or later, you shouldn't have issues. If you do have problems, perhaps you should post a question with the details to.
- Alexis
I enjoyed your articles.
I'm a former senior classic ASP programmer. I stopped programming in 2007; now I want to start again, and I discovered Razor + WebMatrix as an easy (and incredibly similar to old ASP) way to get going. I was going through some tutorials on the MS website when I saw a comment of yours about WebMatrix probably being discontinued. I read here in this comment that you use Visual Studio for developing ASP.NET Web Pages. I searched online for some tutorials but couldn't find any. Do you have any suggestion about where to find more information about Razor + Visual Studio? Thanks
On 17/5/00 at 10:18 am, bruno.dumon@kh.khbo.be (Bruno Dumon) wrote:
>We are students, and as part of our final project, we have developed an XLink
>processor for Cocoon.
This looks very interesting.
It requires patching and re-compiling Cocoon so that Cocoon sets up Xalan to use namespaces.
Is it possible to compile Cocoon on systems without Java 1.2?
Would it be nice to have this patch turned into a cocoon.properties setting?
Thanks
Jeremy
____________________________________________________________________
Jeremy Quinn media.demon
webSpace Design
<mailto:jeremy@media.demon.co.uk> <>
<phone:+44.[0].207.737.6831> <pager:jermq@sms.genie.co.uk>
On Nov 7, 4:23 pm, RyanN <Ryan.N... at gmail.com> wrote:
> Hello,
>
> I'm trying to teach myself OOP to do a data project involving
> hierarchical data structures.
>
> I've come up with an analogy for testing involving objects for
> continents, countries, and states where each object contains some
> attributes, one of which is a list of objects. E.g. a country will
> contain an attribute population and another, countries, which is a list
> of country objects. Anyways, here is what I came up with at first:
snip
> NAm = continent('NAm')
> usa = country('usa')
> canada = country('canada')
> mexico = country('mexico')
> florida = state('florida')
> maine = state('maine')
> california = state('california')
> quebec = state('quebec')
>
> NAm.addCountry(usa)
> NAm.addCountry(canada)
> NAm.addCountry(mexico)
> usa.addState(maine)
> usa.addState(california)
> usa.addState(florida)
> canada.addState(quebec)
> florida.addCounty('dade')
> florida.addCounty('broward')
> maine.addCounty('hancock')
> california.addCounty('marin')
snip
> so this works but is far more cumbersome than it should be.
> I would like to create an object when I add it,
> so I wouldn't have to do:
> usa = country('usa')
> NAm.addCountry(usa)
>
> I could just do
> NAm.addCountry('usa')
>
> which would first create a country object then add it to a countries
> list
snip

One option is to add the names to a blank object as attributes, using
setattr. Then you can access them in almost the same way... they're just
in their own namespace. Other options would be to add them to a separate
dictionary (name -> object). This example is kind of cool, as well as
nicely instructive.

>>> class Blank: pass
...
>>> blank = Blank()
>>> class autoname:
...     def __init__(self, name):
...         setattr(blank, name, self)
...         self.name = name
...
>>> autoname('fried')
<__main__.autoname instance at 0x00B44030>
>>> autoname('green')
<__main__.autoname instance at 0x00B44148>
>>> autoname('tomatoes')
<__main__.autoname instance at 0x00B44170>
>>> blank.fried
<__main__.autoname instance at 0x00B44030>
>>> blank.green
<__main__.autoname instance at 0x00B44148>
>>> blank.tomatoes
<__main__.autoname instance at 0x00B44170>
>>> blank
<__main__.Blank instance at 0x00B40FD0>

You don't have to call the container object 'blank', of course, or its
class for that matter. I do because that's how it starts out: blank.
Under the hood it's just a plain old dictionary with extra syntax for
accessing its contents.
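The "separate dictionary (name -> object)" option mentioned above might look like this in modern Python (an illustrative sketch, not from the original thread):

```python
class Registry:
    """Keep named instances in a plain dict instead of module globals."""

    def __init__(self):
        self._items = {}

    def add(self, name, obj):
        self._items[name] = obj
        return obj

    def __getattr__(self, name):
        # Fall back to the registry for attribute-style access.
        try:
            return self._items[name]
        except KeyError:
            raise AttributeError(name)


class Country:
    def __init__(self, name):
        self.name = name


countries = Registry()
countries.add("usa", Country("usa"))
countries.add("canada", Country("canada"))
print(countries.usa.name)  # usa
```

Under the hood this is the same idea as the Blank/setattr trick, but the registry object owns the dict explicitly and can validate or reject names in add().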