Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
ETHRegisterControllerV2
New Features:
Referral for Reveal and Renew domains.
Batch Commit, Reveal, and Renew.
Payment in the commit stage with extra tips to attract the registrar Dapp to reveal the name(s), covering gas costs.
You can read more about all we did in this medium article
Since we changed a lot from the previous version, we created a V2 for the new files
ETHRegistrarControllerV2.sol
IETHRegistrarControllerV2.sol
TestEthRegistrarControllerV2.sol
These features will reward front-ends (registrar Dapps), increase domain registration, and enlarge the community, creating third-party services for the ENS ecosystem.
By implementing the referrer as an option in the register and renew functions, the protocol will provide commission when executing the service and share a fee previously settled by the DAO, with the referrer.
Front-end Dapps will be built to profit on the refer share. And furthermore can be done with the payment with commitment, which can incentivize a relation of trust between the final user and front-end Dapp, enabling names to be purchased with a better UX, by a single transaction.
To save even more in gas costs, the batch feature will vastly increase the intention of acquiring new names, since the gas cost will rapidly decay when purchasing more than 1 name.
Has this passed a DAO vote yet?
The last time these changes were mentioned on the forums I asked this and was told it would still require a DAO vote.
While the functionalities may have benefit for some.
Personally I cant see how referral fees help anyone but influencers with huge followings - It may even be used to scam said followers and exploit/game the referral system in other ways.
I think frontends are capable of setting their own "referral fees" much easier by just tacking it on to the msg.value based on their wants how we do at ens.vision - this allows a competitiveness among different frontend builds as well creating / finding a true market value of such "fees"
It feels like it's adding unnecessary complications to the core contracts which could be abstracted by third parties who are interested in these features them selves and gives third parties something to build.
For these reasons I think you should organize a proper DAO vote on the individual features / changes you want to make before attempting to merge to main.
Has this passed a DAO vote yet? The last time these changes were mentioned on the forums I asked this and was told it would still require a DAO vote.
https://www.ensgrants.xyz/rounds/1/proposals/16
Some other benefits are the gas savings in these new features as shown here
Outstanding work! I'll give this a more detailed review soon, but I had one initial thought about the tips/referral mechanism that bears thinking about. What if, instead of maintaining a mapping to individual commits for pre-payment, we keep a per-account balance that can be used both to fund registrations and for referral fees? A commit-with-fee can fund this account, and referral fees can be added to the users' balance, while a reveal consumes any balance in the user's account, sending the remainder back to them so they don't have to manage withdrawals.
This will require keeping a separate counter of accumulated fees that can be withdrawn to the DAO, but I think it should still work out lower gas-cost than sending the referrer their fee in each TX, and simplifies the UX and the code.
Has this passed a DAO vote yet? The last time these changes were mentioned on the forums I asked this and was told it would still require a DAO vote.
https://www.ensgrants.xyz/rounds/1/proposals/16
Some other benefits are the gas savings in these new features as shown here
This is not a DAO wide vote though?
This is a small grant. Which you were awarded to research and develop.
As far as I'm aware @Arachnid core contract / protocol changes still require an official DAO wide vote?
It almost seems like trying to side-step the DAO by attempting to merge this here without bringing it to the DAO forum and putting forward a full vote via snapshot etc.
Awesome stuff!
There are a few changes we made post audit, so this will need to be merged into your V2 code as well. One example is:
https://github.com/ensdomains/ens-contracts/blob/master/contracts/ethregistrar/ETHRegistrarController.sol#L308
Please have a look at the current .eth controller contract for other additional changes. I will also find some time to take a closer look
@Arachnid "What if, instead of maintaining a mapping to individual commits for pre-payment, we keep a per-account balance that can be used both to fund registrations and for referral fees?"
I agree. This is actually something I have disagreed on implementation with @Alextnetto. Initially we had agreed to have a general balance for everything: fees, referrals, tips, funds, etc, but in the end the team argued that it was actually more costly in terms of gas to save the new balance than just to send the ether directly. As they were optimizing for gas I let it pass, but I'm still curious about why that would be the case.
Outstanding work! I'll give this a more detailed review soon, but I had one initial thought about the tips/referral mechanism that bears thinking about. What if, instead of maintaining a mapping to individual commits for pre-payment, we keep a per-account balance that can be used both to fund registrations and for referral fees?
Here is why we send the ether directly
Here is why we send the ether directly
Did you test this in the case where the accounts already have balances? Setting a storage slot from zero to nonzero costs 20k gas, but changing its value if it's already nonzero costs only 5k.
Heyy friENS, what's missing for getting this for a voting on DAO, can I do something to help, review, modify?
|
GITHUB_ARCHIVE
|
Sure. Just do it, and wait until one of your client's competitors' webmasters sees what's going on and reports it to Google.
There was a site in my niche that was doing all sorts of dirty tricks, and everybody was complaining about it.
Finally, I emailed Google about it, and they were gone in two days.
Haven't seen them since.
Depending on what you mean by hide.
Use alt or title.
Name an image as the part #
Use an anchor as part#
Otherwise tell whoever? stop being daft.
The straightforward answer to "When is hidden text permissible?" is "Only when a user action makes that text render visibly on the web page." So, white on white does not qualify. In fact, anything you want to be searchable really should be visible from the get-go.
The odd thing here is that a competitor who is already a proven thief is very likely to use your source code anyway, not the visible page. So its not at all worth any risk of a Google problem, in my view.
I am a specialist in hidden text.
Since I discovered in the year 2000, that Google does not index meta tag keywords, I used a scrollable 60 pixel height DIV, 300 pixel width.
Above the fold a search box
Below the fold the keywords
No problem until now.
2005, I optimized my main site and reduced the height of the DIV to 35 pixel height. This was to small.
I received middle December 2005 an email, that this site will be deleted from Google index. I corrected the problem and was some weeks later again in the index.
Just make certain that the user action that makes the text visible doesn't rely on browser artifacts for visibility.
Am I correct jj?
Do not hide text, its not worth the ban! If someone steals his part numbers in the future, file a DMCA complaint and get the other site banned.
Q: When is hidden text permissible?
A: When you are a large corporate site!
|Q: When is hidden text permissible? |
A: When you are a large corporate site!
Happened Jannuary 2006
would you say one week was a standard penalty?
|would you say one week was a standard penalty? |
Maybe I would have achieved the same short time without christmas holidays.
Thanks for all your advice folks.
Putting it in an alt tag won't work as it's a dynamically driven shopping cart without alt tags and with over 4000 products. I'd rather clean toilets for a living than adding those all separately.
I guess the only option is to display the part numbers and grit one's teeth and hope all competitors have a religious conversion and repent from their evil ways.
i would re-iterate what tedster said:
>>a competitor who is already a proven thief is very likely to use your source code anyway, not the visible page.
which means that if the partnumbers or whatever are indexable then they can also be found by your competitors.
I might be missing something but.......If it's a dynamically created site I take that to mean it's running off a database, MySQL probably? Why does the part number have to be visible to search, aren't you just searching the database anyway to display the results? If not, surely you could just create an advanced search option and label it "search by part number" to bring it up out of the data.
fjpapaleo - You are missing something.
The OP's client wants the part numbers hidden, but wants them available for Google to see so they become searchable results. Thus if someone searches for 5000 series widgets, the client's page will come up in Google but if you go to the page you won't see the actual part number (5000 series)
I'm afraid your client can't have it both ways and like others have said, your client's competitor will find them anyway if were to try and hide them by having them white on a white background.
It would be better to find other ways to battle your unsavory competitor.
Ok sorry, I thought the term "searchable" was referring to the user. Yeah, there's no way around that. Any competitor who really wants the part number will just look at the source code anyway. Definitely not worth taking a chance on using hidden text.
Shoot the "ecommerce tech person". That should fix your problem.
Sorry. mistype, I meant to say "educate the ecommerce tech person"
[edited by: Quadrille at 1:06 am (utc) on Jan. 14, 2007]
|
OPCFW_CODE
|
As a substitute, instructors ought to strategy peer critique as an possibility to teach these techniques and for pupils to observe them. For recommendations on how to manage and run peer critique in your system, see “Organizing and Guiding In-Class Peer Evaluate. “How Do College students Respond?Many instructors who have included peer critique into their classes report considerably less than enjoyable success.
In simple fact, it is pretty frequent to find that, when asked to take part in peer critique, pupils hurry through the peer-evaluation process and give their friends only vaguely beneficial responses, these kinds of as “I preferred your paper,” or “Good work,” or “Very good paper, but a several components need more operate. ” Furthermore, quite a few students seem to dismiss peer-reviewers’ mla format works cited responses on their composing. There are quite a few probable factors powering these best essay rewriter kinds of responses:1. Quite a few learners experience unpleasant with the task of obtaining to pronounce a judgment on their peers’ creating . This pain may be the final result of their maturity degree, their want not to hurt a peer’s feelings (probably produced a lot more acute by the point that they are nervous about possessing their peers read through and decide their own composing), or basically their inexperience with offering constructive criticism on a peer’s operate. A vaguely optimistic reaction lets them to keep away from a socially unpleasant problem and to produce an environment of mutual assist (Nilson 2003). 2.
- Attributes of the Obtain Research Report On the internet
- Request as “do my essay” to accept burden out of
- Exactly What Do I Get While I Get yourself a Exploration Pieces of paper?
- Custom made Thesis Composing At Will
- Ways To Be Assured That This Business is the correct one to produce My Essays i believe?
- Rank well 1 Made to order Document Crafting Company
If students are not supplied crystal clear steering from their instructors, they may well not know how to remark on 1 another’s creating in a precise and constructive way. In addition, it should really be observed that students might not recognize how to remark on their peers’ producing for the reason that in excess of the years they have not obtained handy feedback from instructors who have graded their papers. 3.
Spend money on Specialized Essay
Some instructors request their students to appraise their peers’ producing using the same conditions the teacher utilizes when grading papers (e. g. , excellent of thesis, adequacy of help, coherence, and many others. ). Undergraduate college students frequently have an insufficient comprehension of these criteria, and as a result, they either overlook or inappropriately implement these conditions in the course of peer-evaluation periods (Nilson 2003). 4. Quite a few learners do not perceive comments from peers as applicable to the course of action of writing a paper for a program. Specially at the beginning of their undergraduate work, college students are most likely to suppose that it is only the instructor’s feed-back that “counts. “5. Even when they acquire very seriously responses offered by their peers, learners generally do not know how to integrate that comments when they revise their papers. Key Procedures. 1.
Detect and instruct the expertise essential for peer assessment. As you are organizing your system, make a listing of the expertise that learners need to be mastering and placing into follow when taking part in peer evaluation.
These could possibly consist of looking through abilities (discerning a writer’s key point, locating critical points of aid or suitable facts, etc. ), writing abilities (writing clear, particular opinions and issues), and collaboration capabilities (phrasing critiques in a descriptive, constructive way). Articulating what you see as the main abilities included in peer overview will enable you establish a coherent system for integrating peer evaluation into your class and will make much more distinct the distinct guidance your learners will want as they master how to review a peer’s paper and how to use the feedback they acquire through peer overview.
|
OPCFW_CODE
|
We have had an IOS app developed by a software company, however we aren't sure how complete it is and if it's eligible to be uploaded to the Apple App Store. We need some help to review our code (on Github) to see if the app is functional and if we can go ahead and submit to Apple for approval. The app is very basic, and just has a few buttons that allow the user to call us, visit our website, or submit pictures for items they need to order, and a couple others.
Bu iş için 32 freelancer ortalamada $162 teklif veriyor
How are you? Let's work closely with each other. Let's work closely with each other. I can satisfy you. Hope to contact with me. Thanks.
I know Warnings and Optimizations are for a developer to fix if he/she chooses to fix. In this case, I have a good code review practice to have the code generate minimal possible warnings. I was able to resolve many wa Daha Fazla
Hi, Did you test your app using simulator or device? I can help you to build update for you and upload into iTunes to get the approval.
Hi There, Thanks for your valuable time. :-) Please check below points : 1. We have had an IOS app developed by a software company, however we aren't sure how complete it is and if it's eligible to be uploaded Daha Fazla
Hello, I checked the job description and I'm interested in your job. I have huge experience in iOS app development for over 6 years. Please check these apps. [login to view URL] Daha Fazla
Hi. This is Qi-feng Jin from China and I am an experienced iOS app developer. I can provide you fast/easy daily communication in English to keep you updated about the project progress and for better understanding a Daha Fazla
Hi, I would be happy to both review your code and run it on my device and profile it to check it's performance and to offer a list of improvements that could be made to the app if needed. I can do this tomorrow for Daha Fazla
I can design and develop app for your requirement. Lets discuss more. Try me! I have loads of experience in mobile apps development. I am having 13 years of experience in iPhone, Objective-C, Swift, Cocoa, iOS, An Daha Fazla
Hi, As you can see my profile, I have great working history and strong experience in native iOS app development. I also like well-structured, objective-oriented coding, I always do coding in that way, and I've submitt Daha Fazla
Dear Sir! How are you? I have been creating high quality and excellent Native iOS and Android apps for 6+ years using Swift, Objectiv-C, Java, Kotlin. Also I am good at in Cross platform Development such as Xamarin a Daha Fazla
Hi, I am iOS Development Expert and have 7 years of Experience . Please contact so we can discuss in details the functionality/Requirement of the App . Thanks
Dear sir. I have just read your project description carefully. As result of reviewed your job, I am sure that I can finish this project of 100% result in short time with the low budget. Thanks for talking time to re Daha Fazla
Hi, There! I am a full stack iOS developer from concept to publish to App Store as you can see my profile. I had published 100+ apps to App Store successfully. I am sure I can do your project within 1 day. Then y Daha Fazla
Hello I have read your job description carefully and I am very interested in your project. As I am a mobile app developer, I have 7 years of experience in mobile app development for both iOS and android platforms. C Daha Fazla
Hi there, I bid on your project because I have been developing native iOS apps for 4 years. Please check my profile to see my skills and past projects. Here is my plan on your project: 1. Check the source code on Daha Fazla
Hello. I have just read your requirements. I have deep experience in skills you need so i can finish your project in a short time. Please go through this link:- [login to view URL] waiting for your Daha Fazla
|
OPCFW_CODE
|
IC engine throttle problems. Successful flights.
Hello guys,
I've had two days of successful PX4 flights with an IC engine plane. We did about 3 hours of flying missions without a problem.
But there are some things that does not work for the IC engine setup so you might tweak them a little in order to be OK.
We use servo for the throttle! This is very important. Because the kill switch you provided, probably working for electric motors, and MCs, does not work properly for the IC engine. When I flip the kill switch, QGC reports this and the servo just freezes at whatever position is at the moment. For the proper kill switch we need a 950 or 1000us to be fed to the throttle servo once the kill switch is on. This guarantees the servo will cut the gasoline and the engine will stop.
If at 950us is our throttle cut then our idle throttle is at 35%. We set FW_THR_IDLE=0.35 and FW_THR_MIN = 0.4. But in reality we don't have the 35% idle throttle all the time when armed. May be this idle is for the auto modes. If this is the case then the implementation of idle throttle is wrong. We need this idle throttle the moment we arm in all modes Acro, Manual, Stabilized... all modes no exceptions, because otherwise the engine could not be started, or if we start it in auto mode and switch to manual mode without this idle throttle applied, the engine will stop.
We do our own idle throttle now and kill switch with the RC, given the FW_THR_MIN at 0.4 the plane just executed its missions perfect. We gave some slew to the throttle as well in order to prevent choking the engine.
:) They had 2 failures, we performed 4/4.
PX4 Rules!
@tubeme Thanks for the feedback.
Could you describe what we should change to make it easier for you?
For IC Engine Airplanes and Helicopters the procedure is the same. For throttle we have connected a Servo to CH3, which in its term mechanically is driving the gasoline throttle of the IC Engine.
In its zero position the Servo is shutting the gas to the engine. If we accept this position as 1000us or PWM_MIN. In order for the engine to work and have a good flow of gasoline the servo should be at 35% in our case and no lower. So our Zero throttle in the RC becomes 35% offset to CH3 with a switch that arms the IC engine throttle.
We have a three step arming process.
Arm with Button
ARM with RC command
Switch ON the arming button of the IC engine.
Once the Arming switch is put to OFF, it sends the Servo a 1000us, and the servo cuts the gasoline and the engine stops.
We prefer to have this arming procedure, because the pilot remembers always where the kill switch is.
So in the current implementation of PX4 we have Kill switsh.
In our case it should work in reverse logic. and nothing more. And make sure once the kill switch is ON, it sends predetermined amount of throttle + offset to the throttle Servo. For different engines it is different % throttle that is the minimum throttle. So it should be a parameter that can be changed.
This offset should apply to ALL MODES no exception.
Forgot to mention about the starter.
We need another RC switch for the starter if applicable.
So in our taranis we program SG to be IC engine arming/kill switch and SH for the starter.
@tubeme do you still need this?
Not urgently but it is real scenario. This is applicable to Glow engines BTW. Because the gasoline engines cannot be cut with just throttle cut. We use electronic throttle cut for the gasoline engines. I think there should be some thought on combining the two. In one case for throttle cut we use electronics on some channel, and in the other case we use the throttle cut the style I explained here. It is related to #6916.
Probably with a couple of parameters and the help of mixer editor we could manage combining the two scenarios - Glow engines and gasoline engines. We put both types to planes and helicopters despite helicopters mostly use glow engines. We have prototype helicopter with boxer gasoline engine.
Hey, this issue has been closed because the label status/STALE is set and there were no updates for 30 days. Feel free to reopen this issue if you deem it appropriate.
(This is an automated comment from GitMate.io.)
|
GITHUB_ARCHIVE
|
Container Serialization - Deserialization
Hello, some people would like to serialize containers and transmit them over the wire ... e.g. using zmq .. and deserialize again into containers on the other side.
I think this does not exist yet, right? @MaxNoe I think you wrote Container.asdict() which is already a huge step towards serialization, I think. I am currently looking into serialization into msgpack and back again. Seems to work quite nicely...
As the msgpack people show here: https://github.com/msgpack/msgpack-python#packingunpacking-of-custom-data-type
For serializing custom types, one needs to write these encode/decode functions. custom types includes of course numpy arrays and scalars, as well as astropy.Time and the SubarrayDescription as well as Quantities...
I do not know if serializing to msgpack or to JSON or to BSON or to googles protobuffers is best or so.. and I also do not care, but I see that people want some kind of serialization. So I am looking into this.
I just wanted to open this issue, to inform you and also ask, if there is already some "standard" to follow or some helpers I am not aware of... E.g. Container.from_dict() would be nice...
Containers are pickable, so using pyzmq you can do socket.send_pyobj(container)
Ah right! thanks ... didn't think of pickle at all!
That being said, it might not be the best approach, bet surely the fastest if both the sending and receiving end are python.
json, bson, msgpack or the like would be great for inter-language communication.
Container.from_dict()
Thats basically Container(**dict) right?
Okay so since I looked into msgpack now for an hour or so ... I'll open a little PR for what I have so far.
Thats basically Container(**dict) right?
Also that I was not aware of. ... why was I not aware that containers can be constructed from dicts?
Ah maybe since this is not how we work with them. Instead we construct them once and then "fill" them ...
regarding inter-language ... so for msgpack one needs these custom encoders and decoders ... which will need to be re-implemented for every language ... but they are fairly simple and straight forward, I think.
Also that I was not aware of. ... why was I not aware that containers can be constructed from dicts?
That's nothing special about containers, that's python...
That's nothing special about containers, that's python...
I disagree ... If one does not know Container.__init__ by heart, and only ever sees this usage:
data = SomeContainer()
data.foo = 'bar'
One might simply assume Container.__init__ to be empty or so, but it is not, the relevant part (which I just looked at) is:
def __init__(self, **fields):
for k, v in fields.items():
setattr(self, k, v)
Container.from_dict()
Thats basically Container(**dict) right?
Also it only works if not recursive=True was used.
Also it only works if not recursive=True was used.
Thats someting different, now you are talking about that containter == container(**container.to_dict(recursive=True)), which would be useful, I think.
data = SomeContainer()
data.foo = 'bar'
Really? That is not how it is intended. See for example hillas_parameters:
https://github.com/cta-observatory/ctapipe/blob/423e61e15d2448e04334c33055bb226c454f4d09/ctapipe/image/hillas.py#L155-L166
I disagree ... If one does not know Container.init by heart, and only ever sees this usage:
Container is basically meant to be a dict with attribute access and predefined fields.
Not allowing keyword arguments in the __init__ would maximum surprise.
|
GITHUB_ARCHIVE
|
Issue with AME causing strobing
Description
From MetroReactive on Discord
I made the antimater machine, I put fuel in it, I wanted to see what would happen if i injected 500 at a time, ship started to shake, I panicked, I checked the gravitation device to see if anyone was spamming it, No one was there to my knowledge, I ran back in and I didnt know if it was the machine that did it or if it was me so i put more fuel in it to see, It started to shaked again, I was going to turn it off but then i was killed.
Will update with proper replication steps.
I have tested this on local, and I can reproduce every time. Here are the exact steps I used.
Step 1. Build an AME in Engineering. I used the exact spot MetroReactive told me they used, which is in the AME room.
Step 2. Crank the control unit to 500 input, put in the fuel, and then turn it on.
Step 3. The entire station's power grid will rapidly fluctuate on and off, causing constant shaking from the gravity generator, as well as an extremely loud noise from the gravgen's on/off sounds layering over each other. The lights on the station will also flash rapidly.
I tried it and got this:
[FATL] unhandled: Robust.Shared.Utility.DebugAssertException: Exception of type 'Robust.Shared.Utility.DebugAssertException' was thrown.
at Robust.Shared.Utility.DebugTools.Assert(Boolean condition) in /home/20kdc/Documents/External/space-station-14/RobustToolbox/Robust.Shared/Utility/DebugTools.cs:line 31
at Content.Server.Power.Pow3r.BatteryRampPegSolver.Tick(Single frameTime, PowerState state) in /home/20kdc/Documents/External/space-station-14/Content.Server/Power/Pow3r/BatteryRampPegSolver.cs:line 129
at Content.Server.Power.EntitySystems.PowerNetSystem.Update(Single frameTime) in /home/20kdc/Documents/External/space-station-14/Content.Server/Power/EntitySystems/PowerNetSystem.cs:line 190
at Robust.Shared.GameObjects.EntitySystemManager.TickUpdate(Single frameTime) in /home/20kdc/Documents/External/space-station-14/RobustToolbox/Robust.Shared/GameObjects/EntitySystemManager.cs:line 232
at Robust.Shared.GameObjects.EntityManager.TickUpdate(Single frameTime, Histogram histogram) in /home/20kdc/Documents/External/space-station-14/RobustToolbox/Robust.Shared/GameObjects/EntityManager.cs:line 103
at Robust.Server.GameObjects.ServerEntityManager.TickUpdate(Single frameTime, Histogram histogram) in /home/20kdc/Documents/External/space-station-14/RobustToolbox/Robust.Server/GameObjects/ServerEntityManager.cs:line 128
(rest omitted for brevity)
It's definitely doing something
$ lua
Lua 5.3.3 Copyright (C) 1994-2016 Lua.org, PUC-Rio> 500 * 20000
10000000
> 0x100000000
4294967296
> 500 * 500 * 20000
5000000000
> 0x100000000
4294967296
>
> AME energy calculations are done in floats and then converted into an int
Oh dear. Ooooh dear.
|
GITHUB_ARCHIVE
|
Have often named "applications of quantum physics" existed before the theory itself?
Many "applications" are linked to quantum physics - let it be the laser, LED's, transistors, MRI scans, atomic clocks, electron microscope or CCD-detectors in digital cameras. I wonder wether these applications really only got developed starting from quantum theory, or wether quantum mechanics only is able to describe them, with the applications having already been there before quantum physics was elaborated as a theory.
All of the technologies you mention were invented after the theory of quantum mechanics was formulated, although the theory has developed and grown in tandem, especially solid state physics
Thank you. Is there a source (book, paper) that elaborates on this that you could recommend?
This translated talk might be of interest: https://www.nature.com/articles/121580a0 . It's from Niehls Bohr just as Heisenberg/Schrodinger formulated their contributions. This can also provide kind of a timeline to center on. You're looking more at "triode", "x-rays" than anything else.
One thing that did exist before QM, and indeed became the motivating case for creating QM, is the light bulb. Max Planck was asked to find the optimum temperature to operate a filament at, to produce the most visible light for the least energy. The spectrum (intensity vs. frequency) of light could not be explained by classical EM theory -- see "Ultraviolet Catastrophe." Planck's result for the real spectrum, plus other results like the photoelectric effect, laid the foundation for QM.
I guess that it depends on what exactly an "application" is. Quantum effects (e.g. rectifying semiconductor junctions, radioactivity, atomic spectrum, etc.) were certainly discovered before their quantum explanation. Arguably, that's what prompted the development of quantum mechanics!
If an "application" is the use of a quantum effect (i.e. an effect that cannot be explained by classical physics --- that's actually a slippery definition*), then there were many. Off the top of my head: crystal detectors (a type of semiconductor diode), gas-discharge lamps (including neon signs), and radium self-luminous paints.
People regularly tinkered with stuff they could not explain. In fact, for most of human history, that's all inventors did!
*Arguably, most (all?) chemistry is quantum effects, and arguably any chemical reaction is a quantum application. People have used chemical reactions for as long as there have been people.
I would suggest as a definition, a classically measurable relationship between classically measurable states that contradicts the predictions of classical physics, but can be correctly predicted by QM.
How would that apply to situations where classical physics makes no prediction? What's the classical prediction for atomic spectra (classically measurable with a prism) or most things related to chemical reactions?
@inmaurer no prediction mapping from state 1 to state 2 is the same as a prediction that state 2 will not regularly follow from state 1. Most of chemistry, however, can be done by treating all quantum interactions as taking place in a black box, and measuring macroscopic qualities of systems, such as mass, temperature, and enthalpy, provides all you need to know to predict the behavior of chemical systems using classical thermodynamics, often trivial thermodynamics.
I'm not sure whether atomic spectra would qualify as a classically measurable quantum effect or not under my definition, or as another unexplained quality of a material we can identify by its macroscopic qualities, but for whose inner workings we have no model.
“Predicting” (the word in your definition) and “measuring macroscopic qualities” are two very different things. How does classical physics predict, say, the heat capacity of matter (a quantity frequently needed for calorimetry)? You can certainly measure it in various ways, but that’s not a prediction. (Boltzmann’s classical model didn’t produce that accurate results and was arguably proto-QM to boot. Einstein and Debye’s models were certainly QM.)
|
STACK_EXCHANGE
|
Hard drive weirdness.
rhetoric102 at iowatelecom.net
Wed Jul 23 14:09:57 UTC 2008
My guess would be that the Promise RAID controller preempts the system
bios and sees hda residing on its own strap. Secondary status is
assigned to the ide device on the mainboard controller.
That's only a guess, but I bet it ain't far off the mark. Have you tried
doing away with the ide drive and just running the Promise controller
and the two RAID drives?
On Wed, 2008-07-23 at 09:56 -0400, David Gibb wrote:
> I'm experiencing some weird behaviour with my disk drives, and I was
> hoping someone might be able to give me some hints.
> I have an oldish computer (AMD athlon 850) that I wanted to use as a
> file server. I had a 20GB PATA drive lying about, and I bought a
> promise sata300 tx4 card plus 2 500GB sata drives that were on
> I hooked the 2 sata drives to the card, and the PATA drive to the
> motherboard's controller, and then I installed ubuntu server 8.04 on
> the 20GB drive. Everything went smoothly until I rebooted after the
> install, when it couldn't find the system drive. I played around with
> the boot settings in the bios until I found that none of the 'IDE-X'
> settings worked, but 'SCSI' did. Once ubuntu started up, I found that
> the 20GB pata drive showed up as /dev/sdc, and the 2 500GB SATA drives
> showed up as /dev/sda and /dev/sdb. I found this to be a touch weird,
> but everything seemed to be working so I left it as is.
> I then installed software RAID 1 on the two 500GB drives, then LVM on
> top of them, samba, etc, etc. Everything works great. I still wanted
> to simulate a drive failure, so that I could verify that the RAID
> mirroring was working, and to figure out which physical drive was
> which. Here's when the trouble started.
> When I pulled sata drive #1, I get a blank screen with a blinking
> cursor. No GRUB, no nothing. When I pulled sata drive #2, I get "GRUB
> Hard disk error". When I pull both disks, I get a "System disk not
> found" error.
> I guess my questions are as follows:
> 1) Does anyone know why my PATA drive isn't showing up as /dev/hda?
> 2) I guess I could possibly explain the "GRUB Hard disk error" if the
> absence of a drive caused the drive number to change, but I don't know
> why the absence of the other drive causes a blank screen.
> 3) Any idea how I should tinker with this system so that I can still
> boot it in case one of the raid drives fails?
More information about the ubuntu-users
|
OPCFW_CODE
|
Relation between entropy and min-entropy
I understand that the entropy is the number of bits that can encode a set of messages. However, I don't understand what the min-entropy is and how it is related to entropy.
Let's describe a simple password case: if a password is 100 random bits, is the min-entropy also 100?
If I understand Wikipedia correctly (which is admittely quite difficult), min entropy is always $\leq$ than shannon entropy.
Password entropy is notoriously difficult to estimate and therefore not a good example. It's highly unlikely that a human password can be measured to contain 100 bits of entropy. You'd simply have to dump all maths and guess based on heuristics and star sign. A better example is any electro-mechanically generated non uniform distribution.
There is min-entropy, Shannon entropy, and max entropy. (Plus a few more definitions, but let's only focus on these.) All of these measures are greatest, for a given number of outcomes, when each outcome occurs with equal probability. (In such case all three are equal.)
$$\text{min-entropy} \leq \text{Shannon entropy} \leq \text{max-entropy}$$
Min-entropy describes the unpredictability of an outcome determined solely by the probability of the most likely result. This is a conservative measure. It's good for describing passwords and other non-uniform distributions of secrets.
$$\text{min-entropy} = -\log_2{(p_{\text{max}})}$$
Say you have an algorithm which produces 8 digit numeric password. If the number 00000000 occurs 50% of the time, and the remaining $10^8 - 1$ passwords occur with equal probability, then the Shannon entropy would be about $14.3$ bits, but the min-entropy is precisely $1$, which is $-\log_2{(0.5)}$.
Min-entropy can be associated with the best chance of success in predicting a person's password in one guess.
Shannon entropy is defined to equal $$\sum_{i=0}^n{-p_i\log_2(p_i)}$$ for a probability distribution $p$ with $n$ possible outcomes. Shannon entropy describes the average unpredictability of the outcomes of a system.
It also measures how much information is in a system (on average). Shannon entropy is the smallest possible average file size for a compression algorithm designed specifically with the distribution $p$ in mind.
Max-entropy, called also Hartley entropy, is defined solely on the number of possible outcomes. It is equal to $\log_2(n)$. It's not particularly useful in cryptography or passwords. It's a measure of the number of bits you would need to have to designate one bit pattern for every possible outcome.
The three quantities are always the same for a uniform distribution. This is because $p_i = p_\text{max} = \frac{1}{n}$.
$$-\log_2(p_\text{max}) = -\log_2 (\frac{1}{n}) = \log_2 (n)$$
$$\sum_{i=0}^n{-p_i\log_2(p_i)} = n(\frac{1}{n} \log_2 (\frac{1}{n})) = -\log_2 (\frac{1}{n}) = \log_2 (n)$$
This is why passwords picked from a uniform distribution (using a secure RNG) are said to be stronger than human generated passwords. Humans are biased in which password they choose. (Their password distribution is non-uniform.)
measure are greatest or equal?
@kelalaka Regarding the third sentence? Greatest (maximized) AND equal (to each other).
Yes, the third sentence
As the formulas show, all definitions are independent of the encoding - they only depend on the probabilities and the number of different events with non-zero probability. If only 10 values are possible, it doesn't matter how many bits your encoding actually uses (except that you need at least 4 bits - although no measure will give full 4 bits).
|
STACK_EXCHANGE
|
Building a better product is important for any startup, and building one that scales well after you get your initial traction is often the difference between success and failure. The only thing that makes scaling difficult is if you are bottlenecked on your ability to do an analysis of the data that flows into your system.
- Bringing more engineers onto a team often means more features, but it also means longer timelines for writing code because there are simply more people involved. If this slowdown happens at the wrong time, this could mean missing out on opportunities or losing market share to competitors.
- The best way I have found so far to combat this issue is by spending time upfront sharing knowledge about what data is flowing in through our APIs and how it can be used, so new users on the dev team can quickly pull down particular subsets of data without depending others for guidance or context about what is available in our system.
- Additionally, with this approach, new engineers on the team can hit the ground running when it comes to productionizing their code by knowing exactly which subsets of data they need to pull down in order to do an analysis, without having to waste time figuring out what is available or how it is formatted.
- For example; if you are using Redshift or Postgres, then use ef export and pg_dump to export a CSV file for each table that your application writes records to. Then have one person who has knowledge about each table structure within your DB write a small program that parses through all of these CSV files and generates a metadata index JSON blob per table sampled. This index will use the metadata from the CSV files and output a set of key value pairs for each column in your sampled table.
- Once this is complete, you can now ship these JSON blobs as part of an ETL process or even push them up to S3 or DynamoDb for other engineers on the team to import directly into their own Redshift/Postgres databases.
- The goal here is not to copy large volumes of data around but instead to enable new users on the team with faster access to subsets of the data that they need, without having to bother others about how it might be formatted or structured. This has worked well for us so far and we have been able to achieve 10X speed increases when it comes time to do analysis against our data compared to when we first started the company.
Hopefully, this article series helps other teams achieve similar results, which are often hard to come by especially in challenging startup environments. Now it is time for us to share what more we have learned about how you can more efficiently build scalable systems at scale. Stay tuned!
What formats are supported?
Currently, only CSV is supported. Can the tool import data directly from DynamoDb or S3? Yes, with some caveats.
What happens when you reference a column in your JSON blob that does not exist in the table?
The importer will throw an error and stop importing any further.
Do you validate that all of our CSV columns match up with one of our defined JSON blob keys?
No, this is not currently supported. Can you import data with fewer than two columns in the CSV file?
Yes, but the JSON key names will be assigned by us with specific semantics so it might be confusing to understand what each column represents later on if you do not have all of the original metadata that was included when our importer wrote out your initial blob.
How does this compare to Cascalog/Scalding/Impala etc.?
This differs from most ETL solutions because it operates at a lower level and builds up a schema based on the contents of your dataset as opposed to expecting you to provide them upfront. It also supports exporting metadata for more than just Redshift and Postgres. So with that said, it is different, but not necessarily better.
What are some of the current limitations?
Currently, this is very much a work in progress so there are several limitations including: – Only works on CSV files for now – Writing out large datasets (> 500MB) results in very long run times due to lack of file caching/buffering – No support for nested data types (e.g. JSON inside of JSON) – No column name validation – Here are some additional limitations based on our use cases at Buffer. Redshift only exports up to date columns via COPY if they match an existing column name.
We have been using this tool to ingest our data at scale across a growing number of AWS instances and into a single Redshift cluster. We have found that by having a re-usable standard for how we name columns within our JSON files, it has allowed engineers on the team with little experience with our current infrastructure to be able to jump in and start sending us data without much further instruction. This post only touches the surface as there is still room for improvement as we continue to use this internally at Buffer and more feedback/requests from the community. If you find any bugs or would like to share additional thoughts then feel free to reach out or submit pull requests via Github.
|
OPCFW_CODE
|
PALAMARA Gian Marco
- Theoretical and Computational Ecology, Center for Advanced Studies of Blanes, Blanes, Spain
- Agroecology, Biodiversity, Biogeography, Biological invasions, Chemical ecology, Coexistence, Community ecology, Competition, Conservation biology, Demography, Ecotoxicology, Epidemiology, Experimental ecology, Food webs, Freshwater ecology, Human impact, Interaction networks, Landscape ecology, Ontogeny, Population ecology, Social structure, Species distributions, Statistical ecology, Symbiosis, Taxonomy, Theoretical ecology, Zoology
Linking intrinsic scales of ecological processes to characteristic scales of biodiversity and functioning patterns
The impact of process at different scales on diversity and ecosystem functioning: a huge challengeRecommended by David Alonso based on reviews by Shai Pilosof, Gian Marco Palamara and 1 anonymous reviewer
Scale is a big topic in ecology . Environmental variation happens at particular scales. The typical scale at which organisms disperse is species-specific, but, as a first approximation, an ensemble of similar species, for instance, trees, could be considered to share a typical dispersal scale. Finally, characteristic spatial scales of species interactions are, in general, different from the typical scales of dispersal and environmental variation. Therefore, conceptually, we can distinguish these three characteristic spatial scales associated with three different processes: species selection for a given environment (E), dispersal (D), and species interactions (I), respectively.
From the famous species-area relation to the spatial distribution of biomass and species richness, the different macro-ecological patterns we usually study emerge from an interplay between dispersal and local interactions in a physical environment that constrains species establishment and persistence in every location. To make things even more complicated, local environments are often modified by the species that thrive in them, which establishes feedback loops. It is usually assumed that local interactions are short-range in comparison with species dispersal, and dispersal scales are typically smaller than the scales at which the environment varies (I < D < E, see ), but this should not always be the case.
The authors of this paper relax this typical assumption and develop a theoretical framework to study how diversity and ecosystem functioning are affected by different relations between the typical scales governing interactions, dispersal, and environmental variation. This is a huge challenge. First, diversity and ecosystem functioning across space and time have been empirically characterized through a wide variety of macro-ecological patterns. Second, accommodating local interactions, dispersal and environmental variation and species environmental preferences to model spatiotemporal dynamics of full ecological communities can be done also in a lot of different ways. One can ask if the particular approach suggested by the authors is the best choice in the sense of producing robust results, this is, results that would be predicted by alternative modeling approaches and mathematical analyses . The recommendation here is to read through and judge by yourself.
The main unusual assumption underlying the model suggested by the authors is non-local species interactions. They introduce interaction kernels to weigh the strength of the ecological interaction with distance, which gives rise to a system of coupled integro-differential equations. This kernel is the key component that allows for control and varies the scale of ecological interactions. Although this is not new in ecology , and certainly has a long tradition in physics ---think about the electric or the gravity field, this approach has been widely overlooked in the development of the set of theoretical frameworks we have been using over and over again in community ecology, such as the Lotka-Volterra equations or, more recently, the metacommunity concept .
In Physics, classic fields have been revised to account for the fact that information cannot travel faster than light. In an analogous way, a focal individual cannot feel the presence of distant neighbors instantaneously. Therefore, non-local interactions do not exist in ecological communities. As the authors of this paper point out, they emerge in an effective way as a result of non-random movements, for instance, when individuals go regularly back and forth between environments (see , for an application to infectious diseases), or even migrate between regions. And, on top of this type of movement, species also tend to disperse and colonize close (or far) environments. Individual mobility and dispersal are then two types of movements, characterized by different spatial-temporal scales in general. Species dispersal, on the one hand, and individual directed movements underlying species interactions, on the other, are themselves diverse across species, but it is clear that they exist and belong to two distinct categories.
In spite of the long and rich exchange between the authors' team and the reviewers, it was not finally clear (at least, to me and to one of the reviewers) whether the model for the spatio-temporal dynamics of the ecological community (see Eq (1) in ) is only presented as a coupled system of integro-differential equations on a continuous landscape for pedagogical reasons, but then modeled on a discrete regular grid for computational convenience. In the latter case, the system represents a regular network of local communities, becomes a system of coupled ODEs, and can be numerically integrated through the use of standard algorithms. By contrast, in the former case, the system is meant to truly represent a community that develops on continuous time and space, as in reaction-diffusion systems. In that case, one should keep in mind that numerical instabilities can arise as an artifact when integrating both local and non-local spatio-temporal systems. Spatial patterns could be then transient or simply result from these instabilities. Therefore, when analyzing spatiotemporal integro-differential equations, special attention should be paid to the use of the right numerical algorithms. The authors share all their code at https://zenodo.org/record/5543191, and all this can be checked out. In any case, the whole discussion between the authors and the reviewers has inherent value in itself, because it touches on several limitations and/or strengths of the author's approach, and I highly recommend checking it out and reading it through.
Beyond these methodological issues, extensive model explorations for the different parameter combinations are presented. Several results are reported, but, in practice, what is then the main conclusion we could highlight here among all of them? The authors suggest that "it will be difficult to manage landscapes to preserve biodiversity and ecosystem functioning simultaneously, despite their causative relationship", because, first, "increasing dispersal and interaction scales had opposing
effects" on these two patterns, and, second, unexpectedly, "ecosystems attained the highest biomass in scenarios which also led to the lowest levels of biodiversity". If these results come to be fully robust, this is, they pass all checks by other research teams trying to reproduce them using alternative approaches, we will have to accept that we should preserve biodiversity on its own rights and not because it enhances ecosystem functioning or provides particular beneficial services to humans.
Levin, S. A. 1992. The problem of pattern and scale in ecology. Ecology 73:1943–1967. https://doi.org/10.2307/1941447
Yuval R. Zelnik, Matthieu Barbier, David W. Shanafelt, Michel Loreau, Rachel M. Germain. 2023. Linking intrinsic scales of ecological processes to characteristic scales of biodiversity and functioning patterns. bioRxiv, ver. 2 peer-reviewed and recommended by Peer Community in Ecology. https://doi.org/10.1101/2021.10.11.463913
Baron, J. W. and Galla, T. 2020. Dispersal-induced instability in complex ecosystems. Nature Communications 11, 6032. https://doi.org/10.1038/s41467-020-19824-4
Cushing, J. M. 1977. Integrodifferential equations and delay models in population dynamics
Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-93073-7
M. A. Leibold, M. Holyoak, N. Mouquet, P. Amarasekare, J. M. Chase, M. F. Hoopes, R. D. Holt, J. B. Shurin, R. Law, D. Tilman, M. Loreau, A. Gonzalez. 2004. The metacommunity concept: a framework for multi-scale community ecology. Ecology Letters, 7(7): 601-613. https://doi.org/10.1111/j.1461-0248.2004.00608.x
M. Pardo-Araujo, D. García-García, D. Alonso, and F. Bartumeus. 2023. Epidemic thresholds and human mobility. Scientific reports 13 (1), 11409. https://doi.org/10.1038/s41598-023-38395-0
|
OPCFW_CODE
|
Over the next five weeks, we'll be running a series called "Getting to know the board", publishing blog posts from each member of the Rust Foundation Board of Directors, introducing them to the community. You can view the posts in this series here.
The Rust Foundation is up and running! This is really a great milestone for the Rust ecosystem. A system programming language like Rust is a fundamental building block so having an independent entity (the foundation) to facilitate open governance will boost widespread adoption for sure. Open governance here means certain assets such as logo, trademark, IT infrastructure and so on are NOT controlled by a single commercial organization, and with the governance structure separating decisions about funds and business affairs from the technical project’s decisions, all these will give participants from various organizations, even they are competitors in business, a neutral place with assurance backed by the foundation bylaw and operation mechanism. I know the foundation also serves other functionalities and those are all benefits to having a foundation, but the open governance part is a very important one I think. Projects like Kubernetes were much more widely adopted and attracted many players since the CNCF was established, and now it is the de-facto standard in the industry. And there are many more you can name. To be honest, the current Rust community governance structure is indeed open and very welcoming already, from the RFC process to the diversity of developers and various gate-keepers, so I really appreciate Mozilla’s great effort and the whole community to make this happen. And the establishment of the Rust foundation will just make it more attractive and give commercial players more peace of mind, and the composition of the founding platinum member companies so far has proven this. I am also happy to see that Rust is truly a global development activity, with contributors (copyright owners) from so many countries across almost all continents, and the software is publicly available across national boundaries. These will continuously be the mission and vision of the Rust Foundation.
I have mainly done C programming in development work and my past technical experience was mainly focused on operating systems, media processing and other embedded system software. So performance and resource consumption are always some of the key metrics to consider. Rust is certainly a kind of language that can match C/C++ here. The second well recognized beauty of Rust is memory safety. Both of these reasons are why the company I am working for, Huawei, is interested in Rust and would like to invest to make the language widely adopted in products, since as an ICT infrastructure vendor, performance and security are basically the two fundamental baselines in many scenarios to judge when choosing a new programming language. But actually when I first got introduced to Rust by friends, I was amazed not by the above two advantages compared to other languages, instead by the look-and-feel of the language, the cleaner syntax, better modularity and more importantly, the package-management system, which certainly C/C++ is lacking as a standard. Although I did not have the chance to switch from C to Rust in my development work then, I can certainly understand why many have done so or make Rust as their first choice of system programming language.
Now let me share a bit information on what is happening in where I am from, China. At the end of last year, Rust China Conf 2020 was held in Shenzhen, and Huawei is the only top-level sponsor (happily done by my team ☺). In this event, I was thrilled to see so many cases, products and open source projects written in Rust and many developers are just entering university. Some told me they have been using Rust for several years, which means they started programming in Rust from high school. The topics they shared range from data analysis, operating systems, storage solutions, blockchain, robotics, autonomous vehicle and mobile applications. Also, a student from Tsinghua University, RunJi Wang, wrote an elegant distributed file system called MadFS, which helped a 500-node ARM based cluster to champion the recent IO500 List, with a score of almost 4 times than the second on the list (https://io500.org/). All of these demonstrated the strong momentum and the active development community here in China. And I will no doubt see more in the future.
Looking into the future, with the establishment of the Rust foundation and strong investment from the members, we will see more language innovation and engineering implementation at a faster pace. For Huawei, areas we will invest in the community will include projects in numerical computation, robotics, virtualization and more. As the only platinum founding member from China so far, we would also like to promote Rust with all partners, this may include setting up local infrastructure such as crates.io and local CI for better access and usability, translating more documentation into Chinese and holding more events here. Last but not least, we are eager to have more Rust talents to join us, in the EU, North America and China! And if you are in a place other than those above but also interested, no worries, let’s talk about more opportunities!
|
OPCFW_CODE
|
import numpy as np
import scipy as sp
def randomise_amplitude(pix, minval, maxval):
absrange = maxval - minval
return np.random.rand(pix, pix) * absrange + minval
def randomise_phase(pix, minval, maxval):
phaserange = maxval - minval
return np.random.rand(pix, pix) * phaserange + minval
def make_super_gaussian(size, fwhm, dx, power, norm=False):
array = np.zeros((size, size))
radius = sp.hypot(*sp.ogrid[-size // 2 : size // 2, -size // 2 : size // 2]) * dx
sigma = fwhm / (2 * np.log(2) ** (1.0 / power))
output = 1.0 * np.exp(-0.5 * (radius / sigma) ** power)
if norm == False:
return output
else:
return output / np.sum(output)
def downstream_prop(inputArray, noshift="true"):
if noshift == "true":
return np.fft.fft2(np.fft.fftshift(inputArray), norm="ortho")
else:
return np.fft.fftshift(np.fft.fft2(np.fft.fftshift(inputArray), norm="ortho"))
def upstream_prop(inputArray, noshift="true"):
if noshift == "true":
return np.fft.fftshift(np.fft.ifft2(inputArray, norm="ortho"))
else:
return np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(inputArray), norm="ortho"))
def prop_short_distance(array1, propagator, noshift="true"):
"""
use the angular spectrum method to propagate a complex wavefield a short distance
very useful when the Fresnel number for the intended propagation is astronomically high
(i.e. when the Fresnel number makes typical single-FFT based propagation impossible)
"""
return np.fft.fftshift(
np.fft.ifft2(np.fft.fft2(np.fft.fftshift(array1)) * propagator)
)
def broadcast_diffraction(inputarray, padpix, rebpix):
outputarray = np.zeros((padpix, padpix))
pm = padpix // 2
rm = rebpix // 2
outputarray[0:rm, 0:rm] = inputarray[rm:, rm:]
outputarray[0:rm, -rm:] = inputarray[rm:, 0:rm]
outputarray[-rm:, 0:rm] = inputarray[0:rm, rm:]
outputarray[-rm:, -rm:] = inputarray[0:rm, 0:rm]
return outputarray
def modulus_constraint(array1, array2, threshold=1e-9):
" return array with the phase of array1 but the amplitude of array2. Perform check on the amplitude of array1 to avoid dividing by zero "
return sp.where(
np.abs(array1) > threshold, np.abs(array2) * (array1 / np.abs(array1)), 0.0
)
def upsample_array(array1, factor):
pix1 = array1.shape[0]
mid1 = pix1 // 2
pix2 = pix1 * factor
mid2 = pix2 // 2
array2 = np.zeros((pix2, pix2)).astype("complex64")
array2[mid2 - mid1 : mid2 + mid1, mid2 - mid1 : mid2 + mid1] = np.fft.fftshift(
np.fft.fft2(np.fft.fftshift(array1), norm="ortho")
)
return np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(factor * array2), norm="ortho"))
|
STACK_EDU
|
How to ORDER BY and INSERT to TABLE when loading data from JSON
I need help to load data from JSON to a table but have them ordered/sorted and insert to table?
DECLARE @IN_DATESJSON NVARCHAR(MAX) = N'[{"CreatedDate":"2018-10-10T09:07:29Z"},{"CreatedDate":"2018-10-09T09:07:29Z"},{"CreatedDate":"2018-10-08T07:07:08Z"}]';
DECLARE @V_CALLSTBL AS TABLE (CreatedDate DATETIME);
IF (ISJSON(@IN_DATESJSON) = 1)
BEGIN
INSERT INTO @V_CALLSTBL
SELECT *
FROM OPENJSON (@IN_DATESJSON)
WITH (CreatedDate DATETIME2)
-- ORDER BY CreatedDate ASC -- THIS DOESN'T WORK*
END
SELECT * FROM @V_CALLSTBL;
CreatedDate
-----------------------
2018-10-10 09:07:29.000
2018-10-09 09:07:29.000
2018-10-08 07:07:08.000
Related: How to maintain the order of insertion in SQL Server, Preserving ORDER BY in SELECT INTO, Return rows in the exact order they were inserted
Populate table variable with ordered results
@IljaEverilä can you please explain this behaviour: Image Link
There's not much gained by explaining it, since the order of a SELECT that does not explicitly use ORDER BY is unspecified, which does not mean it cannot appear ordered, but you cannot rely on it. Tables are also inherently unordered bags, since they're modeled after relations, so "insertion order" has no meaning.
Actually, it inserts ordered but you should also select it with order by. Because select not guaranteed to return sorted data as inserted.
SELECT * FROM @V_CALLSTBL ORDER BY CreatedDate ASC;
and you can add an identity column to the table for seeing this.
DECLARE @IN_DATESJSON NVARCHAR(MAX) = N'[{"CreatedDate":"2018-10-10T09:07:29Z"},{"CreatedDate":"2018-10-09T09:07:29Z"},{"CreatedDate":"2018-10-08T07:07:08Z"}]';
DECLARE @V_CALLSTBL AS TABLE (ID INT IDENTITY(1,1), CreatedDate DATETIME);
IF (ISJSON(@IN_DATESJSON) = 1)
BEGIN
INSERT INTO @V_CALLSTBL
SELECT *
FROM OPENJSON (@IN_DATESJSON)
WITH (CreatedDate DATETIME2)
ORDER BY CreatedDate ASC -- THIS DOESN'T WORK*
END
SELECT * FROM @V_CALLSTBL ORDER BY CreatedDate ASC;
This worked, can I know how does adding extra ID column make this work ?
If you want to get a sorted result, you have to add order by to your select. Adding an identity column does not change anything. I just added it to show you your data inserted by sorted.
There are two cases; one with an identity column where the select returns data in date with insert order and the second is only with date field and data is returned in the opposite order. This actually is not a definite behaviour. As in the answers you should somehow explicitely use ORDER BY clause.
If you don't specify an ORDER BY clause in your SELECT statement, it is not guaranteed that the returned data set will be in sorted.
Additionally, to keep a specific table data ordered, you should specify a cluster index on that table column. This will be used for fetching the data faster when a search is made on that column is in the filter criteria.
Again there is no sorting guaranteed if you don't use an ORDER BY clause
|
STACK_EXCHANGE
|
Cal $2$ $p$-series convergent vs divergent
I am having difficulties understanding how an infinite $p$-series converges vs diverges.
The infinite sum of $\frac{1}{n^p}$, if $p>1$ then convergent and if $p\leq1$ then divergent.
I understand that if $p=3$ that $\frac{1}{n^p}$ will eventually converge because as more terms are calculated the quantity of them diminishes so that eventually you add almost nothing.
However I'm having trouble understanding the case where $p\leq 1$ where it diverges.
For example when $p=1$ the terms are: $1 + 0.5 + 0.333 + 0.25 + \ldots$
While I do see that the summation is growing, the rate at which it is growing is decreasing. But apparently it sums up to infinity hence why it's divergent. I would think that eventually its growth would be so insignificant that it would converge.
"will eventually converge" is a strange concept.
Assuming you know how to evaluate improper integrals, intuitively, we can compare this to an integral:
$$\sum_{k=1}^\infty\frac1{k^p}\sim\int_1^\infty\frac1{x^p}~\mathrm dx$$
When $p=1$, we get
$$\sum_{k=1}^\infty\frac1k\sim\int_1^\infty\frac1x~\mathrm dx=\ln(x)\bigg|_1^\infty=\infty$$
So it diverges.
(Note that $\ln(x)\to\infty$ as $x\to\infty$)
When $p>1$, we get
$$\sum_{k=1}^\infty\frac1{k^p}\sim\int_1^\infty\frac1{x^p}~\mathrm dx=-\frac1{(p-1)x^{p-1}}\bigg|_1^\infty=\frac1{p-1}<\infty$$
So it converges.
(Note that $1/x^{p-1}\to0$ as $x\to\infty$)
When $p<1$, we get
$$\sum_{k=1}^\infty\frac1{k^p}\sim\int_1^\infty\frac1{x^p}~\mathrm dx=-\frac1{(p-1)x^{p-1}}\bigg|_1^\infty=\infty$$
So it diverges.
(Note that $1/x^{p-1}\to\infty$ as $x\to\infty$)
This is a VERY commonly asked question and there are a lot of resources to help you understand why online. Search for harmonic series proof. This will explain why the series diverges at p = 1. And if it diverges when p = 1 it clearly diverges for p = .999 or whatever else because all terms will be bigger.
Proof from an educator: https://www.khanacademy.org/math/calculus-home/series-calc/convergence-divergence-tests-calc/v/harmonic-series-divergent
Further discussion of series and proof: https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
The most intuitive, simple way I have seen to explain why the harmonic series (this series, $1+\frac{1}{2}+\frac{1}{3}+\ldots$ is so important it has a name) diverges goes something like this
We're going to split the sum into an infinite number of finite sums where the k$^{th}$ finite sum has $2^k$ terms.
$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\ldots=\\
1+\frac{1}{2}+(\frac{1}{3}+\frac{1}{4})+(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8})+\ldots >\\
1 + \frac{1}{2} +(2*\frac{1}{4}) + (4*\frac{1}{8})\ldots=
1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}\ldots\\$$
And that last one clearly diverges
If you want to be a bit more rigorous about this argument, you could use some proof by contradiction. Suppose this sum has a bound called M. Since each list of $2^k$ terms in the sum add another $\frac{1}{2}$, it would take 2M of these summed together to be greater than M. So after $\Sigma_{k=1}^{2M}\ 2^k$ terms, this sum will be bigger than the supposed bound. A closed from of this sum is $2^{(2M)+1}-2$ not that you need a closed form for this proof to be valid, but now we can do some calculations with it.
Suppose you wanted enough terms so that the sum is bigger than 100. $M=100$, so by having $2^{2*100+1}-2 = 2^{201}-2$ terms, we guarantee the sum will be at least 100. Plugging the sum $\Sigma_{k=1}^{2^{201}-2} \frac{1}{k}$ into wolfram alpha, we get $139.8$ (I rounded down since we wanted this number to be greater than another number). So we can calculate an upper bound on the number of terms required to make the harmonic series bigger than any given number, even though our estimate on the number of terms rises exponentially.
Might want to mention the Cauchy condensation test.
|
STACK_EXCHANGE
|
Skeema.io CI is a continuous integration system for MySQL and MariaDB, supporting a pull request workflow for schema changes. Once you enable our GitHub application on a schema repo, every
git push will be checked for common problems automatically. Pull requests will receive automated comments with DDL diff generation.
For more examples, check out our interactive demo repo.
The Skeema.io CI system does not access your actual database servers. All behavior operates on your schema repo alone. The application just needs read-only access to your schema repo’s code, and a few other permissions in order to comment on pull requests or update commit statuses.
After each commit, your schema repo’s
*.sql files will be scanned for
CREATE PROCEDURE, and
CREATE FUNCTION statements that have been changed in the commit. Each modified object is checked for common problems, such as:
- SQL syntax errors
- Duplicate indexes
- Lack of PRIMARY KEY
- Problematic character sets
- Deprecated storage engines
- Small int AUTO_INCREMENT exhaustion / overflow
- Non-standard int display widths
- Disallowed proc/func DEFINER
These linter checks are fully configurable. Each one can be set to generate a fatal error, a non-fatal warning, or be ignored entirely. Several additional optional checks provide the ability to flag any use of database features that you wish to avoid, such as foreign keys or stored procedures.
For a full list of supported linter checks, please check the Skeema options reference for settings with the “lint-” prefix.
Skeema.io CI uses the same configuration system as the
skeema command-line tool. In brief:
Each directory may optionally have a
.skeemaconfiguration file, formatted using an ini-like syntax similar to MySQL’s own configuration format.
Values set in a directory apply to that directory, and also cascade down to subdirectories. Subdirectories can override individual key/value pairs if desired.
Skeema.io will use values configured for the
productionenvironment. This means any configuration at the top of the .skeema file (before any
[section]) will apply, as will anything in the
How much does the service cost?
The service is completely free during the beta period. Payment information is not requested or collected at this time. Paid tiers may be introduced in the future, along with an enterprise on-prem version.
How do I see CI output on individual commits?
The CI system only leaves comments on pull requests and master branch commits. To see CI status for a branch commit that isn’t part of a pull request, go to the Commits tab of the branch on GitHub, and click on the green checkmark or red X next to any commit.
Will a copy of my schema repo be stored on your servers?
We do not retain or persist any repositories, nor any derived artifacts from them, beyond the brief time required to perform lint and diff operations – typically well under 60 seconds per
Do I need to setup or use GitHub Actions to use this system?
No, it does not interact with GitHub Actions in any way at this time.
Are private repos supported?
Is GitHub Enterprise supported?
A commercial offering with GHE support is currently being tested. Please reach out to express interest.
Will other platforms such as Bitbucket or GitLab be supported?
A platform-agnostic solution is being planned. Please reach out to express interest.
Are mono-repos (mixing application code with schema definitions) supported?
Generally yes, but it depends on repo size. If the mono-repo is excessively large or takes too long to clone, the CI system will reject it with an error. In this case, please use a separate repo for storing your schema definitions.
Support and Feedback
For feedback, bug reports, or other inquiries, please use our contact form.
|
OPCFW_CODE
|
ydotool cmd args
ydotool cmd --help
ydotool lets you programmatically (or manually) simulate keyboard input and mouse activity, etc. The ydotoold(8) daemon must be running.
Currently implemented command(s):
Type a string
Move mouse pointer to absolute position
Click on mouse buttons
key [-d,--key-delay <ms>] [<KEYCODE:PRESSED> ...]
Type a given keycode.
e.g. 28:1 28:0 means pressing on the Enter button on a standard US keyboard. (where :1 for pressed means the key is down and then :0 means the key is released)
42:1 38:1 38:0 24:1 24:0 38:1 38:0 42:0 - "LOL"
Non-interpretable values, such as 0, aaa, l0l, will only cause a delay.
See `/usr/include/linux/input-event-codes.h' for available key codes (KEY_*).
You can find the key name/number your keyboard is sending to libinput by running `sudo libinput record` and then selecting your keyboard from the list it will show you the libinput proper key name and number for each key you press.
Delay time between keystrokes. Default 12ms.
type [-D,--next-delay <ms>] [-d,--key-delay <ms>] [-f,--file <filepath>] "text"
Types text as if you had typed it on the keyboard.
- -d,--key-delay <ms>
Delay time between key events (up/down each). Default 12ms.
- -D,--next-delay <ms>
Delay between strings. Default 0ms.
- -f,--file <filepath>
Specify a file, the contents of which will be typed as if passed as an argument. The filepath may also be '-' to read from stdin.
Example: to type 'Hello world!' you would do:
ydotool type 'Hello world!'
mousemove [-a,--absolute] <x> <y>
Move the mouse to the relative X and Y coordinates on the screen.
Use absolute position
Example: to move the cursor to absolute coordinates (100,100):
ydotool mousemove --absolute 100 100
click [-d,--next-delay <ms>] [-r,--repeat N ] [button ...]
Send a click.
- Options: -d,--next-delay <ms>
Delay between input events (up/down, a compete click means doubled time). Default 25ms.
- -r,--repeat N
Repeat entire sequence N times
all mouse buttons are represented using hexadecimal numeric values, with an optional bit mask to specify if mouse up/down needs to be omitted.
- 0x00 - LEFT
- 0x01 - RIGHT
- 0x02 - MIDDLE
- 0x03 - SIDE
- 0x04 - EXTR
- 0x05 - FORWARD
- 0x06 - BACK
- 0x07 - TASK
- 0x40 - Mouse down
- 0x80 - Mouse up
- 0x00: chooses left button, but does nothing (you can use this to implement extra sleeps)
- 0xC0: left button click (down then up)
- 0x41: right button down
- 0x82: middle button up
The '0x' prefix can be omitted if you want.
The socket to write to for ydotoold(8) can be changed by the environment variable YDOTOOL_SOCKET.
ydotool was written by ReimuNotMoe.
This manpage was written by email@example.com but updated since.
Project site: <https://github.com/ReimuNotMoe/ydotool>
|
OPCFW_CODE
|
This screen catalogs the actions configured for the Post-Processing (Automation) rule you have configured in previous screens.
Click Add Action to create a new action in the editor. The Actions column lets you revise (Edit this entry) or Delete entries in this table. Click Save to preserve the action(s) configured here, or Cancel to abandon any edits.
Clicking Add Action lets you select from the following:
• Forward Northbound
Click Apply to accept configured actions, or Cancel to abandon their editor and return to this screen.
Actions available here are like those for Discovery Profiles. You can also use actions to Execute Proscan. See Change Management / ProScan.
This screen lets you configure Action based on Adaptive CLI actions available in the system. Notice that you can select by most common or by keyword search, depending on which of the links in the upper right corner of the screen you select.
The most common actions include those you have used most recently. To search for actions, either enter a keyword, or click the search icon (the magnifying glass) to produce a pick list below the Action field. Select an action by clicking on its appearance in that list.
Select the device target of the custom action by selecting from the Target pick list. If you do not specify an explicit target, Open Manage Network Manager uses the default entity for the event as the target.
If you want to select an action with additional parameters, those parameters appear in the screen below the Target field. To see definitions for such parameters, hover the cursor over the field and a tooltip describing the field appears.
You can specify parameter variables, dependent on the specifics of the event, rule, and selected targets. Do this with either NOTIFICATION or VARBIND.
The following are valid attributes to use in a phrase like [[NOTIFICATION: <attr name>]]:
Consult the relevant portlet to find and verify an OID. For example, Event Definitions portlet has an OID column, and the varbind OIDs appear in the Message Template screen of the event editor.
Correct spelling is mandatory, and these are case sensitive. NOTIFICATION and VARBIND must be all caps, and within double brackets. The colon and space after the key word are also required.
Open Manage Network Manager converts anything that conforms to these rules and then passes the converted information into the action before execution. Anything outside the double square brackets passes verbatim.
For example, the string:
This is the alarm OID [[NOTIFICATION: AlarmOID]] of notification type [[NOTIFICATION: TypeOID]] having variable binding [[VARBIND: 188.8.131.52.3]]
becomes something like...
This is the alarm OID 1OiE92tUjll3G03 of notification type 184.108.40.206.4.1.34220.127.116.11.7 having variable binding 151.
Click Apply to accept your edits, or Cancel to abandon them.
Email actions configure destinations and messages for e-mail and SMS recipients. You can include fields that are part of the event by using the variables described in Email Action Variables.
Notice that below the Description of the e-mail action, you can check to send this mail (and/or SMS) to associated Contacts, if any are available, even if you specify no mail address destination. The SMS tab is similar to the e-mail tab, but limits the number of characters you can enter with a field at its bottom. You must send SMS to the destination phone carrier’s e-mail-to-SMS address. For example sending text to 916-555-1212 when Verizon is the carrier means the destination address is email@example.com.
When enabled, notification emails go to the Contact associated with the Managed Equipment for the notification event. For the contact's email address, mail goes to the first specified address from either the Work Email, Home Email or Other Email fields in the Contact editor. SMS messages go to the Pager Email field for the contact. If a Contact was not found or the required addresses are not specified for the Contact, then Open Manage Network Manager uses the Recipient addresses configured in the Email Action.
Programs other than Open Manage Network Manager let you manipulate mail outside the scope of Open Manage Network Manager. For example IFTTT (If This Then That) lets you send SMS in countries whose providers do not provide e-mail equivalents to SMS addressing. You can also use such applications to save mail attachments like reports to Dropbox accounts.
This screen has the following fields:
Recipient Addresses -- Enter an e-mail address in the field below this label, then click the plus (+) sign to add it to the list of recipients. The minus (-) removes selected recipients.
Subject -- The e-mail subject.
Email Header / Footer -- The e-mail’s heading and footing.
SMS Body -- The e-mail contents to be sent as text.
SMS Max Length -- The maximum number of characters to send in the SMS. Typically this is 140, but the default is 0, so be sure to set to your carrier’s maximum before saving.
Here is what Email looks like when it arrives:
Sent: Wednesday, March 02, 2011 2:37 PM
Subject: Web Test
sysUpTime.0 = 5 hours, 16 mins, 43 secs
snmpTrapOID.0 = 18.104.22.168.4.1.3422.214.171.124
redcellInventoryAttrName.0 = RedCell.Config.EquipmentManager_Notes
redcellInventoryAttrChangedBy.0 = admin
redcellInventoryAttrNewValue.0 = hello
redcellInventoryAttrOldValue.0 = hello
When you want to forward an SNMP v2 event (trap) to another host, then configure automation in this screen to do that.
Enter the following fields:
Destination Address -- The IP address of the northbound destination.
Destination Port -- The port on the northbound destination.
Community String -- The SNMP community string for the northbound destination.
Send as Proxy -- When checked, this sends the IP address of the application server as the source of the event. Unchecked, it sends the IP address of the source device. (See Send as Proxy for more.)
For details of trap forwarding, see the Trap Forwarding Process.
|
OPCFW_CODE
|
This idea sounds easy and stuff. I agree. I assume you already got it working fine. But perhaps one day you may notice something is not working well, especially when you are dealing with bigger files. Below are a few things you want to look out for.
Change your /etc/php.ini
upload_max_filesize = 2M
post_max_size = 8M
memory_limit = 128M
The default upload file size is 2M, which could be too small. If your upload file is larger than this, you will receive an empty upload file at your server side. So, increase this. While if you are using ajax to post-send, increase the post_max_size limit. Another thing to watch out for is memory_limit. If you somehow make copies of big blobs in your code, increasing PHP’s memory limit is also a good idea.
After these changes, restart your apache server by: sudo apachectl restart
You can double check by calling the phpinfo() function. Your new settings should show up.
Next thing is in mysql. If you have been using addslashes to insert, that will eventually fail when you hit your mysql query max length limit. I am not sure how long that is but I hit that a few times which got me all confused until I checked my logs. So, possibly change your insert or update statement to something like the following.
UPDATE my_table SET my_blob = LOAD_FILE( $filename ) WHERE id = '$id';
Now, after you made all these changes you may still be loading null into your db. If so, check this. Go grab your uploaded file at the server and check the file size. It better be the right size. In code, I mean
$file_obj = $_FILES["my_file"];
$file_size = $file_obj["size"];
print $file_size; // hopefully it's greater than 0
// below is for later, skip it for now
$file_tmp_name = $file_obj["tmp_name"];
$file_content = file_get_contents($file_tmp_name);
Once you pass that, there may be a chance your mysql client doesn’t have enough permission to read your uploaded file. Below is some code to create a new temp file and let your mysql client read the file.
// basically make a duplicate of your upload file
$tmpfname = tempnam("/tmp", "somePrefix");
$handle = fopen($tmpfname, "w");
chmod($tmpfname, 0666); // this is so that the mysql process can read this file
// HERE, call your mysql LOAD_FILE statement.
unlink($tmpfname); // clean up
If you are still loading null into your db. Then it’s also possible that your current mysql user doesn’t have enough permission to load local files.
GRANT FILE ON *.* TO root@localhost
Assuming you are using root. I know, it’s bad. Shut up. Just an example.
And if you are still loading null, then it’s also possible that your mysql client has a low max_load_file size limit. Do this. Open a terminal, get a mysql prompt by: mysql -u root -p
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
That should allow enough data to go through the LOAD_FILE command.
I am good after all these. Hopefully your issues are solved by now. 😉
|
OPCFW_CODE
|
Why does Debug.Writeline stop working for some projects in the solution?
We have a solution with multiple projects after running the code from VS the output normally seen from Debug.Writeline statements just cease to appear. I mention the multiple projects because the output from one of the projects continues to appear. However, the other project consistently stops showing the output from the statements.
It's starting to drive me crazy. I should mention this is also occurring for a second developer on the project. Anyone seen this before, or have any ideas?
After being tormented by this for years I finally found the cause and the solution in this Stack Overflow question: vs2010 Debug.WriteLine stops working
It seems that Visual Studio's handinlg of debug.writeline can't handle multiple processeses that each use multiple threads correctly. Eventually the 2 processes will deadlock the portion of visual studio that handles the output, causing it to stop working.
The solution is to wrap your calls to debug.writeline in a class that synchronizes across processes using a named mutex. This prevents multiple processes from writing to debug at the same time, nicely side stepping the whole deadlock problem.
The wrapper:
public class Debug
{
#if DEBUG
private static readonly Mutex DebugMutex =new Mutex(false,@"Global\DebugMutex");
#endif
[Conditional("DEBUG")]
public static void WriteLine(string message)
{
DebugMutex.WaitOne();
System.Diagnostics.Debug.WriteLine(message);
DebugMutex.ReleaseMutex();
}
[Conditional("DEBUG")]
public static void WriteLine(string message, string category)
{
DebugMutex.WaitOne();
System.Diagnostics.Debug.WriteLine(message,category);
DebugMutex.ReleaseMutex();
}
}
Or for those using VB.NET:
Imports System.Threading
Public Class Debug
#If DEBUG Then
Private Shared ReadOnly DebugMutex As New Mutex(False, "Global\DebugMutex")
#End If
<Conditional("DEBUG")> _
Public Shared Sub WriteLine(message As String)
DebugMutex.WaitOne()
System.Diagnostics.Debug.WriteLine(message)
DebugMutex.ReleaseMutex()
End Sub
<Conditional("DEBUG")> _
Public Shared Sub WriteLine(message As String, category As String)
DebugMutex.WaitOne()
System.Diagnostics.Debug.WriteLine(message, category)
DebugMutex.ReleaseMutex()
End Sub
End Class
Wow, finally solved even if requiring a somewhat annoying work around.
Follow these steps, it works for me
Right click on your project
Select Properties
Select tab Build
Make sure Define DEBUG constant is checked
Hope that helps
Thanks for the response. That option was already selected in all the projects. Further, in case anyone else finds themselves here... In VS2010 that is in the compile tab, and in the case of vb.net is in the advanced compile dialog from there
This in combination with the answer from @Cor helped print the Debug.WriteLine statements to Output window.
I had the same problem with Visual Studio 2010. None of the above solutions worked in my case, but I solved it like this:
Right-click on your project.
Select Properties.
Click the Compile tab.
Scroll down to "Advanced Compile Options".
Change the value for "Generate debug info" from "pdb-only" to
"Full".
No idea what it's for exactly, but now my Debug.Print statements appear in the Immediate Window again I can finally get back to work.
Though navigation to the screen might be relevant for VS 2010 but this problem is relevant even when we use VS 2015. In VS 2015 the "Advanced" option is inside the Build tab of project properties. In combination with the answer from @onmyway133 print messages using Debug.WriteLine statements to Output window worked for me.
This solved it for me too, on Visual Studio 2019. I'd noticed that I was getting Debug output from some projects but not others. I've no idea how that option ever got unticked, we never go in there.
you should try DebugView from Microsoft SystemInternals.
http://technet.microsoft.com/en-us/sysinternals/bb896647
Regards,
Allen
Interesting. And seems useful. But it doesn't really solve the issue in this case. Let's assume I can't install anything, and would rather actually make Visual studio work as intended.
OK. Try to check your solution configuration manager and to see if all projects are in Debug configuration when you set your "Active solution configuration" to Debug.
Hi D., any updatse from your side? It is indeed very strange.
I ended up using the setting to send all the debug output to the immediate window... which oddly enough seems to be working. Not really what I want, but at least I have the output.
Try checking if Platform for the solution is set to Any CPU and not x86 (or x64). I used x86 to enable Edit and Continue and then lost Debug output. After going back to AnyCPU the Output is also back.
That could certainly be my issue, as my setting and the other programmers in question all have ours set to x86. I can't really make that change long term... but I will give it a shot to at least test it.
I got the same problem, and changing the platform does solve it.
Got this in VS 2015. All of a sudden all Debug.WriteLine "stops working" (not showing in Output window). After going crazy about this for about an hour I found the problem:
1. Right click In output window (output from Debug)
2. Check that "Program output" is checked
|
STACK_EXCHANGE
|
EeeUser.com Google Search
We've just got an Asus EeePC with Xandros on it and out of the box I have to say I am very unimpressed with it's wireless capabilities. I need to connect it to our wireless network which uses a radius server and after many hours trawling through Google results I am starting to get a little bored! Every time I think I have found what looks like it may work, it's a 4 page document about editing this, creating that, downloading and installing half a dozen things and assumes that I am a complete unix nerd and will know all of this already. Does anyone know if there is a simple way to connect it to our network, other than installing XP?
Is it the 4g model? Are you using WPA? I had issues with ours, 5 minute fix is:
check for upgrades (might have to stick your proxy settings in first)
scroll down to the atheros_swan_module
mark for upgrade
I admire your perseverance with Xandros. Many people would have given up long ago.
Unfortunately the version of Xandros that is already installed on the EeePC doesn't fully support all the hardware. You are probably better off with one of the community Linux Distros that have been specifically created for the EeePC.
Ubuntu Eee, eeeXubuntu, Eeedora, Pupeee & Cruncheee all work "out of the box".
I think you should get your boss to sort it out. After all, he gets paid all that money and does nothing all day long.
Thank you all for your help and suggestions. I installed Cruncheee and it gave several more options than Xandros had and it now looks like me and my boss (thehomee) should be able to get it connected now.
If you switch back to Xandros - I had these emails back from ASUS about wireless problems wiht our 4gb Eee - ours wouldn't connect to a WPA wireless network at all and now it works fine - Don't understand linux that much so don't know exactly what the first update does all I know is that it fixed it!
1: Access their Terminal via CTRL + Alt + T
2: Use the following commands:
sudo apt-get update
sudo apt-get upgrade
Go into the terminal - type su and enter the root password
Type in "apt-get update" to update the package repository info.
Type in "apt-get dist-upgrade" to update OS.
Once you have done the above go to add / remove programs > settings > scroll down and find the new wireless update pack and install it.
There are currently 1 users browsing this thread. (0 members and 1 guests)
|
OPCFW_CODE
|
Changes to the Preference settings in the renderer will be applied by default to the entire new image group created.
The Render tab is in all respects identical to the one found in the general settings or in the editing window. It is a default parameter for the rendering options that will also be visible and modifiable at the time of the final rendering.
If you make a mistake, you can use the "Restore defaults" button.
This lets you adjust the panorama’s export size in percentage of the maximum size.
The interpolator is used to project the pixels of the source image on the panorama. Its quality often depends on the sharpness of the panorama.
- Nearest Neighbour: Reserved for testing, because of the numerous and very visible artifacts created. In return, this is the fastest.
- Bilinear: This is a correct quality/speed ratio choice.
- Bicubic: (default) Use it if you do not know. The difference with the bilinear is almost imperceptible to the naked eye but can be seen in the lines with strong contrasts. Its default use is recommended.
- Bicubic sharper: This is the same thing as the bicubic but it is stronger (the fortification level corresponds to the same settings as in Photoshop when changing the size of an image).
- Bicubic smoother: This is the same thing as the bicubic but it is softer (the softening level corresponds to the same settings as in Photoshop when changing the size of an image).
- Spline36: This powerful method of interpolation is to be used when extreme or high post-rendering is necessary. The difference with the bicubic mode is not noticeable to the naked eye.
- Spline64: Works in the same way as the Spline36, but stronger, slower and usually better (expirment to sample resutls).
The purpose of the blender is to combine the overlapping zones without it being noticeable, to obtain a perfect stiching of the panorama's images.
Autopano offers 4 optimization presets adapted to your needs without having to change them yourself.
These profiles correspond to the pre-configurations that can be seen in the “advanced settings”.
- Simple: This is fast but it is possible that defects are seen where the areas overlap.
- Anti-ghost: Conserve the image's strong characteristics (stops, lines, curves) when mixing while automatically removing objects that have moved.
- Exposure Fusion: To be used if the panorama was created with a bracket shot. Keeps the best of different exposures.
- HDR output: To be used by users who wish to create a .hdr format file in order to create post-production or special effects.
- Custom: This is enabled when you manually change the parameters and they no longer correspond to a preset.
- None: For each position, the algorithm uses the pixel with the greatest importance according to the required weight.
- Linear: The rendered pixels are the result of a weighted average of input pixels.
- Multi-band: Lets you mix the average value (color trend) of the images while maintaining their details.
- Diamond: The pixels in the centre of the images are more important than the pixels on the edges of the images.
- Fusion: enable/disable the tool.
- HDR ghosts:
- Avoid mixing the superimposed pixels that do not have the same information and that come from different levels (moving object on the same bracketed image).
- At least 3 different information (layers) are needed to determine the information to exclude. It is advisable to get more for a greater reliability of matches (warning, a layer that is entirely over-exposed or under-exposed can not be used).
- Cutting: Lets define the cutting choices of the blender according to the desired result (ghost removing or long focal preservation) thanks to the priority's definition slider:
- Ghost: (default) all moving objects will be retained or removed according to their positions in the overlap areas. The anti-ghost himself makes this choice.
- Long focal: Used to promote the details of the images of long focal length.
- The left side of the slider bar enables the Ghosts removing at 100% and makes the Long focal preservation inactive.
- The right side of the slider bar enables the Long focal preservation at 100% and makes the Ghosts removing inactive.
- Move the slider bar allows to play on both settings (ghost removing or long focal preservation) at the same time, ie giving more priority to one of both settings leads to a lower priority for the other.
- Consult the following page Understanding and using the rendering engine for more information.
Lets you choose the output format, encoding, compression quality and resolution.
Lets you define how and what data needs to be exported:
- Panorama: Lets you export the panorama.
- Layers: Lets you export the image groups.
- Images: Lets you export the images used to create the panorama.
- Embed all outputs: Incorporates all the data in the same file.
- Remove Alpha channel: Deletes the alpha channel of the exported files.
- Folder: Lets you specify the file in which the image will be saved.
- Filename: Default syntax of the file name. Click on the icon for a description of the symbols that make up the models (syntax) of the file name.
Rendering done notification
Lets you specify a sound that signals the end of the rendering.
Technical Support / Autopano Giga Documentation
|
OPCFW_CODE
|
Building a career in Technology: a conversation with Yoofi Brown-Pobee
Get to know our team! This week: Yoofi Brown-Pobee, Software Engineer. This interview is the fourth of a series featuring the outstanding team from Chalkboard Education. Check out previous interviews from the Developer team with Nii Apa Abbey and Lemuel Hawkson.
What is your role at Chalkboard Education?
I am a software engineer, part of the developer team responsible for building and maintaining the product daily.
How did you get involved with Chalkboard Education?
It’s quite an interesting story. My friend was working at Chalkboard, and I went to visit him at the office. I posed as an investor during my visit, but eventually, I told them I was joking. I asked the team about the company, and I was interested in the work they did, so I wanted to join. Afterward, we set up an interview, and I joined the company, and it’s been about two years since.
Considering you joined the team as a student hire, how has the transition been from working from school part-time to working full-time?
Pre-COVID, the main difference between both experiences was coming to the office frequently and seeing my colleagues. I wouldn’t say there has been a significant change regarding how I feel about my work or execute it. The team dynamics and working arrangements stayed the same; the only other changes are expected, such as an increased workload because it’s a full-time job now.
You began in operations and now work in the Technology team, how different are both experiences?
They are both unique experiences in their ways. For operations, the work was more functional and administrative. I had to work on content conversions, creating materials, and developing manuals, which was fine because we could do it. The tasks did feel repetitive so I started using code to simplify the execution of some tasks. When I moved to the developer team, the first weeks were for onboarding, and the main change, apart from the tasks I worked on, was less interaction with my manager was required than in operations. Tasks are assigned, and you would reach out to Nii Apa (our C.T.O.) when you have issues instead of how I would frequently contact Genevieve (our C.O.O.) when I worked in operations.
What have you learned at Chalkboard Education that has shaped your career in technology?
Writing scalable and extensible code and understanding design patterns. Making sure to make all code modular and encapsulated to make changes easy and effortless. The approach to structuring code has rubbed off on me. I also learned Laravel : I had web development experience, so understanding of principles carried over, and it was just a matter of getting syntax.
What do you like about working with the team? What have been your highlights working with the team and its clients?
I like how we disseminate tasks at Chalkboard and how there is swift and easy communication between us if we need clarification on tasks. Also, I support our working structure, how we have general meetings once a week to give updates on our work. It allows you to be self-motivated because no one is rushing you to do your tasks, and you there is room and flexibility to complete tasks within a set time. One of my highlights was when I worked in operations, and we had to provide I.T. support for a client with their onboarding in another region in the country.
What is the most intriguing in-house project you have worked on with Chalkboard Education?
It would be writing tests for the whole front-end. I had to refactor existing tests and write new tests while converting them to TypeScript, that was a good one.
What are your interests outside work?
I hang out with my friends to just chill and base. I watch quite a lot of YouTube and read stuff in my spare time.
What is the most challenging task you have had to complete since joining the team?
It has to be the task of converting the codebase to TypeScript. I had to learn TypeScript and then convert the codebase, which was challenging. Initially, I was stressed, but while I was going along, it was getting more manageable. I was developing momentum, less friction to understand what I was working on. I had to learn the types and then implement it on the live code base, which was necessary because the code could be correct, but the app won’t compile if the TypeScript is wrong. So yeah, it was challenging, but I got through it.
What are your interests in technology, and how has that helped you while working for Chalkboard?
I am interested in back-end development and virtual system designs, also with having to make everything modular. It has led me to look at things like design patterns and structuring code in a scalable way. My back-end work and testing have helped me because now when I write code, I make sure it’s modular, robust, and scalable.
What drives and motivates you in your pursuit of a career in technology?
I feel this is a path I enjoy because it doesn’t feel like as much work. It can be stressful when the workload is heavy, but the tasks I work on don’t feel like work. Because it doesn’t seem like work, I want to get better and do more tasks to improve. I’ll have the ability to do more if I know more, so I am motivated to work harder at it.
What should upcoming college graduates know about pursuing a career in technology within Ghana?
So, for this, I would say come up/look for a roadmap for yourself — a roadmap for either front end dev, back end dev, or full-stack development. Most importantly, it would help if you were building things and trying different things. You can do tutorials and read documentation, but only when you have to retrieve the information and utilize it to create stuff is when you’ve grown. Once you understand building, I would say get used to reading documentation. I randomly read documentation to know where everything is. Code is mostly the same, except someone knows how to use it better than you and knows a more efficient approach or a best practice. Another thing about programming is that it is incremental; completing one task gives you the momentum to keep building and doing things. Finally, make sure your code is modular; it just makes everything easier.
What advice would you give to people seeking out career opportunities in technology in Africa?
Just keep building things and applying to areas that interest you. I think the most important thing is knowing how to build. Once you can build, the rest comes because you have the confidence. Of course, you have practice interview preps available online, but I think if you have the ability, everything else naturally occurs.
|
OPCFW_CODE
|
6 Months Of Using GraphQL
Having worked on a project for 6 months using GraphQL on the backend, I weigh up the technology’s fit into the development workflow
First and Foremost
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API as well as gives clients the power to ask for exactly what they need and nothing more.
It was developed by Facebook as an internal solution for their mobile apps and was later open-sourced to the community.
Pragmatic Data Exchange
With GraphQL, the query can be defined for the fields which the client needs, nothing more and nothing less. It’s really that simple. If the frontend needs
_first name_ and
_age_ of a person, it can only ask for that. The
_last name_ and
_address_ of the
_person_ would not be sent in the response.
Using Dataloaders to Reduce Network Calls
Although Dataloaders are not a part of the GraphQL library itself it is a utility library that can be used to decouple unrelated parts of your application without sacrificing the performance of batch data-loading. While the loader presents an API that loads individual values, all concurrent requests will be combined and presented to your batch loading function. This allows your application to safely distribute data fetching throughout your application.
An example of this would be fetching
_bank details_ of the
_person_ from a different service called transaction service, the backend can fetch the
_bank details_ from the transaction service and then combine that result with the
_first name_ and the
_age_ of the
_person_ and send the resource back.
Decoupling between exposed data and database models
One of the great things about GraphQL is the option to decouple the database modeling data with how the data is exposed to consumers. This way when designing our persistence layer, we could focus on the needs of that layer and then separately think about what's the best way to expose the data to the outside world. This goes hand in hand with the usage of Dataloaders since you can combine your data together before sending it to the users it becomes really easy to design the models for exposed data.
Forget about versioning of APIs
Versioning of APIs is a common problem and generally, it is fairly simple to solve by adding a new version of the same APIs by appending a v2 in front of it. With GraphQL the story is different, you can have the same solution here but that is not going to go well with the spirit of GraphQL. The documentation clearly states that you should evolve your APIs meaning adding more fields to an existing endpoint would not break your API. The frontend can still query using the same API and can ask for the new field if needed. Pretty simple.
This particular feature is really useful when it comes to collaborating with the frontend team. They can make a request to add a new field that is required because of the design change and the backend can easily add that field without messing with the existing API.
With GraphQL, the front-end and back-end teams can work independently. With the strictly typed schema that GraphQL has, the teams can work in parallel. Firstly the frontend team can easily generate a schema from the backend without even looking at the code. The schema generated can directly be used to create queries. Secondly, the frontend team can continue working with just a mock version of the API. They can also test the code with it. This gives the developers a pleasant experience without stalling their development work.
Not all APIs can be evolved
Sometimes there would be changes trickling down from the business or design which would require a complete change in the implementation of an API. In that case, you would have to rely on the old ways to do the versioning.
As experienced multiple times, the code sometimes can become scattered into multiple places while using Dataloaders to fetch the data and that could be difficult to maintain.
Longer response times
Since the queries can evolve and become huge, it can sometimes take a toll on the response time. To avoid this, make sure to keep the response resources concise. For guidelines have a look at the Github GraphQL API.
The goal of caching an API response is primarily to obtain the response from future requests faster. Unlike GraphQL, caching is built into in the HTTP specification which RESTful APIs are able to leverage. And as mentioned earlier a GraphQL query can ask for any field of a resource, caching is inherently difficult.
I will highly recommend using GraphQL as a replacement for REST APIs. The flexibility offered by GraphQL definitely supersedes the pain points it has. The good and bad points mentioned here may not always apply, but it is worth taking them into consideration while looking at GraphQL to see if they can help your project.
|
OPCFW_CODE
|
The FAT, Linux, and NTFS file systems
I heard that the NTFS file system is basically a b-tree. Is that true? What about the other file systems? What kind of trees are they?
Also, how is FAT32 different from FAT16?
What kind of tree are the FAT file systems using?
"Linux" isn't a file system..
I think he means the file systems used by Linux. Well that's my guess.
Of course that's what I meant. Sheesh.
you should search on wikipedia first.
http://en.wikipedia.org/wiki/Comparison_of_file_systems You may want to narrow your search then, cause there is a few...
What PostMan said. I responded with some common ones, but there are a lot.
I just care about the one most commonly used, which I think is ext2.
!? Ext2 has been outdated for some time in most uses (unless you count routers, switches and similar appliances). Ext3 is the old standard and is being now replaced by Ext4.
My guess is that far more people use ext3. It's been out for a long time, and this point ext4 is probably more likely for new installs. Not to mention that power users use multiple file systems, like ext2 for /boot, ReiserFS for /var and ext4 for everything else.
ext3 and ext4 use "H-trees", which are apparently a specialized form of B-tree.
BTRFS uses B-trees (B-Tree File System).
ReiserFS uses B+trees, which are apparently what NTFS uses.
By the way, if you search for these on Wikipedia, it's all listed in the info box on the right side under "Directory contents".
NTFS uses B trees for indexes, not B+ trees. My understanding is that btrfs uses B+ trees and for more than the indexes. Small nit, but...
FAT (FAT12, FAT16, and FAT32) do not use a tree of any kind. Two interesting data structures are used, in addition to a block of data describing the partition itself. Full details at the level required to write a compatible implementation in an embedded system are available from Microsoft and third parties. Wikipedia has a decent article as an alternative starting point that also includes a lot of the history of how it got the way it is.
Since the original question was about the use of trees, I'll provide a quick summary of what little data structure is actually in a FAT file system. Refer to the above references for accurate details and for history.
The set of files in each directory is stored in a simple list, initially in the order the files were created. Deletion is done by marking an entry as deleted, so a subsequent file creation might re-use that slot. Each entry in the list is a fixed size struct, and is just large enough to hold the classic 8.3 file name along with the flag bits, size, dates, and the starting cluster number. Long file names (which also includes international character support) is done by using extra directory entry slots to hold the long name alongside the original 8.3 slot that holds all the rest of the file attributes.
Each file on the disk is stored in a sequence of clusters, where each cluster is a fixed number of adjacent disk blocks. Each directory (except the root directory of a disk) is just like a file, and can grow as needed by allocating additional clusters.
Clusters are managed by the (misnamed) File Allocation Table from which the file system gets its common name. This table is a packed array of slots, one for each cluster in the disk partition. The name FAT12 implies that each slot is 12 bits wide, FAT16 slots are 16 bits, and FAT32 slots are 32 bits. The slot stores code values for empty, last, and bad clusters, or the cluster number of the next cluster of the file. In this way, the actual content of a file is represented as a linked list of clusters called a chain.
Larger disks require wider FAT entries and/or larger allocation units. FAT12 is essentially only found on floppy disks where its upper bound of 4K clusters makes sense for media that was never much more than 1MB in size. FAT16 and FAT32 are both commonly found on thumb drives and flash cards. The choice of FAT size there depends partly on the intended application.
Access to the content of a particular file is straightforward. From its directory entry you learn its total size in bytes and its first cluster number. From the cluster number, you can immediately calculate the address of the first logical disk block. From the FAT indexed by cluster number, you find each allocated cluster in the chain assigned to that file.
Discovery of free space suitable for storage of a new file or extending an existing file is not as easy. The FAT file system simply marks free clusters with a code value. Finding one or more free clusters requires searching the FAT.
Locating the directory entry for a file is not fast either since the directories are not ordered, requiring a linear time search through the directory for the desired file. Note that long file names increase the search time by occupying multiple directory entries for each file with a long name.
FAT still has the advantage that it is simple enough to implement that it can be done in small microprocessors so that data interchange between even small embedded systems and PCs can be done in a cost effective way. I suspect that its quirks and oddities will be with us for a long time as a result.
Here is a nice chart on FAT16 vs FAT32.
The numerals in the names FAT16 and
FAT32 refer to the number of bits
required for a file allocation table
entry.
FAT16 uses a 16-bit file allocation
table entry (2 16 allocation units).
Windows 2000 reserves the first 4 bits
of a FAT32 file allocation table
entry, which means FAT32 has a maximum
of 2 28 allocation units. However,
this number is capped at 32 GB by the
Windows 2000 format utilities.
http://technet.microsoft.com/en-us/library/cc940351.aspx
FAT32 uses 32bit numbers to store cluster numbers. It supports larger disks and files up to 4 GiB in size.
As far as I understand the topic, FAT uses File Allocation Tables which are used to store data about status on disk. It appears that it doesn't use trees. I could be wrong though.
|
STACK_EXCHANGE
|
package org.hongxi.whatsmars.javase.nio.qing;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Semaphore;
import java.util.zip.Adler32;
import java.util.zip.Checksum;
class ServerHandler implements Handler {
private static Semaphore semaphore = new Semaphore(Runtime.getRuntime().availableProcessors() + 1);
private static Map<SocketChannel,Thread> holder = new HashMap<SocketChannel,Thread>(32);
@Override
public void handle(SocketChannel channel, String from) {
System.out.println("handle:" + from);
//synchronized (holder) {
if(holder.containsKey(channel)){
System.out.println(from + ":channel contains");
return;
}
Thread t = new ReadThread(channel, from);
holder.put(channel, t);
t.start();
//}
}
static class ReadThread extends Thread{
SocketChannel channel;
String from;
ReadThread(SocketChannel channel, String from){
this.channel = channel;
this.from = from;
}
@Override
public void run(){
try{
semaphore.acquire();
boolean eof = false;
System.out.println(from + ", channel is open:" + channel.isOpen());
while(channel.isOpen()){
//ByteBuffer byteBuffer = new ByteBuffer(1024);
ByteBuffer head = ByteBuffer.allocate(4);//int for data-size
while(true){
int cb = channel.read(head);
if(cb == -1){
throw new RuntimeException("EOF error,data lost!");
}
if(isFull(head)){
break;
}
}
head.flip();
int dataSize = head.getInt();
if(dataSize <= 0){
throw new RuntimeException("Data format error,something lost???");
}
ByteBuffer body = ByteBuffer.allocate(dataSize);
while(true){
int cb = channel.read(body);
if(cb == -1){
throw new RuntimeException("EOF error,data lost!");
}else if(cb == 0 && this.isFull(body)){
break;
}
}
ByteBuffer tail = ByteBuffer.allocate(8);//int for data-size
while(true){
int cb = channel.read(tail);
if(cb == -1){
eof = true;
}
if(isFull(tail)){
break;
}
}
tail.flip();
long sck = tail.getLong();
Checksum checksum = new Adler32();
checksum.update(body.array(), 0, dataSize);
long cck = checksum.getValue();
if(sck != cck){
throw new RuntimeException("Sorry,some data lost or be modified,please check!");
}
body.flip();
Packet packet = Packet.wrap(body);
System.out.println(from + ":" + packet.getDataAsString());
if(eof){
break;
}
}
}catch(Exception e){
e.printStackTrace();
}finally{
if(channel != null){
try{
channel.close();
}catch(Exception ex){
ex.printStackTrace();
}
}
holder.remove(channel);
semaphore.release();
}
}
private boolean isFull(ByteBuffer byteBuffer){
return byteBuffer.position() == byteBuffer.capacity() ? true : false;
}
}
}
|
STACK_EDU
|
Currently location-dependent filenames are used to name objects. Multiple copies of objects often exist, but outside of the distributed Netlib Repository, different sites may have different filenames for the same object. To complicate matters further, the same filename is often used for different objects or for different versions of the same object.
Current search facilities return the locations of objects, usually in the form of URLs. As a result, multiple hits may be returned for the same object. Another disadvantage is that the searchable indices have to be updated every time a copy of an object is moved, and such relocation is likely to happen much more often than changes to the object or to its description.
Naming objects by their locations presents a problem for caching, because it is difficult to tell whether a desired object is in the cache. Looking up whether the URL is cached does not suffice, because the contents of the location may have changed. Hence, clients or cache servers are forced to try to determine if the contents of the location have changed since the last access. Caching by location also wastes the opportunity to take advantage of multiple copies of objects. The Xnetlib client demonstrates a way of doing local caching by putting local copies of index and data files on the user's local file system, and these cached copies may be shared on a site-wide basis. To do such caching outside of Netlib, however, will require universal names.
Although it uses consistent filenames across different Netlib sites, Netlib still has the problem that an object's name is tied to its location. If an object is moved to a new location on the file system, it gets a new name, and the old name becomes stale. Furthermore, Netlib files may be updated in place, and although such updates are automatically propagated to all mirroring sites, the user cannot tell from the name, or from any other readily accessible information, that the object has changed.
Hence, there is a need for a name that remains constant for the lifetime of an object, regardless of its location. The Uniform Resource Name (URN) has been proposed by the IETF Uniform Resource Identifier (URI) Working Group . The Netlib Development Group has begun implementing a specialized type of universal name called a Location Independent Filename (LIFN) as part of the Bulk File Distribution (BFD) project. A LIFN is assigned to a particular sequence of bytes, and once the assignment has been made, the same LIFN cannot subsequently be used to name any other sequence of bytes. The space of LIFNs is subdivided among several publishers, or naming authorities, who are responsible for ensuring the uniqueness of LIFNs within their portions of the LIFN-space. Resolving a LIFN involves finding an appropriate LIFN-to-location server and then contacting that server to obtain a list of locations for the LIFN. The prototype BFD implementation includes LIFN-to-location server code, as well as a WWW client library for resolving LIFNs. LIFNs are currently being assigned to Netlib files and are being integrated into the Netlib search interfaces. Use of LIFNs will allow Netlib to automatically mirror files from author-maintained sites, rather than handling updates from these sites manually. Future plans include include integration of authenticity and integrity checking into server-to-server and client-to-server protocols. Such checking will allow a network of trusted LIFN servers and file servers to be set up for a naming authority and will allow a client to authenticate such servers. More information about BFD is available at http://www.netlib.org/nse/bfd.
|
OPCFW_CODE
|
This is going to be mixture of programming and Windows Internals with some use of WinDbg, so it looks like it going to be Programming, Windows Internals and Debugging if we use the categories shown in my blog title. The idea of Thread Storage Slots is simple and therefore you should remember and understand it pretty quickly.
All threads running within a particular process address space are able to access all the memory addresses stored within that process address space. This in fact can cause some problems for some people, since some variables and classes are best to kept private to one specific thread. This brings in the concept and introduction of thread local storage slots.
All static and global variables can be accessed by all the threads within the same process address space, and will share the same fixed memory address in the global scope of the address space of the process. On the other hand, with locally defined variables these variables are local the stack of the thread, and will have differing memory addresses. The concept of thread local storage can be used to implement the advantage of global variable’s fixed memory location, and then incorporate that into the benefit of having a variable which is private to that of a particular thread.
TLS is part of the C++ 11 Standard, and can also be used with the Win32 API. I’ll demonstrate the Microsoft C++ version of this concept.
The __declspec(thread) is a C++ extension for indicating storage class specifiers. As you can see from the the above code example, the storage class specifier is notably the thread keyword, which indicates that the storage class specifier is a thread local storage type. We then create a variable called tls_i of a integer type and assign it to the value of 1. I took the variable name from the MSDN documentation for ease on my part, and to make the variable more self-explanatory later in the code.
Since this is a global variable, and will accessible to all parts of the program, you may expect that any writes to that variable will be shown in other parts of the code. However, this is not the case with this example as a result of the thread local storage concept. The tls_i variable is incremented by 1 in the Thread_Func function, but will still be 1 within the Main thread. This can be understood better when running the code.
Now, we understand how to write Thread Local Storage variables and data, let’s examine how it implemented within Windows and where we can find this information within WinDbg.
The Thread Local Storage slots can be found within the TEB data structure.
At offset 0x02c, the ThreadLocalStoragePointer field is the linear address of the thread local storage array, the address can be accessed with the use of a pointer as known in the output above. Note that the TEB is stored within the FS segment register on x86, and the GS segment register on x64. The segment registers are primarily used for performance reasons.
The TlsSlots at offset 0xe10, shows the current number of TLS slots, the minimum number of slots is 64, thus the reason for the . Each slot is indexed starting 0, and is accessed with this index, this is implemented as a array with a pointer to access each slot. The TlsLinks at offset 0xf10, is a doubly linked list of the TLS memory blocks for the process.
The !tls extension can be used to view the TLS’ for a particular process.
The left column indicates the Index number, and the -1 parameter is used to dump all the TLS’s for the currently running process. I believe the other column could possibly be the data. This extension is unfortunately sparsely documented.
Some further advantages of using thread local storage, is that it enables multiple threads to access the same variable without any locking overhead. Essentially, two threads could share the same memory address for that particular variable, and then each write their own private data to that variable. Furthermore, many platforms and operating systems support the use of Thread Local Storage, and therefore portability shouldn’t be a problem.
The __declspec implementation only uses one large chunk of memory for a Thread Local Storage slot which is the sum of the size of each local thread variable. This enables the efficient use of indexing the slots. Internally, these variables are stored within the .tls section of the PE Header, which is part of the .rdata section of the PE Header. Please note the .tls section is actually not a requirement, and is mostly not used by most programs.
The TLS Directory Size indicates the size of the directory, which in turn describes to the loader how the local variables are going to be used by the thread.
We can view this further with CFF Explorer, and then check the TLS Directory.
The StartAddressOfRawData and the EndAddressOfRawData indicate the beginning and ending of the TLS section. The AddressOfIndex is the address of the index used to index into the TLS array of slots. The AddressOfCallbacks is a pointer into a array of TLS Callback addresses.
Going back to the ThreadLocalStoragePointer field mentioned earlier, this is the pointer which is used to differentiate between a variable being local to one thread and then local to another thread. The pointer is initialized with the _tls_index variable, and then given the offset of the threadedint variable so it points to local copy of the TLS type global variable created by the programmer. Note this is performed by the compiler and linker.
Thread Local Storage
Thread Local Storage (Windows)
Win32 Thread Information Block (TIB)
Thread Specific Storage for C/C++ (Paper)
Thread Local Storage Part 1 (5 Parts)
R4ndom’s Tutorial #23: TLS Callbacks
|
OPCFW_CODE
|
"""Tests for integration with teleport."""
import json
import pytest
from pyrfc3339 import parse as rfc3339
from val import Schema, Optional, Or
from val.tp import (
DeserializationError,
SerializationError,
document,
to_val,
from_val)
def test_teleport_struct():
"""A teleport Struct can be converted to an equivalent val schema."""
todo = {
"Struct": {
"required": {"task": "String"},
"optional": {
"priority": "Integer",
"deadline": "DateTime"}}}
todo_val = to_val(todo)
assert todo_val.validates({"task": "Return videotapes"})
assert todo_val.validates({
"task": "Return videotapes",
"deadline": "2015-04-05T14:30:00Z"})
assert not todo_val.validates({})
assert not todo_val.validates({"task": 1})
def test_teleport_array():
"""A teleport Array can be converted to an equivalent val schema."""
todo = {
"Struct": {
"required": {"tasks": {"Array": "String"}},
"optional": {}}}
todo_val = to_val(todo)
assert todo_val.validates({"tasks": ["Return videotapes"]})
assert todo_val.validates({"tasks": []})
assert not todo_val.validates({"tasks": [1]})
assert not todo_val.validates({"tasks": "Return videotapes"})
def test_serialize_to_teleport():
"""Appropriate val schemas can be serialized to teleport Struct schemas."""
todo = Schema({
"task": str,
Optional("priority"): int,
Optional("deadline"): rfc3339})
assert from_val(todo) == {
"Struct": {
"required": {"task": "String"},
"optional": {
"priority": "Integer",
"deadline": "DateTime"}}}
def test_serialize_array():
"""Appropriate val schemas can be serialized to teleport Array schemas."""
todo = Schema({"tasks": [str]})
assert from_val(todo) == {
"Struct": {
"required": {"tasks": {"Array": "String"}},
"optional": {}}}
def test_serialize_map():
"""Appropriate val schemas can be serialized to teleport Map schemas."""
todo = Schema({str: int})
assert from_val(todo) == {"Map": "Integer"}
def test_serialize_unserializables():
"""Val schemas that cannot be serialized raise appropriate exceptions."""
todo = Schema({"task": Or(str, int)})
with pytest.raises(SerializationError):
from_val(todo)
EXPECTED = """{
"Struct": {
"optional": {
"priority": "Integer",
"status": "String"
},
"required": {
"task": "String"
}
}
}
"""
def test_document():
"""Val schemas can be documented as teleport schemas."""
todo = Schema({
"task": str,
Optional("priority"): int,
Optional("status"): str})
output = document(todo)
expected = {
"Struct": {
"optional": {
"priority": "Integer",
"status": "String"},
"required": {"task": "String"}}}
# json.loads because the json.dumps in python2 and 3 are subtly different.
assert json.loads(output) == expected
INTEGERS = {-1, 0, 1, 3123342342349238429834}
DECIMALS = {-1.0, 1.0, 1.1, 1e4}
BOOLEANS = {True, False}
STRINGS = {u"", u"2007-04-05T14:30:00Z", u"Boolean"}
DATETIMES = {u"2007-04-05T14:30:00Z"}
ALL = INTEGERS | DECIMALS | BOOLEANS | STRINGS | DATETIMES
SCHEMA_VALID_NOT_VALID = (
("Integer", INTEGERS, ALL - INTEGERS),
("Decimal", DECIMALS, ALL - (DECIMALS | INTEGERS)),
("Boolean", BOOLEANS, ALL - BOOLEANS),
("String", STRINGS, ALL - STRINGS),
("JSON", ALL, ({1, 2}, object())),
("DateTime", DATETIMES, ALL - DATETIMES),
({"Array": "Integer"}, [[], [1], [2, 3]], ALL),
({"Map": "Integer"}, [{}, {"a": 1}, {"a": 123, "b": -123}], ALL),
({"Struct": {
"required": {"a": "Integer"},
"optional": {"b": "Integer"}}},
[{"a": 1}, {"a": -1, "b": 13}],
list(ALL) + [{"a": 1.0}]),
("Schema",
(u"Integer",
u"Decimal",
u"Boolean",
u"String",
u"JSON",
u"DateTime",
u"Schema",
{"Array": "String"},
{"Map": "String"},
{"Struct": {
"required": {},
"optional": {},
"doc.order": []}}),
ALL - {u"Boolean"}))
@pytest.mark.parametrize("schema,valid,not_valid", SCHEMA_VALID_NOT_VALID)
def test_to_val(schema, valid, not_valid):
"""Converted teleport schemas validate correctly."""
val_schema = to_val(schema)
for value in valid:
assert val_schema.validates(value)
for value in not_valid:
assert not val_schema.validates(value)
@pytest.mark.parametrize("schema", [s[0] for s in SCHEMA_VALID_NOT_VALID])
def test_round_trip(schema):
"""Round tripping teleport schemas does not change them."""
assert from_val(to_val(schema)) == schema
@pytest.mark.parametrize("schema", (
"Wat",
{"foo": "wat"},
{"Struct": {}},
{"Struct": {"optional": {}}},
{"Struct": {"required": {}}},
{"Struct": {
"required": {},
"optional": 12}},
{"Struct": {
"required": "foo",
"optional": {}}},
{"Struct": {
"required": {"a": "Wat"},
"optional": {}}},
{"Struct": {
"required": {},
"optional": {"a": "Wat"}}},
{"Map": "Wat"},
{"Map": 4},
{"Array": "Wat"},
{"Array": 4},
))
def test_detects_broken_schemas(schema):
"""Invalid teleport schemas raise appropriate exceptions."""
with pytest.raises(DeserializationError):
to_val(schema)
|
STACK_EDU
|
Buy remote desktop connection
Wholesale remote desktop connection from China remote desktop connection Wholesalers Directory.Xtralogic Remote Desktop Client can connect using Microsoft Remote Desktop.Quickly and securely connect to devices all over the world without need for a VPN.A list of the best free remote access programs, sometimes called free remote desktop or remote control software.Thin-client hardware devices that run an embedded Windows-based operating.
Remote Desktop Licensing (RD Licensing), formerly Terminal Services Licensing (TS Licensing), is a role service in the Remote Desktop Services server role.Are you looking for a free remote desktop software to access your computer remotely — from another computer or a mobile device.
FAQ - Remote MouseRemote Access for Consumers Remote Access for Small Business: Need Help.You can Online Wholesale terminal server connection,remote access.
Remote Desktop Manager - DownloadRemote Desktop (RD) Connection Manager allows easy working with remote desktops and servers.
How many hours do you lose in your business just trying to get things to work or access to the tools you need.I have to say it is a tool that is very well accepted and liked and well used.FreeRDP is a free implementation of the Remote Desktop Protocol (RDP), released under the Apache license.Volume Licensing brief Licensing Windows Server 2012 R2 Remote Desktop Services November 2013 1 This brief applies to all Microsoft Volume Licensing programs.
Apple Remote Desktop - Official Apple SupportRemote Desktop Support. imPcRemote is a remote support tool that enables instant, secure, and trouble-free connections between remote computers over the internet.TeamViewer is the best free remote desktop software available.Quick access to other essential tools and applications from Team Viewer.
How to Enable Remote Desktop on Windows Server 2012
The Microsoft Remote Desktop Protocol (RDP) provides remote display and input capabilities over network connections for Windows-based applications running on a server.Applies To: Windows Server 2008 R2. Remote Desktop supports two concurrent connections to remotely administer a computer.People often use Remote Desktop to change or install software on computers out of physical reach.
Use the Microsoft Remote Desktop app to connect to a remote PC or virtual apps and desktops made available by your admin.
RDPSoft – Remote Desktop and Terminal Server Software
Remote Desktop Manager is a remote connection and password management platform for IT pros trusted by more than 300 000 users in 130 countries.Professional, simple and secure apps for businesses and nimble teams.Learn how our powerful, easy-to-use and fast remote desktop access, remote support and online collaboration tools are interconnected in a suite that will enable your company to connect and innovate across any distance.
Remote Desktop Protocol (Windows) - msdn.microsoft.comAre you simply running Microsoft Remote Desktop Services and need to assess performance, connection. schedule a demo with a member of the RDPSoft sales.Description With the Microsoft Remote Desktop app, you can connect to a remote PC and your work resources from almost anywhere.Collaborate with up to 300 people at once, and enjoy the flexibility of screen sharing, video, VolP, and phone.
Windows 8 Tip: Use Remote Desktop - SuperSite for WindowsConfigure SQL Server. SQL Server instances do not allow remote connections. compatible with our iOS and Android versions of Remote Desktop Manager.
Remote Desktop Services (Windows) - msdn.microsoft.comAffordable USA, UK, NL, Canada, buy rdp with Credit Card, Buy USA RDP, UK RDP, NL RDP, Encoding.TeamViewer connects people, places and things around the world on the widest array of platforms and technologies.
Configure SQL Server - Remote Desktop ManagerWe use TeamViewer to directly assist the farmers in our municipality with properly managing their water usage.
Remote Desktop Connection/ Remote Administration Tool
Remote Desktop (RD) Connection Manager download
Windows 10 Home Remote Desktop Connection
The Best Free Remote Desktop Apps for Your iPadAccess other computers or allow another user to access your computer securely over the Internet.
With GoToMyPC mobile apps, you can connect over 3G, 4G and Wi-Fi networks.
|
OPCFW_CODE
|
"""
Author: RedFantom
License: GNU GPLv3
Source: This repository
"""
# Based on an idea by Nelson Brochado (https://www.github.com/nbrol/tkinter-kit)
from noval import GetApp
try:
import Tkinter as tk
import ttk
except ImportError:
import tkinter as tk
from tkinter import ttk
import tkinter.font as tk_font
import webbrowser
class LinkLabel(ttk.Label):
"""
A :class:`ttk.Label` that can be clicked to open a link with a default blue color, a purple color when clicked and a bright
blue color when hovering over the Label.
"""
def __init__(self, master=None, **kwargs):
"""
Create a LinkLabel.
:param master: master widget
:param link: link to be opened
:type link: str
:param normal_color: text color when widget is created
:type normal_color: str
:param hover_color: text color when hovering over the widget
:type hover_color: str
:param clicked_color: text color when link is clicked
:type clicked_color: str
:param kwargs: options to be passed on to the :class:`ttk.Label` initializer
"""
link_label_font = tk_font.Font(
name="LinkLabelFont",
family=GetApp().GetDefaultEditorFamily(),
underline=kwargs.get('underline',True),
),
self._cursor = kwargs.pop("cursor", "hand2")
self._link = kwargs.pop("link", "")
self._normal_color = kwargs.pop("normal_color", "#0563c1")
self._hover_color = kwargs.pop("hover_color", "#057bc1")
self._clicked_color = kwargs.pop("clicked_color", "#954f72")
ttk.Label.__init__(self, master, font="LinkLabelFont",**kwargs)
self.config(foreground=self._normal_color)
self.__clicked = False
self.bind("<Button-1>", self.open_link)
self.bind("<Enter>", self._on_enter)
self.bind("<Leave>", self._on_leave)
def __getitem__(self, key):
return self.cget(key)
def __setitem__(self, key, value):
self.configure(**{key: value})
def _on_enter(self, *args):
"""Set the text color to the hover color."""
self.config(foreground=self._hover_color, cursor=self._cursor)
def _on_leave(self, *args):
"""Set the text color to either the normal color when not clicked or the clicked color when clicked."""
if self.__clicked:
self.config(foreground=self._clicked_color)
else:
self.config(foreground=self._normal_color)
self.config(cursor="")
def reset(self):
"""Reset Label to unclicked status if previously clicked."""
self.__clicked = False
self._on_leave()
def open_link(self, *args):
"""Open the link in the web browser."""
if "disabled" not in self.state():
webbrowser.open(self._link)
self.__clicked = True
self._on_leave()
def cget(self, key):
"""
Query widget option.
:param key: option name
:type key: str
:return: value of the option
To get the list of options for this widget, call the method :meth:`~LinkLabel.keys`.
"""
if key is "link":
return self._link
elif key is "hover_color":
return self._hover_color
elif key is "normal_color":
return self._normal_color
elif key is "clicked_color":
return self._clicked_color
else:
return ttk.Label.cget(self, key)
def configure(self, **kwargs):
"""
Configure resources of the widget.
To get the list of options for this widget, call the method :meth:`~LinkLabel.keys`.
See :meth:`~LinkLabel.__init__` for a description of the widget specific option.
"""
self._link = kwargs.pop("link", self._link)
self._hover_color = kwargs.pop("hover_color", self._hover_color)
self._normal_color = kwargs.pop("normal_color", self._normal_color)
self._clicked_color = kwargs.pop("clicked_color", self._clicked_color)
ttk.Label.configure(self, **kwargs)
self._on_leave()
def keys(self):
"""Return a list of all resource names of this widget."""
keys = ttk.Label.keys(self)
keys.extend(["link", "normal_color", "hover_color", "clicked_color"])
return keys
|
STACK_EDU
|
An experimental HTML version of the document can be browsed via the tab controls, above.
The "Vancian to Psionics" project is a conversion of the standard (Vancian) system of magic used by version 3.5 of the Dungeons and Dragons role-playing game over to a different (Psionic) system.
The result is a .pdf with a current page count over 300.
The project has a discussion thread on giantitp.com.
I am one of those who thinks the psionic subsystem is one of the most elegantly designed things in D&D 3.5. In many ways, I think it's how spellcasting should have been done in 3.5 to begin with. And I'd like to use it for more things.
But when I try to replace the core vancian casting with psionics, I run into a couple of problems.
- Options are missing. There are things the core classes can do that simply can't be faithfully replicated with psionics. The psionic system is powerful enough, no doubt about that. It can pretty much always get the job done. But if I wanted to, say, play a necromancer, I'd probably have to reflavor Astral Constructs as my zombies and some Stygian powers as my negative energy effects. Which works, but wasn't really what I was looking for in the first place. Which brings me to the second problem:
- The flavor doesn't always fit. Now, I think it's cool - if viewed on its own terms. The crystals and tattoo theme the books have going on is a perfectly good way to look at magic, but in my experience, it doesn't always live up to people's ideas about what D&D magic "should" look like. DMs still ban psionics for "flavor reasons", or because it doesn't "fit their setting". Which, I must say, I understand perfectly.
So what can I do? Well, I devised a twofold solution to the twofold problem.
- I translated the core spells and classes over to psionic mechanics.
- As for the system itself (as well as some doodads like the basic magic items), I changed every reference to "power", "manifester", "crystals" and "psionic", and so on to... well, their arcane counterparts. In other words, when people say "just reflavor psionics to fit your setting/concept" - I'd like to think I did precisely that.
- There's a spell point variant in Unearthed
Arcana/the SRD, as well as about 500 homebrew versions.
Why, oh why, Ernir, are you trying to invent the
The primary difference between this project and every other spell point variant I have seen is simple - I rewrote the spells so that they take the system into account. No other spell point system I have seen has done this. In addition, this isn't really a new spell point system. This is the well known and researched psionic system, which we all know works. I just added a paint job and a new bell or two.
- Did you fix magic forever?
No, I didn't. My primary goal was to give the vancian flavor a better system, balancing it was not my primary concern. The Wizard class resulting from this is still exorbitantly more powerful than any "mundane" class in WotC D&D, if you ask me. That being said, I did rewrite all the spells, and of course I couldn't resist ironing out some of the kinks I know of. When it came to this, I concentrated my efforts on getting rid of the spells that have truly unbounded/uncontrollable consequences. Wizards can probably still solo most level appropriate encounters, but I hope they'll now run into trouble solo-ing some campaign settings.
- Did you leave some spells out?
Lots of spells weren't precisely reproduced. Most of those, however, I just merged with others. The augment system provided by psionics makes it particularly easy to merge spell chains (that is, spells that are just greater/lesser versions of other spells). In addition, I merged some spells that I knew would just never be taken as a spell known otherwise.
- Did you add a lot of your own content?
Mixing the core spells up with my own eccentricities was not my original intent. Nevertheless, I did eventually decide to make up quite a few spells of my own (and steal a few psionic powers), usually to fill in nearly-empty spell levels. Entirely new content has been delegated to its own section, clearly marked near the bottom of the file.
Did you change the psionic base system at
A few changes were necessary, for example, changing the discipline/subdiscipline arrangement to fit the school/subschool structure of traditional casting. But I did make some less trivial changes.
- Most significantly, save DCs are now calculated using the number of spell points (power points) spent on the spell, rather than the level of the spell (power). Also, you can now spend more points on a spell than given in its spell points (power points) line, usually to increase the save DC (in other words, all spells now have a "null augment"). This is mostly because adding "Augment: For every 2 additional spell points you spend, this spell's save DC increases by 1." to every single thing was starting to look really ugly.
- I added a "Polymorph" subschool to the transmutation school to contain the mechanics of my new Polymorph-ish spells.
- I added a "minion" type of spells to put a cap on the number of will-less minions any given caster can control.
- Are you done working on this?
No. Next up is to update more magic items, and hack on some epic support. Maybe make some more base classes of my own! I'll be here for a while.
- What are you working on right now?
New base classes, and trying to find the courage needed to tackle Epic.
Can I contribute?
Yes, you can! Feedback in the giantitp.com comment thread is always appreciated, much of it has already made its way into the document one way or another. If you are daring (and have the tiniest bit of LaTeX know-how), you can head over to Github (see to the right) and have a peek at the source, too.
|
OPCFW_CODE
|
- Drop-in capability for any Laravel 5.2+ project with an established User model.
- Image cropping for featured images during upload process.
- Clean, Medium-inspired interface.
- Dynamic RSS feed.
- Dynamic blog sitemap generation.
(More features planned, see
Milestones section below.)
So why create another blog package with so many others out there? After all, blogs (or to-do lists) are the hello world projects of the day. The TLDR summary is that I was not able to find a single package out there that met the following criteria:
- Drop-in install into any Laravel app minimal or no special configuration.
- Have a minimalistic interface, without cumbersome admin panels.
- Function and feel similar to Medium: clean, to the point, putting content first.
- Not a static page generator. (Yes, many see that as a benefit, I haven't seen it as one -- yet.)
There are one or two nice packages out there, but either their code is unnecessarily complex, so buggy that they can't be installed, or require use of tedious admin pages.
I was looking for something simpler, yet just as functional. This project is the evolution of that (granted, somewhat opinionated) desire. I hope others like me will find this package useful.
If you do try it, please send feedback as to how you're using it, where you experience friction, and what features (missing or implemented) are preventing you from using it.
- Bootstrap 4 (currently in alpha 2)
- jQuery 1.8+
- FontAwesome 4.6+
The above-listed libraries should already be included in your layout view, so that they are usable by Laravel Weblog.
- Install the package via composer:
composer require genealabs/laravel-weblog
- Add the service provider to your
// $providers = [ GeneaLabs\LaravelWeblog\Providers\LaravelWeblog::class, // ];
- Run migrations:
php artisan weblog:migrate
- Publish the required assets:
php artisan weblog:publish --assets
By default Laravel Weblog will run with the follow settings:
- Main blog URL:
- RSS Feed URL:
- Blog Sitemap URL
- App's User model: detect if either
config('auth.providers.users.model')exists and use it.
- Layout view:
If any of these don't fit your needs, perform the next two steps:
- Publish the configuration file:
php artisan weblog:publish --config
- Make any desired changes to the paths and layout file options in
Naturally the default layout won't necessarily be completely to your liking. The views can be overriden with your own customized versions. To publish and customize the views, perform the next two steps:
- Publish the view files:
php artisan weblog:publish --views
- Edit the views as desired in
- Create a medium-style editor.
- Add an image uploader and manage deletion.
- Add a featured image to blog post.
- Crop featured image to social network standard sizes.
- Create dynamic RSS feed.
- Create dynamic sitemap.
- Ability to tag posts.
- Ability to assign a post to a category.
- Ability to publish in the future.
- Format and structure codebase -- allow for overriding.
- Implement final design tweaks -- base on Bootstrap 3.x.
- Ensure Laravel 5.1 LTS (and therefore PHP 5.6.x) compatibility.
- Add live-blogging functionality using Laravel Echo (Laravel 5.3 and up).
- Add Bootstrap 4 compatibility.
- Add email subscription functionality (newsletter signup form).
- Add social login options (via socialite) that only get triggered when needed.
- Add social like button (heart) that can share the article on the liker's social networks.
- Post to Twitter on publish (for future dates as well).
- Post to Facebook on publish.
- Post to Apple News on publish.
- Create Facebook Instant Articles feed.
- Publish to Medium as articles (and categories as Publications?)
- Integrate code blocks with GitHub Gist.
- Add ConstantContact integration for newsletter signup form.
- Add MailChimp integration for newsletter signup form.
|
OPCFW_CODE
|
When I originally made my blog back in 2018, I used a custom blogging software that I made myself. Although suitable, I never got around to finishing it thus decided to move to WordPress (“WP”) instead. One of the major parts of moving over to WP would be having a Recent Posts section on my non-WP section of laimmckenzie.com, like below.
Why? Most users visit the homepage of a website before any other part of the site, depending on their purpose of being there. A major part of moving to WP meant that I was putting my blog on a different domain – having it on the same domain under a directory could get messy very quickly, it also looks nicer in my opinion.
Ok cool, now the boring part is over. Here’s how to actually do it. When you create a WP blog it creates a new database on your MySQL Server, with a bunch of tables in it. This is the same as every other Database you’ll ever look at it, it’s just super messy. The way I’m going to show is this is using PDO, but you can do this any way you like really, the process is the same.
Once you have the basic database connection, you’ll need to start connecting your chosen page to that database. Create a file called index.php and copy in the below, this will link the index page up to the connection file and function file that is yet to be created.
The above is a basic idea of the integration required, the conn.php file is the connection to the database, the funcwp.php file is the functions that you will call on the pages to get the actual data out of the database – both of these files are required for every page that you want WP content on.
Now that we have the basic page set up, we need to create the funcwp.php file. In this example, I will be pulling the 5 most recent posts from the database, with their status as published – this means it will not include drafts or posts published as private.
The table that stores all of your posts on WP is called
wpposts. The columns that we will be using are
post_date. A full list of columns for
wpposts can be found here, however the above should do for most situations.
At this point you should have three files – conn.php, funcwp.php and index.php – at the moment none of your data is actually going to be shown, a simple foreach loop will allow you to pull the data from the function and output it how you like, an example of this is below.
That’s pretty much it. Assuming your database connection is valid, and you have some posts in your
wpposts table that match the criteria chosen, if you load up index.php on your server you should be able to see your most recent 5 posts outputted in the page.
|
OPCFW_CODE
|
Prestantiousnovel Fey Evolution Merchant – Chapter 322 – Celestial 1 quirky disturbed share-p1
Novel–Fey Evolution Merchant–Fey Evolution Merchant
Chapter 322 – Celestial 1 prickly rapid
Lin Yuan could not support but help remind him, “Uncle Hu, it’s latter. Undertake it the future. Your wellbeing is important.”
Profession Ranking: C-Rank
Just after informing Tian Ningning, Lin Yuan matched having an opponent over the Celestial Stairway as always.
As for the intense interpretation, it turned out while he obtained discovered that the Mother of Bloodbath and Almost endless Summer ended up Suzerain/Myth Breed feys.
Right after educating Tian Ningning, Lin Yuan matched with the opponent for the Celestial Stairway as always.
Job Rate: C-Rank
i swear i won’t bother you again reddit
Morbius (Bronze X/Tale)
Exceptional Fulfillment: Instantly to Celestial Stairway (No getting rid of streak from Superstar Tower to Celestial Stairway)
(Superstar Tower) Duels: 100, Victories: 100, Deficits: , Top Floorboards: 100
Regardless if Lin Yuan got a Celestial Stairway’s distinctive t.i.tle, which might increase the Celestial Stairway duels’ time reduce, he felt there is no requirement to employ this privilege.
Even if Lin Yuan enjoyed a Celestial Stairway’s unique t.i.tle, that could increase the Celestial Stairway duels’ time minimize, he noticed there were no requirement to take advantage of this opportunity.
Acid Deterioration Queen Bee (Yellow gold I/Star)
Lin Yuan developed to create his faction. He possessed a powerhouse, Almost endless Summertime, but he did not have the right strategist. If Wen Yu had the capability to deploy solutions, prepare, and perform, it may help Lin Yuan get rid of numerous concerns.
One’s entire body was the investment capital to do all the things. If one’s body system collapsed, then τηευ may have nothing at all.
When it comes to powerful which means, it was subsequently as he acquired found out that the Mother of Bloodbath and Never-ending The summer months were definitely Suzerain/Delusion Particular breed of dog feys.
When Lin Yuan improved feys for many days and times, he would even make sure that he got five a long time of rest on a daily basis.
Right after taking part in this Conflict Flag video game, all people returned on their suites. At that moment, Lin Yuan discovered that Hu Quan acquired yet to go back to his place to relax. As an alternative, he started to carefully carve those good, absolutely jade-textured wood fragments within the living room.
i didn’t notice that meaning
As for the intense that means, it absolutely was as he got found that the Mother of Bloodbath and Endless Summer have been Suzerain/Belief Particular breed of dog feys.
Celebrity Tower Surface: Celestial Stairway 1-Superstar
Twilight Starbird (Silver I/Dream I)
When Lin Yuan superior feys for many days and night time, he would even guarantee that he obtained five a long time of sleep everyday.
As Hu Quan mentioned this, his sight betrayed warmness and significant meaning. The warmth was because Hu Quan has been by himself for so many years and had been used to solitude as a result of his career.
Even though Chimey always brought Guru a hassle, and Master usually simply had to help Chimey resolve a lot of complications, there was no denying that Chimey was the main loved one to Master on earth besides Lin Yuan.
Lin Yuan given back to his room and unveiled Guru out of the Mindset Fasten spatial sector. He wished to free up Wizard throughout the supper to have with all people, but Guru wanted to stick with Chimey within the Soul Fasten spatial zone.
(Celestial Stairway Marketing and advertising Duel) Duels: 2, Victories: 2, Failures:
He originally wished to proceed to the Celestial Stairway corresponding as always, but he instantly thought about Tian Ningning.
Hu Quan responded that has a teeth, “I’m processing the fragment condition I need to make your Mom of Bloodbath’s and Countless Summer’s stats on the attribute walls primary. By doing this, it might all be arranged about the element wall tomorrow.”
Lin Yuan actually believed he would be equalled which has a rough challenger this period, but his rival only possessed two Rare metal IV/Exclusive feys.
|
OPCFW_CODE
|
What can I download for Windows Mobile 6?
Microsoft Download Manager is free and available for download now. The Windows Mobile 6 SDK Refresh adds documentation, sample code, header and library files, emulator images and tools to Visual Studio that let you build applications for Windows Mobile 6. Note: There are multiple files available for this download.
How to install Windows Mobile 6 professional SDK?
To start the installation immediately, click Run. To save the download to your computer for installation at a later time, click Save. To cancel the installation, click Cancel. Windows Mobile 6 Professional SDK Refresh does not contain the Windows Mobile Standard SDK Refresh and you will need to download both to target both platforms.
What can I do with CORE Media Player?
Speaking of options, The Core Media Player has a very rich settings menu you can use to customize many of the application’s its features, regardless if we’re talking about the interface or the way it handles multimedia files.
How to uninstall Windows Mobile 6 SDK refresh?
Uninstall previous versions of the Windows Mobile 6 SDK before installing the SDK Refresh. To start the installation immediately, click Run. To save the download to your computer for installation at a later time, click Save. To cancel the installation, click Cancel.
Are there any free apps for Pocket PC?
Freeware Pocket PC. Free Software and Game Downloads for Windows Mobile and Windows Phone 7 GPS tracking, realtime statistics, logs and social share ! Finger-friendly shopping list software.
Which is the best Windows Mobile Pocket PC?
Released in August 2006, the Fujitsu Siemens Pocket LOOX T830 quickly became one of the most well-received Pocket PCs with the Windows Mobile 5.0 operating system. Its main attraction was a GPS module with a highly sensitive SiRFstarIII chip that guaranteed excellent signal reception even in densely populated urban areas.
Is there a SDK for Windows Mobile 6 professional?
Windows Mobile 6 Professional SDK Refresh does not contain the Windows Mobile Standard SDK Refresh and you will need to download both to target both platforms. Windows Mobile 6 Professional SDK Refresh contains Windows Mobile 6 Professional emulator images and one Windows Mobile 6 Classic emulator image.
Are there any free mobile apps for PC?
BlueStacks is free to download and lets you run your apps and games on your pc without draining your phone’s battery. You can transfer files between your mobile and pc. BlueStacks also enables you to download apps directly on your computer. You can get it here: http://www.bluestacks.com/ 2. Andy
How to download MOBILedit from the App Store?
Pro (on App Store) MOBILedit! Lite (on App Store) Enter activation key and get download link to proper product. Device drivers necessary to connect a phone to Windows and start using our products. Universal phone driver suitable for most Android phones.
Is the new Windows Phone 7 compatible with Windows Mobile 6.x?
With all the buzz about Microsoft’s new Windows Phone 7, there’s one group that may feel left in the cold: those with older Windows Mobile 6.x devices. Here’s some ways you can keep your now-obsolete device a little more viable and useful today.
How to install Windows operating system on Android phone?
By selecting the language, windows driver download will automatically start. When the download process has completed, click on the “Install” button. You have the option to on “Remove Android”.
|
OPCFW_CODE
|
iOS7 - what kind of In-App purchase is this?
The IAP never expires. It is linked to a UUID in a KeyChain. The UUID is used in a database and other places as an identifer, and is critical to functionality.
If they upgrade to a newer iPhone and reuse the Apple ID, then the IAP follows them. The UUID shouldn't be changing in this case. Everything is cool so far.
But I don't want the IAP to be shared across multiple devices sharing the same Apple ID. I want them to pay for the IAP because every new device with the IAP represents a cost to me.
So I'm at a dilemma on how to classify it:
It isn't a consumable because you never need more than one and you don't use it up.
It could be a non-consumable, however, those need to be restored based on Apple ID. Here I get screwed with the Restore Purchases requirement. Basically, any Joe Schmoe can give out his Apple ID to his friends, and everybody gets the IAP for free. I don't want the IAP transferred to multiple devices.
It could be a non-renewing subscription, but it never expires, so they never need to add additional subscriptions. Can I specify the subscription lasts for a very long time (like 20 years) and limit them to purchasing one? The Apple guidelines aren't very specific on this.
It's not an auto-renewable subscription for multiple reasons detailed above.
It is a non-consumable. If you expect the Apple ID will be used across multiple devices, just price your IAP accordingly to account for this.
Thank you. Apple really needs to let us use the UDID. It would solve a lot of problems!
To be honest, I'm not sure this question is a great fit here because it's not a programming question and nobody but Apple's app review team will be able to give you a definite answer. I do know that subscriptions need to be restorable across multiple devices, so I don't know if that helps you out. The only non-restorable purchase type you can have is a consumable.
I am also not sure that your logic quite works - let's say you store the UUID in the keychain. How are you deriving it so that it's locked to the SIM? You don't have access to the IMEI or anything that uniquely identifies the SIM card on iOS.
My understanding is that non-renewing subscriptions don't need to be restored. See http://support.apple.com/kb/ht4009 "Redownload an In-App Purchase". Maybe my terminology isn't right, but when I upgraded to a new phone, I just transferred the SIM, and the UUID was moved, too. Once I installed the App, I got everything back.
It is nothing to do with the SIM. It is to do with you restoring a backup of the phone. If you restore a backup then application data is restored - which includes the data in the keychain. You could restore that same backup image to 3 phones and all would have the same UUID
@Paulw11 - Do you have a link or reference that the SecItems are backed up?
You have proved it when you moved to a new phone - the item was still in the key chain. It certainly didn't come from the SIM because your app will work on a iPod or an iPad without 3G
Thanks, have removed erroneous reference to SIM in OP.
You want to restrict the use to one device. That means it must be a "consumable IAP" according to Apple guidelines (note - non-renewing subscriptions won't work - they need to be copied to all devices owned by the user). So make it consumable but make it easy - sell the product as "5000 uses". Each time the user uses the function or the App, charge them one 'use'. This also has the advantage that a heavy user may willingly pay you twice.
And…you can use bluetooth (MKSession) or iCloud to transfer all remaining uses from one device to another device to solve the problem of a user purchasing a new device.
|
STACK_EXCHANGE
|
NODWELL keyword has no affect on roaming Geofences
Describe the bug
NODWELL keyword has no affect on roaming Geofences
To Reproduce
Steps to reproduce the behavior:
Move 2 object's points simultaneously and multiple notifications are generated
Expected behavior
Only 1 Notification should happen.
Additional context
Documentation seams to be unclear. Also how to detect only "enter" not exit
I'm unable to reproduce this issue.
Without NODWELL
In terminal one, I created a roaming geofence named myfence geofence that monitored the people collection for when points were within 5,000 meters of each other.
tile38> SETCHAN myfence NEARBY people DISTANCE FENCE ROAM people * 5000
tile38> SUBSCRIBE myfence
{"ok":true,"command":"subscribe","channel":"chan1","num":1,"elapsed":"4.569µs"}
In terminal two, I added two points that are +10,000 meters.
tile38> SET people p1 POINT 33.394 -111.908
tile38> SET people p2 POINT 33.516 -111.952
I then moved p2 to be "nearby" p1 (within 5,000 meters).
tile38> SET people p1 POINT 33.414 -111.948
The notification appeared in terminal 1.
{"command":"set","group":"60636c131aa670fa16bb427c","detect":"roam","hook":"myfence","key":"people","time":"2021-03-30T11:21:07.543336-07:00","id":"p2","object":{"type":"Point","coordinates":[-111.948,33.414]},"nearby":{"key":"people","id":"p1","object":{"type":"Point","coordinates":[-111.908,33.394]},"meters":4328.112}}
You can see it has a "nearby" member that indicates that the "p2" is now nearby "p1".
Then, in terminal 2, I moved p2 again, but still kept it within the 5000 meters.
tile38> SET people p2 POINT 33.421 -111.950
And again, in terminal 1, a notification appeared.
{"command":"set","group":"60636c981aa670fa16bb427d","detect":"roam","hook":"myfence","key":"people","time":"2021-03-30T11:23:31.909439-07:00","id":"p2","object":{"type":"Point","coordinates":[-111.95,33.421]},"nearby":{"key":"people","id":"p1","object":{"type":"Point","coordinates":[-111.908,33.394]},"meters":4920.604}}
Finally, I moved p2 to its original position
tile38> SET people p2 POINT 33.516 -111.952
And a new notification happened.
{"command":"set","group":"60636c981aa670fa16bb427d","detect":"roam","hook":"myfence","key":"people","time":"2021-03-30T11:25:51.712169-07:00","id":"p2","object":{"type":"Point","coordinates":[-111.952,33.516]},"faraway":{"key":"people","id":"p1","object":{"type":"Point","coordinates":[-111.908,33.394]},"meters":14166.611}}
You can see there is the "faraway" member that indicates that the "p2" is now further away than 5,000 meters.
With NODWELL
Follow all of the same steps except for the SETCHAN add the NODWELL
tile38> SETCHAN myfence NEARBY people DISTANCE FENCE NODWELL ROAM people * 5000
You will that that the third notification does not happen because the NODWELL tells the geofence that you do not want multiple "nearby" events in a row for the same two object.
Also how to detect only "enter" not exit
For roaming geofences, there's "nearby" and "faraway" instead of "enter" or "exit". There's no way to turn off "faraway" events.
|
GITHUB_ARCHIVE
|
Build Youtube in React 33: reducer for watch video component
Let’s continue where we left off and start with creating the
reducer for our
2 Adding watch details reducer
2.1. Let’s talk about state
It is important to understand where we store the more detailed information on the video we are about to fetch. It has to live somewhere in our global
When we have more information on a video, we will just add / update an entry in our
videos.byId “dictionary”. So the new state might look like this:
Have a look at line
10. We simple replace or update the video associated with a particular
id. Now we also have the description for the video associated with id
Now that we are clear on how to store the data, we also know that we must update our videos
2.2. Endpoint response format
Let’s first talk about how the endpoint’s answer is structured. Since we are using the
all effect, we expect to get an array of responses back. Right now we only perform one request, so we expect the array to only contain one element.
A typical response for the call we are about to make looks like this.
Remember that when doing a request with
redux saga, we also get a bunch of other stuff in our response we don’t really need. Generally, a response consists of three parts.
The result contains the response in the form of a
JSON object. This is what we will be working with. The entire response string is stored inside
body. We also get information on the
Now we don’t care about the
headers and the
body for now and will just be working with the
I’m just bringing this up so that the
reducer is easier to understand.
2.3. Reacting to watch details action in video reducer
We add a new
case statement to our
switch and now react to the
Let’s have a look at the
reduceWatchDetails function which does the actual work. We will first look at a quick and dirty solution and then do it properly.
We know that the
items array in the responses array will contain the details about a particular video.
So we could simply get the first element in the responses array. That’s the response we’re looking for. We also know that the
items array of the response (as shown above) will just contain one element because the
video id is unique. So we could simply take the first element in the items array and insert that into our videos.byId object.
This sort of works, but that’s not the quality we’re striving for. This parsing mechanism is very fragile. If we change the action later on and the first response is no longer the response with the
video details, our entire
reducer crashes and burns.
So let’s do this properly.
2.4. Robust watch details reducer
Instead of just taking the first response and hard-code an index, we just use the Array.prototype.find function.
The find function will return the first item that matches the condition we specify. We are searching for a response of kind
Youtube returns the kind for each response it sends back. Once we have found the right response, we just pick the first video in the
items array. We know that this response will exactly contain one element because we explicitly ask for a video with a specific id.
Now you might argue, well what happens when I pass an
id which is actually not a
video id. Well, in this case, the promise gets rejected and we dispatch a
WATCH_DETAILS_FAILURE action instead of a
WATCH_DETAILS_SUCCESS action. So we basically don’t end up in the
2.5. Youtube video response types
Well, there is actually one more thing we could do. We have a hard coded
string in our
find expression. This is not best-practise. Therefore let’s create a file where we define the different kind of responses we are working with. Like so we are more flexible.
- Create a new file called
Add the following code:
Now we can update our
3 Fetching video details in Watch component
3.1. Wiring up watch component to action creators
Let’s test out our new reducer. For this, we need to dispatch the
WATCH_DETAILS_REQUEST action inside
Watch. Let’s first pull the dependencies from our global
Redux state in by using
First of all, we pull the
youtubeLibraryLoaded from our
Redux state in. If the library hasn’t loaded yet, we cannot perform any requests.
We also pull in the
watchDetails action into our local state by using the
Finally we export our new component with the react-redux connect helper. Note that we are are also making use of the
withRouter helper because Redux and React router sometimes get in their way. We already covered this in the previous tutorial.
Let’s add the needed code inside the component itself.
fetchWatchContent function extracts the
video id from the current
URL. After that it starts the fetching process by calling
We make use of our data fetching logic in
componentDidUpdate. Note that we only attempt to fetch data when the we already loaded the
Youtube client library.
3.2. Updating App component
Remember that we are now using a
default export inside our
Watch component. Therefore, we need to update the
import statement in our
4 Wrap Up
Go to the
Home feed and click one video thumbnail. After that you should be redirected to
Let’s check out our store with the
Redux Dev tools extension.
Click on the image to see it in high resolution.
Nice, the video details are fetched and stored in our Redux store.
In the next tutorial, we will start making use of this data and actually show it in the
You can find the entire code on Github.Follow @productioncoder
Please follow me on Twitter @productioncoder to stay up-to-date. I’m happy to receive feedback of any kind.
If you are interested in more high-quality production-ready content, please enter your email below.
|
OPCFW_CODE
|
Created Thu, 23 Jun 2011 06:26:58 +0000 by Mr_Fixit
Thu, 23 Jun 2011 06:26:58 +0000
Anyone use the SRF08? I have been using it in single echo mode but really want to use its multi-echo capabilities, however I have been unable to get any results that are remotely useful.
Here is a sample of data I am receiving when its pointing to the ceiling 6" above. Command is 0x50, so data is in inches: Multi Echo Starting Echo:0 = 32 Echo:1 = 57 Echo:2 = 82 Echo:3 = 107 Echo:4 = -123 Echo:5 = -98 Echo:6 = -73 Echo:7 = -48 Echo:8 = -23 Echo:9 = 259 Echo:10 = 284 Echo:11 = 309 Echo:12 = 334 Echo:13 = 359 Echo:14 = 129 Echo:15 = 154
Playing around a bit, I have found that the first few echos had some accuracy, but the others dont even when I setup multiple targets.
So, anyone been able to write some code to handle multiple echos? Willing to share? Single echo is ok, but severely hindering.
Thu, 23 Jun 2011 19:27:35 +0000
ceiling 6" above? Kinda close!
How about experiments out in the open, multiple targets?
You might be bouncing around multiple walls...
I have a 6- SRF08 array on the new 'bot I'm working on.
Fri, 24 Jun 2011 03:55:21 +0000
6 feet. :) Yeah, gonna work on it outside and figure this out.
Your bot looks pretty cool! I still have some mechanical assembly/fabrication to do on my bot before I can snap a shot, but you may be able to answer another question for me since you are using the same sonar modules:
What angle should I place two SRF08's so that I get some overlap in front of my robot? My bot is 14 inches wide and would like to be able to determine if objects are directly in front of the bot. If the sonar envelopes overlap in this region then en object in that area should return roughly the same echo on both devices. This will also enable to let me determine when an object is to the left or the right of the bot's path and only use two devices.
Fri, 24 Jun 2011 17:13:31 +0000
As I recall, six (6) SRF08 rangers nicely covered 360 degrees, with little if any overlap. So I'd put them at less then 45 degrees of beam separation. You might actually want 30 degrees.
My "hex" motif and 5" diameter aluminum disks naturally lent to using six sensors. Even a little room left for a Sharp IR sensor next to each (next rev?).
Sat, 25 Jun 2011 05:47:45 +0000
Cool! I am going to toy around this weekend and see exactly what the detection window is at various distances. The chart given is a bit too general. Also going to experiment with limiting the beam width based on information provided on the site for Devantech. This should also help reduce the problem of stray echos.
|
OPCFW_CODE
|
First time posting, I asked about this problem on some other sites but to no avail. I recently built a new PC that was somewhat inspired by Wendell’s Linus build video with the x399 aorus xtreme but is used the 2920x. After putting it all together when I powered it on it failed to boot with the motherboard debug LED showing code 16 and the CPU LED to the bottom left of the socket being lit up red.
The only thing the manual says is that the CPU LED indicates a problem with the processor and the error code 16 is ‘reserved’. I assumed either the mobo or CPU was dead so I RMAed the CPU. After I got the new one the same error occurred so I got a new mobo. The error still occurs.
I tested with a different PSU and RAM just to make sure I wasn’t on the wrong track. The only things I can think of are my PSU isn’t enough watts (850w) or the BIOS needs to be updated to support the new CPUs even though the Gigabyte site says all TR2 CPUs should be supported on all versions. I’m out of ideas so anything new to try would be great.
you could mail your mobo and cpu here and I will test?
did you try the clear cmos button? any chance the psu cable(s) for one or both of the cpu power thingie is messed up? swap cpu power cables around?
Have you tried:
- Ensuring all power connectors are plugged in and secured properly to the motherboard and graphics card
- reseating the CPU ensuring to install tightening the screws as per the defined order?
- reseating the ram, ensuring it is placed in the correct slots as defined in the manual?
Thanks for the replies.
I have tested with 2 different power supplies, both 850w corsairs, and 2 different RAM kits with the same result. I reseated the CPU multiple times with both CPUs and mobos. I hit the clear cmos button as well. I will give it another shot and explicitly check all the CPU power cable and RAM configurations to double check. atm I removed the graphics card from the system because the system never gets far enough to initialize it.
Just retested again and still the same thing. The CPU LED is red as soon as I hit the power so I assume thats the issue but the odds of getting 2 DOA CPUs and mobos is pretty low.
Im just gonna continue the necro a bit. Was this issue ever solved? any insight ? 2 DOA CPUs is rather unlikely at the level of QC the two main companies run at lol
Sorry, my bad. Hoped deleting the post would rebury. I was wrong!
Was saying that updating the bios solved my non-post issue with the Zenith Extreme & 2950x.
Its not a bad necro by most measures LOL. If there is something to contribute and improve the post then by all means just try not to after 3 months dead or more haha…
but yeah id love to know what was wrong
|
OPCFW_CODE
|
MS-access wrong date format when converting field from text to date/time
I have an access database given to me where all the dates are stored in a text
field in the format mm/dd (eg: 3/13/2009 12:20:36 AM)
I want to convert the field to a date/time but access formats it as dd/mm which
makes it so that if the day is bigger then 12 or not the converted date might
be wrong.
Example with the current format when stored as text in the DB:
3/12/2009 11:32:40 PM
3/13/2009 11:32:40 PM
If I simply convert the data type of this field from the design view
of the table from text to date/time date type I get the following:
03/12/2009 11:32:40 PM
13/03/2009 11:32:40 PM
How would I go about fixing the stored values?
I don't care much about the format the dates will be showing in as I'll be able
to easily change how it looks but getting them to convert properly from text to
date/time has proven to be tricky.
Preferable I'd like to fix it from access directly but I can do it from C# if needed.
Thanks.
how are you trying to convert it now?
If this is a local Access application, it uses your system's date time format, so changing your localization settings in Windows to use MM/DD will make Access convert that way, unless this has been overridden somewhere in the app.
Hopefully this is a one time operation the OP is doing. Other wise, you are just about guaranteed that some user is going to change his/her date settings, and break the application.
The usual methods for this problem are: use a non-ambiguous date format, such as dd-mmm-yyyy (e.g., 17-Jun-2009) or use DateSerial. Keep in mind that Jet SQL interprets non-specified dates in US form, i.e., mm/dd/yyyy. This is, of course, disastrous if your local date settings are dd/mm/yyyy.
Format(CDate("3/13/2009 11:32:40 PM"), "mm/dd/yyyy" will give you 03/13/2009
You've got good answers on your immediate problem however you appear to be running your system in dmy format. Thus you should be aware of the following.
SQL statements require that the dates be either completely unambiguous or in mm/dd/yy, or mm/dd/yyyy format. Otherwise Access/Jet will do it's best to interpret the date with unknown results depending on the specific date it is working with. You can't assume that the system you are working on is using those date formats. Thus you should use the logic at the following web page.
Return Dates in US #mm/dd/yyyy# format
http://www.mvps.org/access/datetime/date0005.htm
yyyy-mm-dd is unambiguous for the Access database engine and has the huge advantage of being ISO 8601 standard.
The Import function in Access has decent date-parsing functionality, and will let you specify quite a few different formats. Not sure how to best apply this to something already in Access-- a quick way might be to copy the data to Excel, and then re-import it.
I actually tried that at the beginning but it hasn't worked for me.
|
STACK_EXCHANGE
|
This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
We use the mathematical language of sheaf theory to give a unified treatment of non-locality and contextuality, which generalizes the familiar probability tables used in non-locality theory to cover Kochen-Specker configurations and more. We show that contextuality, and non-locality as a special case, correspond exactly to *obstructions to the existence of global sections*.
Classical constraints come in various forms: first and second class, irreducible and reducible, regular and irregular, all of which will be illustrated. They can lead to severe complications when classical constraints are quantized. An additional complication involves whether one should quantize first and reduce second or vice versa, which may conflict with the axiom that canonical quantization requires Cartesian coordinates.
A family of probability distributions (i.e. a statistical model) is said to be sufficient for another, if there exists a transition matrix transforming the probability distributions in the former to the probability distributions in the latter. The so-called Blackwell-Sherman-Stein Theorem provides necessary and sufficient conditions for one statistical model to be sufficient for another, by comparing their "information values" in a game-theoretical framework. In this talk, I will extend some of these ideas to the quantum case.
Wheeler's delayed choice (WDC) is one of the "standard experiments in foundations". It aims at the puzzle of a photon simultaneously behaving as wave and particle. Bohr-Einstein debate on wave-particle duality prompted the introduction of Bohr's principle of complementarity, ---`.. the study of complementary phenomena demands mutually exclusive experimental arrangements" . In WDC experiment the mutually exclusive setups correspond to the presence or absence of a second beamsplitter in a Mach-Zehnder interferometer (MZI).
This is a geometric tutorial about straight and twisted vectors and forms (ie, de Rham currents) leading to some wild thoughts about the EM field as a *thing*, ie with properties similar to a piece of matter; and to some even wilder thoughts about a metric-free GR.
I provide a reformulation of finite dimensional quantum theory in the circuit framework in terms of mathematical axioms, and a reconstruction of quantum theory from operational postulates. The mathematical axioms for quantum theory are the following: [Axiom 1] Operations correspond to operators. [Axiom 2] Every complete set of positive operators corresponds to a complete set of operations. The following operational postulates are shown to be equivalent to these mathematical axioms: [P1] Definiteness.
Quantum theory can be thought of a noncommutative generalization of classical probability and, from this perspective, it is puzzling that no quantum generalization of conditional probability is in widespread use. In this talk, I discuss one such generalization and show how it can unify the description of ensemble preparations of quantum states, POVM measurements and the description of correlations between quantum systems.
We will analyze different aspects of locality in causal operational probabilistic theories. We will first discuss the notion of local state and local objective information in operational probabilistic theories, and define an operational notion of discord that coincides with quantum discord in the case of quantum theory. Using such notion, we will show that the only theory in which all separable states have null discord is the classical one. We will then analyze locality of transformations, reviewing some general properties of no-signaling channels in causal theories.
|
OPCFW_CODE
|
Websocket behind reverse proxy always has new clientId / wrong userId
Self hosted, behind nginx proxy HTTPS->HTTP, following config:
sudo docker run -p 80:80
-e GRIST_LOG_LEVEL=debug
-e PORT=80
-e APP_HOME_URL=https://mydomain.com
-e APP_STATIC_URL=https://mydomain.com
-e HOME_PORT=share
-e GRIST_DOMAIN=mydomain.com
-e GRIST_SUPPORT_EMAIL=myfowardauthemail
-e DEBUG=1
-e GRIST_PROXY_AUTH_HEADER=myfowardauthemailheader
-e GRIST_FORWARD_AUTH_HEADER=myfowardauthemailheader
-e GRIST_IGNORE_SESSION=true
-e GRIST_ALLOWED_HOSTS='mydomain.com'
-v /home/mydir/persist:/persist:z
-e GRIST_SINGLE_ORG=myorg
-it gristlabs/grist:latest
The proxy send the Upgrade, Connection Upgrade, and Host headers.
I am signed in automatically via the browser, but creating or attemping to view a doc results in 'No view access'. Looking at the log, I see my userId=5 for general operations, but the websocket coming in says:
Client responding to #0 ERROR No view access userId=1, org=myorg, clientId=5fe391985a8aa5cc, counter=4
DocManager.openDoc Authorizer key { urlId: 'cGJVmbQbdS5h8RYc8Xu97P', userId: 1, org: 'myorg' }
There is a grist_core cookie set to the domain as .mydomain.com. After deleting the sessions db and just visiting the home and opening a doc results in this entry:
g-wgw4r4Y3tfQnTVZrdvAiGB|1677424972578.0|{"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"domain":"mydomain.com","path":"/","sameSite":"lax"},"alive":1}
Here's a bit of the logging that happens in this scenario:
/cGJVmbQbdS5h/Untitled-document, language=en-US, platform=MacIntel, userAgent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36, org=myorg, email=myforwardauthemail, userId=5, altSessionId=undefined
2023-02-25 15:15:46.315 - debug: Auth[GET]: mydomain.com /session/access/active customHostSession=, method=GET, host=mydomain.com, path=/session/access/active, org=myorg, email=myforwardauthemail, userId=5, altSessionId=undefined
2023-02-25 15:15:46.321 - debug: Auth[GET]: mydomain.com /session/access/all customHostSession=, method=GET, host=mydomain.com, path=/session/access/all, org=myorg, email=myforwardauthemail, userId=5, altSessionId=undefined
2023-02-25 15:15:46.329 - debug: Auth[GET]: mydomain.com /worker/cGJVmbQbdS5h8RYc8Xu97P customHostSession=, method=GET, host=mydomain.com, path=/worker/cGJVmbQbdS5h8RYc8Xu97P, org=myorg, email=myforwardauthemail, userId=5, altSessionId=undefined
2023-02-25 15:15:46.471 - debug: Auth[GET]: mydomain.com /docs/cGJVmbQbdS5h customHostSession=, method=GET, host=mydomain.com, path=/docs/cGJVmbQbdS5h, org=myorg, email=myforwardauthemail, userId=5, altSessionId=undefined
2023-02-25 15:15:46.611 - info: Comm: Got Websocket connection userId=1, org=myorg, clientId=e31d34222d3a0cc4, counter=2, urlPath=/?clientId=e31d34222d3a0cc4&counter=3&newClient=1&browserSettings=%7B%22timezone%22%3A%22America%2FChicago%22%7D&user=myforwardauthemail, reuseClient=true
2023-02-25 15:15:46.612 - debug: Client sending clientConnect newClient=true, needReload=false, docsClosed=0, missedMessages=undefined, org=myorg, clientId=e31d34222d3a0cc4, counter=3
2023-02-25 15:15:46.673 - info: Client onMessage '{"reqId":0,"method":"openDoc","args":["cGJVmbQbdS5h8RYc8Xu97P","default",null]}' org=myorg, clientId=e31d34222d3a0cc4, counter=3
2023-02-25 15:15:46.674 - debug: DocManager.openDoc Authorizer key { urlId: 'cGJVmbQbdS5h8RYc8Xu97P', userId: 1, org: 'myorg' }
2023-02-25 15:15:46.674 - warn: Client Responding to method openDoc with error: ErrorWithCode: No view access
at assertAccess (/grist/_build/app/server/lib/Authorizer.js:540:19)
at DocManager.openDoc (/grist/_build/app/server/lib/DocManager.js:265:43)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Client._onMessage (/grist/_build/app/server/lib/Client.js:435:58) {
code: 'AUTH_NO_VIEW',
details: [Object]
} AUTH_NO_VIEW userId=1, org=myorg, clientId=e31d34222d3a0cc4, counter=3
Logically it seems like the websocket session (or cookie?) isn't being ignored, or is being ignored and tagged as a new session. Not sure how to solve this; I suppose if I could pass an upstream session as a header, but I kind of thought the point of "IGNORE_SESSION" was to avoid this.
Verrrry close to this finally working, so fingers crossed!
Actually solved this, didn't realize the FORWARD_AUTH_HEADER wasn't being sent on that resource path (/dw). Swimming now! :)
Glad you found a fix @rtgoodwin
|
GITHUB_ARCHIVE
|
Org-mode: scheduling time to work on tasks
I'm still trying to wrap my head around how the agenda can work, and I'd like to get a few other people's approaches to it. What I'd like is to be able to take a task, say writing a paper. I know that will take about 6-8 hours, and I'm not going to write it in one sitting. So, I need to set aside three or four 2 hour blocks to work on it.
According to the manual, Deadlines are for the time something must be done by.
DEADLINE: <2016-03-16 Wed>
Simple enough. Scheduling allows you to set a time to start, so I can schedule the paper two weeks before it's due to start it.
SCHEDULED: <2016-03-02>
So far so good, but it hasn't actually given me specific times to work on the task. This question suggests scheduling an item, and letting it appear on the agenda until it's done, but that still leaves the problem that a large task can only be scheduled once.
I could assign several timestamps to it, one for every day I want to work on it.
<2016-03-04
<2016-03-06 7pm>
<2016-03-08>
This seems to be a popular solution, but the agenda and todo views don't seem to have an easy way to manipulate timestamps, as they do for deadlines and scheduled dates, with C-c C-d and C-c C-s.
The other option is to break up the task into a number of subtasks (e.g., research paper, write body, write conclusion, write intro), and schedule those individually. But, there seem to be many tasks that breaking it up just wouldn't make sense, or where the smallest subtask will still take several hours.
So, have I misunderstood how to schedule things? Is there a simple way to manipulate timestamps in agenda view, or do I just need to plan my projects out to the point every scheduled item can be done in a few hours? What approaches are have worked for you?
I have grown so accustomed to having time-stamps in my *Org Agenda* buffer that I forgot that it is not a built-in feature -- it's something that I created in my custom setup. For me, I just use Shift+up and Shift-down on a time-stamp. At some point, somebody ought to submit a feature request to have an option to include time-stamps in the agenda view. One option would be to just jump to the master todo-list, then use Shift+up / Shift+down on a time-stamp and jump back to a refreshed *Org Agenda* buffer. Not sure if that is any better than your C-c C-d and C-c C-s though.
Consider programmatically organizing your master todo-list with things like org-sort-entries and collapsing what you don't want to see and using sparse-trees. I use the agenda buffer when a targeted search is needed, and I use a calendar view to show me deadlines, birthdays and holidays. So, I don't really use org-agenda-list that much any more, but I do use org-tags-view and org-search-view all the time. They are on speed-dial to my custom keyboard global shortcuts.
@lawlist what have you bound Shift+up to? C-c C-d allows adding a deadline to an entry in the agenda views, not particularly useful since I typically add deadlines when I create a task.
@lawlist, Those functions do seem useful too. I'd say they could easily be an answer. It looks like a complete approach for working with org files.
Here are some built-in functions -- org-mode version 8.2.10 and perhaps earlier (but untested) -- for changing the date in org-agenda-mode that the original poster may find useful -- they are available in the drop-down menu or certain mouse pop-up context menus, and some of them have keyboard shortcuts already defined:
"Change Date +1 day" org-agenda-date-later
"Change Date -1 day" org-agenda-date-earlier
"Change Time +1 hour" org-agenda-do-date-later -- "C-u S-right"
"Change Time -1 hour" org-agenda-do-date-earlier --"C-u S-left"
"Change Time + min" org-agenda-date-later -- "C-u C-u S-right"
"Change Time - min" org-agenda-date-earlier -- "C-u C-u S-left"
"Change Date to ..." org-agenda-date-prompt
See the section of the manual: http://orgmode.org/manual/Agenda-commands.html
Another option would be to jump to the master org-mode file, and change relevant portions of the time-stamp with Shift+up / Shift+down / Shift+right / Shift+left -- and then redo the agenda buffer. For more information on those features type:
M-x describe-function RET org-shiftup RET
M-x describe-function RET org-shiftdown RET
M-x describe-function RET org-shiftright RET
M-x describe-function RET org-shiftleft RET
The three main functions used to create org-agenda buffers are org-agenda-list; org-tags-view; and org-search-view. Consider putting them on speed-dial with global keyboard shortcuts.
Consider programmatically organizing the master org-mode file with functions such as org-sort-entries, and using the outline collapse features to make everything more readable. See also the section of the manual relating to creating sparstrees (a collapsed outline based on a search criteria): http://orgmode.org/manual/Sparse-trees.html
Scheduled tasks can have repeaters:
* TODO Write paper
SCHEDULED: <2016-03-17 18:00 +2d>
In this example, "Write paper" will appear on the agenda every 2 days. Each time it appears, you do it and mark it as DONE. The schedule will then advance by 2 days, hiding the completed task from today's agenda. This only works, of course, if you work on it periodically.
|
STACK_EXCHANGE
|
A Brief About Microsoft Certified: Azure AI Engineer – Associate Exam AI-100
Artificial Intelligence (AI) is no more a buzzword in the tech world; rather IT companies across the globe are now putting effort into AI as the paradigm of next-generation computing depends mostly on this aspect. AI – 100 is an associate-level certification from Microsoft, which recognizes the capacity of an individual to work with subject matters of AI like machine learning, NLP (Natural Language Processing), cognitive services, speech and face recognition, knowledge mining and so on. AI – 100 certified professionals are considered qualified IT professionals to work with the Azure AI portfolio. In order to become a Microsoft certified Azure AI professional, you have to appear for the exam ‘AI-100: Designing and Implementing an Azure AI Solution’ and after successful completion, you will earn Azure AI Engineer Associate certification.
Skills Measure By AI – 100 Certification
The course curriculum of AI – 100 is designed to deliver the skill-set required for AI-based applications and operations used in IT business process management. Content of AI – 100 courses is segmented into three different sub-topics, which are:
· Analyze The Requirement Of The Solution
(Weight: 25% to 30%)
I. Be familiar with the cognitive services of the Application Programming Interfaces (APIs).
II. Measure the security protocols of the applications, data and processes.
III. Be able to recommend the appropriate applications, storage mechanisms and the services.
IV. Finding the requirements for automation.
· Design And Develop The Solution For AI
(Weight: 40% to 45%)
I. Develop a plan for the solution, which incorporate one or more pipelines.
II. Develop a plan for the solution using cognitive solution.
III. Develop a plan for the solution by implementing the BOT Framework.
IV. Develop a computer based platform in order to assist the solution.
V. Be able to create an appropriate plan for compliance, security, data governance and integrity.
· Deploy And Monitor The Solutions For AI
(Weight: 25% to 30%)
I. Deploy an AI based workflow.
II. Incorporate AI based applications and services.
III. Deploy and monitor the processes for the AI based applications.
Learning Path For AI – 100 Certifications:
· Name of the exam: Designing and Implementing an Azure AI Solution
· Exam code: AI – 100
· Total number of questions: 62
· Total duration of the exam: Three hours and forty minutes
· Total marks of the exam: 1000
· Passing score: 700
· Cost of the exam: £130
Job Roles for AI – 100 certified professional
Usually, AI – 100 certified professionals are working for the posts/positions, mentioned below:
· AI Engineer
· Machine Learning Engineer
· Senior Software Engineer (Applied Machine Learning)
· Senior Data Analytics
· Business Intelligence Engineer
· Senior Software Engineer (Data Engineering)
· AI Developer
· Data Scientist
· Deep Learning Engineer
Average Salary of AI – 100 Certified Professional
On an average, AI Engineer and/or AI – 100 certified professionals are getting paid around £96,132 per annum
Getting Prepared for AI – 100 Exam
Microsoft Learning Platform and Microsoft Docs are two official resource platforms for the candidate of AI – 100 certification exam. Plenty of resources are available here, in these two platforms from where the candidates can find and go through their required resource, in order to ensure a good preparation for the exam. You may also find instructor led training, offered by colleges and universities under the school of ‘Professional Development’. There are many online study groups, from where you can find reference books and resources for AI – 100 certification exams. You may also fine tune your preparation through practice tests for the exam, which are available at different online learning portals.
|
OPCFW_CODE
|
import json
import urllib.request
import plotly.graph_objects as go
from voprov.models.model import VOProvDocument
def prov_to_plotly(data):
"""
Convert a provenance bundle/document into a plotly object.
:param bundle: The provenance bundle/document to be converted.
:type bundle: :class: `ProvBundle`
:returns: :class: `plotly.graph_objects`
"""
if isinstance(bundle, VOProvDocument):
node = []
color_node = []
for e in data['entity']:
node.append(e)
color_node.append("lightgoldenrodyellow")
for edes in data['entityDescription']:
node.append(edes)
color_node.append("orange")
for a in data['agent']:
node.append(a)
color_node.append("yellow")
for act in data['activity']:
node.append(act)
color_node.append("blue")
for b_key, b_value in data['bundle'].items():
if 'description' in b_key:
for b_act in b_value['activityDescription']:
node.append(b_act)
color_node.append("orange")
elif 'configuration' in b_key:
if 'parameter' in b_value:
for b_param in b_value['parameter']:
node.append(b_param)
color_node.append("green")
if 'configFile' in b_value:
for b_param in b_value['configFile']:
node.append(b_param)
color_node.append("green")
source = []
target = []
color_links = []
for att in data['wasAttributedTo'].values():
source.append(node.index(att['prov:agent']))
target.append(node.index(att['prov:entity']))
color_links.append("lightgoldenrodyellow")
for des in data['isDescribedBy'].values():
source.append(node.index(des['voprov:descriptor']))
target.append(node.index(des['voprov:described']))
color_links.append("orange")
for use in data['used'].values():
source.append(node.index(use['prov:entity']))
target.append(node.index(use['prov:activity']))
color_links.append("red")
for gen in data['wasGeneratedBy'].values():
source.append(node.index(gen['prov:activity']))
target.append(node.index(gen['prov:entity']))
color_links.append("green")
for conf in data['wasConfiguredBy'].values():
source.append(node.index(conf['voprov:configurator']))
target.append(node.index(conf['voprov:configured']))
color_links.append("lightgreen")
value = [1 for i in range(len(source))]
fig = go.Figure(data=[go.Sankey(
valueformat=".0f",
# Define nodes
node=dict(
pad=15,
thickness=15,
line=dict(color="black", width=0.5),
label=node,
color=color_node
),
# Add links
link=dict(
source=source,
target=target,
value=value,
color=color_links
)
)])
fig.update_layout(title_text="cellphone",
font_size=10)
return fig
|
STACK_EDU
|
GISjobs.com It All Starts Here
Portland, OR, USA
LISA L. SILVA
4015 SW Pasadena Street
Portland, OR 97219
Master of Arts
GIS Development and Environment
Clark University, Worcester MA
Bachelor of Science
Geography and GIS Portland State University, Portland OR June 2000
CompTIA CTT+ Technical trainer certification -2010
Esri Certified ArcGIS Desktop Associate -2011
Data Science: R Basics -HarvardX 2018
Data Science: Visualization -HarvardX 2018
ArcGIS Pro -ESRI Academy 2018
ArcGIS Online Mobile using Android devices.
Lead Training Instructor for the following Esri GIS Courses: April 2007- February 2012
ArcGIS 1: Introduction to GIS
ArcGIS 2: Essential Workflows
ArcGIS 3: Performing Analysis
Introduction to ArcGIS Sever
ArcGIS Business Analyst
Learning Python for GIS
ArcGIS Spatial Analyst-statistics
Small Business Owner May 2014-Present
Silva Cafι www.silvacafe.com
Owner and manager of a coffee shop and bakery in Southwest Portland.
Used GIS software and statistical data to build a successful business plan.
Calculated cost of goods for each menu item.
Building respectable customer relationships and business connections for five years.
Our customer population has tripled from 2016 to 2018; raising profits by 70% in the last two years.
Assistant Instructor/GIS Lab Assistant, January 2013-June 2014
Portland State University and Portland Community College
Volunteered as a GIS Analyst subject matter expert (SME) for graduate and undergraduate students.
Provided assistance to students using ArcGIS applications and open source GIS Software, such as, Quantum GIS, PostgreSQL, PostGIS (Postgres) as it pertained to image analysis, geodatabase structure building, data management and general spatial analysis.
IT Human Resource Training Specialist February 2012-December 2012
Bonneville Power Administration (BPA)
Developed BPAs ArcGIS Online maps and GIS software trainings.
Authored detailed GIS training workbooks and produced GIS training videos for over 400 BPA employees.
Managed and maintained an inventory of critical company data using ArcMap database structures.
Demonstrated multiple GIS Analysis procedures and GIS models for transmission assets throughout BPA, including knowledge of the electrical utility industry and experience in acquiring right of way and related property interest data.
Delivered GPS field data collection demonstrations, ArcMap editing, managed ESRI ArcSDE Enterprise Geodatabases for MS SQL Server and data managing procedures to the BPA GIS field team. Also worked with outside data sources from RLIS data, Census data, and TIGER data.
Successfully worked with multiple teams during development GIS designs at BPA.
Published BPA GIS software documents and presentations using PowerPoint and Microsoft office software for the internal 2010 SharePoint site.
Esri GIS Course Instructor April 2007- February 2012
Esri Educational Services-Headquarters, Redlands, CA
CompTIA CTT+ Technical trainer certification
Esri training instructor on topics of GIS Desktop application and GIS database structures.
Advisor and editor of ArcGIS desktop classroom materials.
Continuous updated studies of data analysis techniques, multi user geodatabase management, Python coding, converting multiple GIS data formats for editing; including converting CAD generated drawings.
Used Model Builder to string together multiple GIS tools and scripts for automation functions.
Experience conducting live online technical training classes using Adobe Connect accounts and linked conference calls.
Produced and maintained Esris internal technical trainers SharePoint 2010 site that is used by over a hundred people in the educational department.
Esri Desktop and Extensions Technical Support April 2008-April 2009
Esri Support Services-Headquarters, Redlands, CA
Client support of ArcMap applications and data structure, which included support of; Business Analyst extension, Model Builder, geodatabase management, Python script, mobile ArcPad, Spatial Analyst extension, Time Animation, Map layouts, and Imagery Server using screen sharing live client support services.
Reported software bugs and wrote software knowledge base articles for the Esri support website.
GIS Lab Director and GIS Instructor May 2006 September 2006
The Iracambi Rainforest Research and Conservation Center, Brazil
Implemented and raised funding for vegetation classification map projects in the Atlantic Rainforest of Brazil through field work, GPS handheld devices and ArcGIS mapping software.
All projects were submitted as a published thesis paper at Clark University in Worcester, MA.
Directed the GIS lab research center projects, organized data collection reports and data storage in multiple file geodatabases.
Researched rainforest protection publications for future environmental protection project proposals.
Provided GIS software training, mobile GPS usage training and cartography skills training to other volunteer researchers; www.iracambi.com.
Software Technical Support August 2005 May 2006
IDRISI Clark Labs, Worcester, MA
Installation and software support via email for IDRISI mapping software users.
Tested raster software tools, tutorials and data models including the Land Change Modeler based on a Neural Networks Model for the IDRISI Andes 15.0 Edition; www.clarklabs.org.
Peace Corps Volunteer September 2002 December 2004
The Gambia, West Africa
Trained local communities in sustainable agriculture and forestry management.
Trained local villages on how to set up a solar energy system.
Shared resources on improved irrigation techniques.
Collected data for the German Forestry Department database on bush fire activity.
Posted 2019-01-03 under Geospatial
|
OPCFW_CODE
|
Why is this signature independent of the message?
Assume that we have the following signature scheme CL Signature:
Choose a cyclic group $G = \langle g \rangle$ of order $q$.
Uniformly and randomly choose two elements $x,y \in \mathbb{Z}_q$, and compute $X = g^x$ and $Y = g^y$.
The secret key is $sk = (x,y)$, while the public key is $pk = (q, G, g, X, Y)$.
On input a message $m \in \mathbb{Z}_q$, secret key $sk$ and public key $pk$, choose a random $a \in G$ and output the signature: $$\sigma = (a, a^y, a^{x + xym}).$$
In the same paper, they ensure that $\sigma$ is NOT information-theoretically independent of the message $m$ being signed and propose an alternative that achieves this independent notion $$\sigma = (a, a^z, a^y, a^{zy}, a^{x + xy(m + zr)}),$$ where $z,r \in \mathbb{Z}_q$ are another uniformly random elements such that $Z = g^z$ is also part of the public key $pk$.
Several questions arises on that:
What exactly means to be information-theoretically independent?
Why it is not achieved by the first scheme?
What happens if we change $a^{x + xy(m + zr)}$ with $a^{x + xy(m + r)}$?
Intuitively, I think that being information-theoretically independent means that the signature reveals no information about the message $m$. Then, why the first one reveals something about the message $m$?
Note that in the first variant given a signature and knowing the public key, the message $m$ is uniquely determined. Information-theoretically dependent means what you assume it means. An unbounded adversary who can compute all discrete logs, can thus easily figure out the unique $m$ in the first scheme.
In the second variant, the message is information-theoretically hidden (even for unbounded adversaries). Note that $m+zr$ is basically „an encryption“ with a secret key $r$ (and $r$ does not appear anywhere else). So for $v=m+zr$ you can always find a suitable $r‘$ for any possible choice of $m‘$. This information-theoretically hides the message.
The problem with your modification is as follows. In the second variant a message is $(m,r)$. If you modify it to $a^{x+xy(m+r)}$ then you can open your signature to any message $m‘$ by computing $r‘=(m+r)-m‘$ and providing $(m‘,r‘)$ as the new message. This clearly will verify. Thus, your modification does not yield an unforgeable signature scheme. For the second variant of the CL signatures, however, the scheme can be shown to be unforgeable as $z$ basically commits you to the $r$ value.
Thanks for the answer. Anyways, do you have any intuition why the first signature is not good in practice? The mentioned problem is a realistic "problem" inside the post-quantum world.
I think one cannot say that it is "not good in practice" without having a concrete application in mind. Sure if you want to hide messages unconditionally when seeing only signatures without messages even if you have a powerful quantum computer, then the second scheme makes sense. But note that you could achieve that with any signature scheme by simply signing an unconditionally hiding commitment to the message instead of the message itself. Do you have a concrete application in mind?
The CL Signature is used for anonymous credentials. I think unconditionally hiding is needed here because you are identified through commitments. If you use $g^x$ as your "identification" for different organizations, then they will obviously know that they are interacting with the same user (since they know $g$).
Yes, sure in this case you want to have commitments to the values and so the second variant is the one to go for.
|
STACK_EXCHANGE
|
- Trending Categories
- Data Structure
- Operating System
- MS Excel
- C Programming
- Social Studies
- Fashion Studies
- Legal Studies
- Selected Reading
- UPSC IAS Exams Notes
- Developer's Best Practices
- Questions and Answers
- Effective Resume Writing
- HR Interview Questions
- Computer Glossary
- Who is Who
How to Increase Disk Inode Number in Linux
In Linux, an inode is a data structure that stores information about a file or directory. Each inode contains details such as the file's ownership, permissions, size, and location on the disk. Inodes are crucial to the functioning of the file system as they allow the operating system to locate and access files quickly. However, in some cases, the number of inodes on a disk may be limited, leading to potential performance issues. In this article, we'll look at how to increase the disk inode number in Linux.
To understand how to increase the disk inode number, it's essential to know how inodes work. Inodes are created when a file or directory is created on a disk. Each inode is assigned a unique number, which is used by the file system to access the file or directory. When a file is deleted, its corresponding inode is also deleted, freeing up space on the disk.
Checking the Inode Usage
Before increasing the disk inode number, it's important to check the current inode usage. To do this, you can use the df -i command, which will display the inode usage for each mounted file system. For example −
$ df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda1 5242880 327680 4915200 7% /
The output shows the inode usage for the / file system. In this case, there are 5242880 inodes available, of which 327680 are currently in use. The IUse% column shows the percentage of used inodes. If the usage is approaching 100%, it may be necessary to increase the disk inode number.
Increasing the Disk Inode Number
To increase the disk inode number, you'll need to create a new file system with a higher inode count and then move the existing files to the new file system. This process can be time-consuming and may require additional disk space, so it's important to plan accordingly.
Creating a New File System
To create a new file system with a higher inode count, you can use the mkfs command. For example, to create a new ext4 file system with 10 million inodes, you can run −
$ sudo mkfs -t ext4 -N 10000000 /dev/sdb1
In this example, /dev/sdb1 is the device on which the new file system will be created. The -t option specifies the file system type, and the -N option sets the number of inodes.
Mounting the New File System
Once the new file system has been created, you can mount it to a directory using the mount command. For example, to mount the new file system to the /mnt/data directory, you can run −
$ sudo mount /dev/sdb1 /mnt/data
Moving the Existing Files
With the new file system mounted, you can now move the existing files to the new location. One way to do this is to use the rsync command, which can copy files and directories while preserving ownership, permissions, and other attributes. For example, to copy all files and directories from the /data directory to the new location, you can run −
$ sudo rsync -a /data/ /mnt/data/
The -a option enables archive mode, which preserves all attributes of the files and directories being copied.
Updating the File System Table
Once the files have been copied to the new file system, you'll need to update the file system table to ensure that the new file system is mounted automatically at boot time. To do this, you can edit the /etc/fstab file and add an entry for the new file system. For example −
/dev/sdb1 /mnt/data ext4 defaults 0 2
In this example, the first column specifies the device, the second column specifies the mount point, the third column specifies the file system type, the fourth column specifies the mount options (in this case, the defaults), and the last two columns specify the dump and fsck order.
When increasing the disk inode number, there are several considerations that should be taken into account. First, it's important to ensure that the file system type supports the desired number of inodes. Some file systems, such as ext2 and ext3, have a fixed inode count that cannot be changed without reformatting the disk. In contrast, ext4 and xfs file systems allow for dynamic inode allocation, which makes it easier to increase the inode count without reformatting.
Second, increasing the inode count can have implications for disk space usage. Each inode requires some overhead, which can add up if a large number of inodes are created. Therefore, it's important to consider the number of files and directories that will be stored on the disk and to choose an appropriate inode count that balances the need for performance with the available disk space.
In addition to creating a new file system with a higher inode count, there are several alternative methods that can be used to increase the disk inode number in Linux. These include −
Resize the existing file system − If there is unused space on the disk, it may be possible to resize the existing file system to increase the inode count. This can be done using tools such as resize2fs or xfs_growfs. However, this method can be risky and may result in data loss or corruption if not done correctly.
Use a file system with a higher default inode count − Some file systems, such as xfs, have a higher default inode count than others, such as ext4. Using a file system with a higher default inode count can help avoid the need to increase the inode count manually.
Use a different file system type − Different file system types have different limits on the number of inodes they can support. For example, btrfs and ReiserFS have much higher inode limits than ext4. Using a different file system type may be a viable option if the current file system is limiting performance due to its inode count.
To ensure a successful inode increase process, it's important to follow best practices. These include −
Creating a backup − As mentioned earlier, creating a backup of the file system before making any changes is crucial. This can help prevent data loss or corruption in the event that something goes wrong during the inode increase process.
Testing the process in a non-production environment − Before attempting to increase the inode count on a production system, it's a good idea to test the process in a non-production environment. This can help identify any potential issues or challenges and allow you to refine the process before implementing it in a live environment.
Monitoring disk space usage − Keeping an eye on disk space usage before, during, and after the inode increase process can help prevent issues related to disk space constraints.
Verifying file system integrity − After increasing the inode count, it's a good idea to verify the integrity of the file system to ensure that no data has been lost or corrupted.
By following these best practices, you can help ensure a successful inode increase process and minimize the risk of data loss or corruption.
Increasing the disk inode number in Linux can help prevent performance issues and ensure that the file system can accommodate a large number of files and directories. However, it's important to carefully plan and execute the process, as it can be time-consuming and may require additional disk space. By following the steps outlined in this article, you can safely and effectively increase the disk inode number in Linux.
Kickstart Your Career
Get certified by completing the courseGet Started
|
OPCFW_CODE
|
As hinted at on Mastodon, I’ve gone ahead and releaesd the first preview version of Starchart Studio - a component story tool similar to Storybook designed for island architectures and powered by Astro.
Why yet another story tool? Good question!
When looking to integrate a tools like Storybook into a recent Astro project, I kept on running into hurdles or DX decisions I didn’t love: most seem designed with SPA architectures in mind, support for various frameworks felt limited (usually React or Vue first), and none supported Astro components, which were going to make up a large portion of my Astro project. I wanted something that would work with anything I was likely to throw at it and didn’t require tight coupling. So, as I’m likely to do, I made a thing.
Astro was a great starting place: it has built-in dynamic page and component rendering, a robust component model, support for lots of JS frameworks out-of-the-box, and is super flexible. Building a story tool on top of it was mostly defining a content model and DX patterns with very little novel code needing to be written. This left me with only really needing to “solve the hard stuff”, aka Starchart’s logic and DX. For that, I really only needed to figure out:
- How to generate both stand-alone previews and full-story pages for one story input (1 to n page generation with different layouts)
- How to list all components without asking the developer for duplicate work
- How to connect state to components in a dynamic way
The biggest “bit” of a story tool is having two or more views of the same story: the fully-featured story page and a stand-alone iframed page for the component to sit in isolation. I wanted to keep basic component stories to a single file, so I settled on Astro’s MDX integration as a base for stories. This allows developers to write Markdown documentation while also exporting an object with configuration, including a component that can be passed around and rendered. When globbed, they come with a bunch of useful properties, too, which can be used to build out robust systems if not used directly to generate pages.
One of properties is
file. When combined with
getStaticPaths and dynamic params in filenames, you can create multiple related pages from a single input. Because this function is run server-side from Node, you can use Node built-ins without worry, like
path. I grab the basename of the file, and generate two entries in
getStaticPaths, one for the isolated component, and one for the fully-featured story page. I can even have different properties that get passed to those components (from
Astro.props) from this, making it possible to further change exactly what’s available on the page. This includes signaling if the page should be inline and, if so, choosing a different layout. The main
Starchart.astro component basically exists to do just that:
chart, in the above code sample, is actually the list of all components, and their URLs, ready to be made into nav of some sort. Getting this from an individual page is kinda tricky;
getStaticPaths runs in isolation so it’s hard to capture the glob and reuse it. When you use Starchart’s
getStaticPaths function, you’re actually calling a method on a singleton class instance; that lets me save the results to a property and recall them during other function calls, which you make to set up the story. As a side note, a singleton class to hold state is a pattern I find myself using quite often when with JS modules.
With this basic setup and the built-in component model, I had prop-only components rendering! Inputs couldn’t be changed, but I considered that a fair tradeoff considering they’re designed to not be once loaded. But sometimes you have state, and that posed problems.
I knew developers would need to define functions to connect form updates to their state management of choice, but Astros’
define:vars for passing variables into script tags stringifies everything, so no functions allowed. While I maybe could work around this with an
exec, it was strike one. Next, the system works using dynamic tags to render the component, but dynamic tags can’t be hydrated. This unfortunately took me a while of spinning to realize; as it’s only mentioned under dynamic tags and not in the hydration docs, but I got there. That was strike two. One more and this whole thing collapses. But fortunately, Astro’s component model, while a hinderance here, actually saved me.
See, Astro is all about mixing islands of interactivity with static content. This means that you can make a static Astro component that has hydrated components in it, and that works fine. Because that component doesn’t need to be hydrated, it can then be dynamically loaded. And thus, a simple, if slightly inelegant, solution emerged: hydrated components need to be wrapped in a static component. We’ll pass you all the props you need to render your component, just swap the direct component call in your story for your wrapped one. This also provides a clear place to add Starchart-specific code for managing state and events without coupling it into your production component.
The last bit is pretty straight-forward: the isolated component lives in an iframe, so
postmessage to send data across that border, with a couple of custom events and some developer-facing helper functions to manage it all for them, so they just need to write how their state changes when it comes it.
The last bit I added was a width slider to the preview area, inspired by Brad Frost’s ish.. I absolutely loved ish. during my early responsive web design days, and with the rise of container queries, I feel like it’s time to shine is once again here, but this time for components. I’ve started with a basic implementation, changing width, but I think I’d like to add height, too, and the size buttons: small-ish, medium-ish, large-ish, xl-ish, and of course, the disco mode.
I also want to add the ability to include variants for a component. I haven’t quite figured out how to do that yet, but with my solution for building inline pages squared away, it’ll probably be a variation (get it, get it) on that.
Finally, this needs some visual polish. I’ll get to that.
And a website. Every good OSS project these days has a website.
With all of that, I’ve release version 0.2.1 (because no good initial release goes without an immediate point fix) of Starchart Studio. The README has how to use it and the repo has a working demo in the
src directory. If you try it out, LMK!
|
OPCFW_CODE
|
Let’s say you’re the original writer of a document, and you’re reviewing changes suggested by others. As you go through the document, you decide what you want to do with the comments and changes – accept or reject them, either individually, or all at once.
Remove tracked changes
The only way to get tracked changes out of a document is to accept or reject them. Choosing No Markup in the Display for Review box helps you see what the final document will look like, but it only hides tracked changes temporarily. The changes are not deleted, and they’ll show up again the next time anyone opens the document.
To delete the tracked changes permanently, accept or reject them. Click REVIEW > Next > Accept or Reject. Word accepts the change, or removes it, and then moves to the next change.
Let’s say you are the original writer of a document, and you are reviewing changes suggested by others.
This is what you see when you open the document.
You can click a line to show the changes, or you can go to the REVIEW tab, click the arrow next to Display for Review, and choose another option.
All Markup shows all changes, Simple Markup is the same as clicking a line to hide changes, No Markup shows you what the document would look like, with all the changes in place, and Original shows your original document with no changes.
As you go through the document, you decide what you want to do with the comments and changes.
When you make an edit, your markup appears in a different color.
Word automatically chooses a color for each reviewer.
Change anything you want, including the changes from the other person.
If the markup looks confusing, click the line to show Simple Markup.
When you are done editing the changes, you finalize the document by removing the comments and markup.
It is not enough to simply click No Markup, because that only hides it. The markup is still there.
To make it go away permanently, you need to go through the document, and Accept, or Reject changes.
Select All Markup, then, press Ctrl+Home to move the cursor to the very beginning of the document.
Now click Next, and the first change is selected.
Decide what you want to do with the change, and click either Accept or Reject, and the selection moves to the next change. Decide what you want to do with this change, and you move to the next one.
You can continue this way through the whole document, or click the arrow under to Accept or Reject, and choose an option.
You can accept or reject a change, and not move to the next one.
You can simply accept or reject all changes in the document, or accept or reject all changes and stop track changes.
This last option turns off Track Changes, accepts all changes, and removes the markup.
Now when you type in the document, your changes aren’t tracked.
The last thing to do is remove comments. You can right-click a comment to delete it.
Or go to the Comments group, click the arrow under Delete and Delete all comments in Document with one click.
So now, you have the basics of reviewing the comments and track changes.
But the process of reviewing can get pretty complicated, especially when you review input from many sources.
Up next, you’ll see how tracking changes online can make the job much easier with multiple people.
|
OPCFW_CODE
|
Difference Between Microsoft Office / Windows RTM, Final, Retail, OEM & Original MSD
You might still be confused about the many versions of Microsoft Office / Windows that have the name Beta, RC (Release Candidate), RTM, Retail, OEM. Here I will explain a little about this.
Microsoft usually in the process of making Windows has several stages including:
This version is the earliest version of Windows OS. This version is still in the development stage and not for the public.
The beta version is the same version of the development stage as the Alpha version, except that this Beta version already has a more advanced stage and can be released to the public. Several years ago (Around 2012) Microsoft released several Beta versions of Windows products for Windows 8 like.
- Developer Preview: This version is intended for developers (application developers / programmers). Usually this version is not recommended for the public even though it has been released on the official website.
- Consumer Preview: This version is level above the DP version (Developer Preview), but the difference with the DP is that the Consumer Preview version can be enjoyed by the public as an experimental material. But this version hasn’t entered the Final version or there are still many bugs in its features.
After that, Windows will enter the RC (Release Candidate) version. This version is actually the same as the Beta version but the level is higher. This Release Candidate version can be said to be just one or two more steps before the Final version.
RTM / Final / Original MSDN
After the development is complete, the final version is released in the form of RTM or Release To Manufacturing. The RTM version will be given directly to electronic companies that work with Microsoft, for example HP, ASUS, Lenovo, etc. The RTM version is also given to MSDN (Microsoft Developer Network) users. Basically this RTM version has reached the Final version, only this RTM version will be divided into several parts depending on the license agreement including:
- Retail: The Retail version is actually the same as RTM with the same Windows Build. Retail is intended for consumers directly in the form of DVD Box. The Retail version is still divided into 2 versions of the license agreement.
- Full License or commonly called Full Packaged Product (FPP): This version is the most expensive Retail version. Users can use it without any usage restrictions. Users can also upgrade, reinstall or transfer licenses to another PC as long as they are not installed on 2 PCs simultaneously. This version is usually intended for consumers / home users.
- Upgrade License: This version is the same as the FPP version, but with a cheaper price. Usually Microsoft offers this version for consumers who already have an original product key either FPP or OEM version in the previous OS, one example is an upgrade from Windows 7 to Windows 8.1.
OEM or Original Equipment Manufacturer is a Windows license agreement that is usually pre-installed or has been installed automatically into a PC / Laptop. The difference between this version and the Retail FPP version is that the OEM version cannot be installed on other devices and has been “tied” directly to the motherboard of the device (usually injected into the BIOS product key) so when reinstalling, usually the OEM version does not require serial number again. The next difference between Retail and OEM is that the OEM version is usually integrated directly with the device driver. With this OEM version we don’t have to bother looking for suitable drivers. The convenience of the OEM version is to have recovery that is compatible with devices that have an OEM version of Windows installed. So it’s like the user is “fixing” his own device because the company has provided it by default.
The Volume Licensing version is arguably the wholesale version of Windows. This version is only intended for the small to top companies. This version is usually activated with MAK (Multiple Activation Key).
The conclusion is that the RTM version is arguably the “Golden Master” version of the Windows OS. Because with this RTM version, later it can be added various kinds of updates such as Service Pack, Tweak to make the OEM version and so on. Essentially all versions of Retail, OEM, VL are based on this RTM version.
|
OPCFW_CODE
|
As an electronic design automation (EDA) company, Agnisys provides many benefits for chip design and verification engineers. Our specification automation solution generates both the register transfer level (RTL) design and key elements for verification with simulation and formal tools. As the diagram below shows, our solution also provides value to other teams. In this post, I’d like to focus specifically on embedded programmers, since we are seeing increasing interest in our capabilities in this domain.
By embedded programming, I mean just about any type of software that interacts directly with the hardware. This includes boot code, firmware, device drivers, and what is sometimes called hardware-dependent software (HdS). In a system-on-chip (SoC) device, these programs generally run on the embedded processors within the chip, but some parts may run on a host system as well. The key common element is that the software controls and communicates with the hardware by reading and writing a set of control and status registers (CSRs). Because we support register automation, we can facilitate embedded code development by generating the C/C++ header files for the registers in the design.
Our IDesignSpec NextGen (IDS-NG) solution reads register definitions in a variety of popular formats and supports entry and modification of register details in an intuitive graphical editor. As shown in the diagram above, IDS-NG generates design code, verification code, and documentation in addition to the C/C++ code. The header files enable embedded programmers to get access to names, locations, and various properties of your registers and bit fields. Within IDS-NG, users can select from several options to suit their requirements:
The header file contains structures and unions used to create an exact representation of the register model
The header file contains macros and #define preprocessor commands but is devoid of any structures/unions for registers
The header file conforms to MISRA C, a standard created by the Motor Industry Software Reliability Association
The header file supports reuse at different levels/hierarchies of the register map with a base address argument for each block that is used multiple times
Having a ready-to-use C/C++ register description is obviously a great help, but programmers often would like additional support in writing the routines that access the registers. IDS-NG also generates C/C++ tests that check many aspects of the registers and memories within a design block. Reading and writing registers is just the start. The tests also check for proper operation of special register types such as alias, lock, shadow, FIFO, buffer, interrupt, counter, paged, virtual, external, and read/write pairs. Other generated tests include read/write of random values, zeroes and ones, and walking ones for both registers and memory.
These tests are constructed using an application programming interface (API) that is also available to users for creating their own custom test sequences. IDS-NG enables users to describe the programming and test sequences of a device from a single specification and automatically generate sequences ready to use from the early stages of design and verification all the way to post-silicon validation. These sequences can be described naturally and succinctly in the same graphical editor used to define the registers and memories. Sequences can be simple, or complex involving conditional expressions, array of registers, loops, etc.
As noted earlier, hardware-dependent code can run on the processors within the chip or in a host system (for example, device drivers). In simulation, the same test sequences may be run from a testbench conforming to the Universal Verification Methodology (UVM). Further, these sequences are often adapted for use in automatic test equipment (ATE) during chip production. IDS-NG encompasses all these options, generating:
The embedded processor C/C++ code, which may run in simulation but is essential for emulation, post-silicon validation in the bring-up lab, and chip operation in the field
The SystemVerilog code necessary to run the sequences in the UVM testbench and to coordinate the processor-based tests with the UVM environment
Various output files suitable for use by ATE engineers
In addition to supporting user-defined blocks with custom registers and sequences, we also provide our Standard Library of IP Generators (SLIP-G) for popular standard designs such as DMA, GPIO, I2C, I2S, PIC, SPI, AES, PWM, UART, and timers. Users have many options to customize these blocks, and IDS-NG tailors the registers and tests appropriately for their choices. We also make a standard configuration API available for each generated IP block so that programmers can easily supplement our tests with their own if they wish to do so.
To summarize, we do a lot to help embedded programmers. IDS-NG automatically generates C/C++ header files, tests, APIs, and sequences. Every time that you make a change in your specification, you can re-generate all these output files at the push of a button. This keeps all project teams in sync, reduces debug time, shortens your schedule, and saves resources. IDS-NG does the routine work of ensuring correctness with high quality, repeatable results, while your programmers focus on the algorithms and other differentiated parts of their code. To learn more, I invite you to watch our recorded webinar
|
OPCFW_CODE
|
SELECTs in Database Change Log and Source Control
We are overhauling the way that we store our database in source control and keep a change log of it. I was reading the following article: http://thedailywtf.com/articles/Database-Changes-Done-Right, and in the Short Section of "The Taxonomy of Database Scripts" it describes the three types of scripts (QUERY, OBJECT, and CHANGE). I like the idea of generalizing scripts into these three categories but I'm wondering about the QUERY type. Questions:
Why would someone want to put a SELECT statement into source control
outside of an object?
The database will change afterwards and make the QUERY script
unusable, what then?
The data may change returning a different result set, this would
defeat purpose of source control, what then?
Would the original result set have to be saved to solve the 2nd and
3rd issues?
What is an example of a SELECT statement that might be put into
source control?
Wouldn't a INSERT statement work better and store the results in a
table as in baselines?
I just can't see the purpose of storing SELECT statements into source control.
If there is a purpose could someone please answer the above questions and maybe state the pros and cons of storing a SELECT statement into source control?
History. Log it, and you will use it. Do not log it, and it will be lost forever.
@Max Vernon, then the result sets of the selects should be saved as well? otherwise I don't see the point of keep history of selects that don't work.
Ostensibly, the select statements used to work on some prior version of the database. Perhaps you have some kind of crazily complex T-SQL that you want to find and use again someday, that has been recently overwritten with a new version. Version control lets you go back to old versions to see how it has changed over time. Can be very useful.
Thanks @Max Vernon
I'll probably implement, in case we ever need to.
Well, in the article you reference, he specifically states "The first category of scripts fall out of the realm of database changes." That said, I've worked with plenty of legacy codebases that had select statements in source control; they were just in the application code. (So, subject to all the problems you mention and hidden from the DBA.)
Before answering your question about selects specifically, It seems important to mention that the whole point of having a source control system is to be able to trace back where things went wrong. That's why people might choose one over, say, an archive of daily backups. In your second bullet point, you mention that your select statement might become invalid, but if you have a version control system you would be able to check out a previous version where the select statement still matched the schema. Once you figured out what it was intended to do, you could check in a corrected version of that statement at a later version.
I guess I've now answered your first through third points. It's unlikely (though entirely possible) that a select statement sitting in it's own sql file would be used by some other piece of software, but consider that a select statement sitting inside a stored procedure is equally able to become out-of-synch with schema. Expanding on point 3, it's precisely because we recognize that this is a problem that we have source control, it does not defeat the purpose of SC, dealing with that problem is the purpose.
I can't really think of a case where storing the results would be helpful, but if you were worried you'd be unable to interpret it in the future, perhaps a description in a comment would be useful?
If you have select statements that are consumed by application code, powershell scripts, etl, reports, or possibly even by the ops team for routine tasks, these all seem like good candidates for being considered a part of your codebase. Remember, the purpose of the source control is not as a backup strategy or as logging, it is to be able to bring up an internally consistent version of your code at a particular point in time. That is why the answer to your last point is no. this is about the code and the schema, not so much about your data. The article that prompted this does a good job of explaining why it's challenging to put anything but purely reference data in your source control system.
|
STACK_EXCHANGE
|
The local machine is a win7 64-bit machine. First, I have installed the AjaxControlKit per the instructions and the video provided in this thread. Source Error: Line 1: <%@ Page Title="" Language="C#" MasterPageFile="Admin.master" AutoEventWireup="true" CodeFile="AddResult.aspx.cs" Inherits="Admin_AddResult" %> Line 2: Line 3: <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %> Line 4: Line 5:
ThanksRegards, Sheo Narayan http://www.dotnetfunda.comSandhya, if this helps please login to Mark As Answer. | Alert Moderator Posted by: Sandhya on: 9/24/2010 [Member] Starter | Points: 25 0 You are correct. As a guest, you can read any forum posting. i used AjaxControlToolkit dll inside Bin Folder i am getting this error on page wherever i used Ajax controls ********************************error page below**************************************** Server Error in '/' Application. Pretty obvious now you think of it, In the IIS manager you do exactly the same, I thought that the hosting website would do it automatically. http://stackoverflow.com/questions/6844443/could-not-load-file-or-assembly-ajaxcontroltoolkit-or-one-of-its-dependencies
Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\atlmfc\src\mfc\'... add that as a reference or copy-paste it. Reply SonicMan Participant 1021 Points 223 Posts Re: AjaxControl toolit - toolscriptmanager cant find source Feb 27, 2012 06:15 AM|SonicMan|LINK Hi Is that the VS install by you? <%@ Register Assembly="ajaxcontroltoolkit" Namespace="ajaxcontroltoolkit" Tagprefix="cc1" %> makes no sense to me at all. >> when T is the server mapping on my local computer.
Member 28 Points 112 Posts Re: Parser Error Message: Could not load file or assembly 'AjaxControlToolkit.dll' or one of its... Could Not Load File Or Assembly Ajaxcontroltoolkit Access Is Denied How do I calculate passive perception for a monster? The system cannot find the file specified. https://forums.asp.net/t/1605781.aspx?Parser+Error+Message+Could+not+load+file+or+assembly+AjaxControlToolkit+dll+or+one+of+its+dependencies+The+system+cannot+find+the+file+specified+ I'm still quite new to IIS, so there may be a simple solution.
Logos, company names used here if any are only for reference purposes and they may be respective owner's right or trademarks. | 11/5/2016 11:11:37 PM No new comments. Could Not Load File Or Assembly Ajaxcontroltoolkit Sharepoint 2013 Browse other questions tagged asp.net ajax sharepoint or ask your own question. Then reference the dll file from that folder of the correct version, then you won't get such AjaxControlToolkit problem. Server Error in '/' Application.
Congratulations to the winners of November month, they have won INR 7125/- cash prizes! I tried to exclude those files and it build property and I am able to run the application as well. <%@ Register Assembly="ajaxcontroltoolkit" Namespace="ajaxcontroltoolkit" Tagprefix="asp" %> Have you tried the suggestions? Could Not Load File Or Assembly Ajaxcontroltoolkit There Is Not Enough Space On The Disk The system cannot find the file specified-1Could not load file or assembly 'AjaxControlToolkit'1Could not load file or assembly 'Intelligencia.UrlRewriter' or one of its dependencies.
Or by stephen? Your security policy may be different between your dev and production. Looking in directory 'D:\'... The system cannot find the file specified Related 7Sharepoint “Could not load file or assembly” “The system cannot find the file specified”12Could not load file or assembly 'AjaxControlToolkit' after upgrade to How To Add Ajaxcontroltoolkit.dll In Visual Studio 2013
Error Code: <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxControlToolkit"%> Working Code: <%@ Register Assembly="AjaxControlToolkit, Version=3.5.60501.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e" Namespace="AjaxControlToolkit" TagPrefix="ajaxControlToolkit"%> share|improve this answer edited May 28 '12 at 19:04 Darshana 1,62251736 answered May 28 I have installed Ajax control toolkit on my local computer and in the folder application is configured the folder bin with the AjaxControlToolkit.dll. current community chat Stack Overflow Meta Stack Overflow your communities Sign up or log in to customize your list. Parser Error Message: Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies.
asked 2 years ago viewed 1829 times active 2 years ago Visit Chat Linked 15 ERROR Could not load file or assembly 'AjaxControlToolkit' or one of its dependencies 5 Could not Unknown Server Tag 'asp:calendarextender' Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\atlmfc\include\'... Registration is fast, simple and absolutely free .
If still there is Error check the Register Assembly line on newly added page/MasterPage and copy that Register Assembly line to other pages wherever your error is. Please mark the replies as answers if they help or unmark if not. Can the file not be found in the application's Bin folder? Ajaxcontroltoolkit.dll For Visual Studio 2013 Source Error: Line 1: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Line 2: Debug="true" %> Line 3: <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %> Line 4:
Server Error in '/AJX' Application. I resolved this issue. Sep 23, 2010 12:05 PM|sowju|LINK Hi... Look at this article http://www.dotnetfunda.com/articles/article454-using-pagemethods-and-json-in-aspnet-ajax-.aspx where I have explained how to use PageMethod.
Greetings SpaceLama Reply SpaceLama Member 61 Points 141 Posts Re: AjaxControl toolit - toolscriptmanager cant find source Feb 12, 2012 11:59 AM|SpaceLama|LINK < asp:Content ID="MainContent" ContentPlaceHolderID="MainContent" runat ="Server"> < asp:ToolkitScriptManager ID="ToolkitScriptManager1" Third, I am not Stephen, don't know a Stephen who uses the AjaxControlToolkit, and have no user named Stephen on my computer. If you have any feedback about my replies, please contact [email protected] Microsoft One Code Framework Reply cagrel None 0 Points 3 Posts Re: AjaxControl toolit - toolscriptmanager cant find source Feb What are the odds we are all named Stephen?
in my index.aspx page i am using ajax auto complete extender in my page. Please review the following specific parse error details and modify your source file appropriately. Winners Winners & Prizes Ads Social YouTube/DNFVideo Facebook/DotNetFunda Twitter/DotNetFunda LinkedIn/In/DotNetFunda Plus.Google.Com Like us on Facebook Top Forums Authors Sat, 05-Nov-2016 Authors All Time Authors 525034553290 Latest members | More ... (Statistics what was I going to say again?
I'm getting the below error and not sure how to proceed. Does トイレ refer to the British "toilet" or the American "toilet"? It works locally, why not on a hosting Website: Solution: Go to your hosting control panel, in my case to the application tab, select the folder of your website and click Check that the new site has "IUSR" present and permissions of "Read","Execute",List".
© Copyright 2017 ngogeeks.com. All rights reserved.
|
OPCFW_CODE
|
You can set the workload automation options for your policy, so that VMware Aria Operations can optimize the workload in your environment as per your definition.
How the Workload Automation Workspace Works
You click the lock icon to unlock and configure the workload automation options specific for your policy. When you click the lock icon to lock the option, your policy inherits the parent policy settings.
Where You Set the Policy Workload Automation
- Select Configure from the left menu, then select Policies.
- Click the Policy Definition card.
- Select a policy that you want to modify. Ideally, this should be an active policy. Or, click the ADD button to add a new policy.
- Select the Workload Automation card to review the changes, or click EDIT POLICY to make changes.
|Workload Optimization|| Select a goal for workload optimization.
Select Balance when workload performance is your first goal. This approach proactively moves workloads so that the resource utilization is balanced, leading to maximum headroom for all resources.
Select Moderate when you want to minimize the workload contention.
Select Consolidate to proactively minimize the number of clusters used by workloads. You might be able to repurpose resources that are freed up. This approach is good for cost optimization, while making sure that performance goals are met. This approach might reduce licensing and power costs.
|Cluster Headroom|| Headroom establishes a required capacity buffer, for example, 20 percent. It provides you with an extra level of control and ensures that you have extra space for growth inside the cluster when required. Defining a large headroom setting limits the systems opportunities for optimization.
Note: vSphere HA overhead is already included in useable capacity and this setting does not impact the HA overhead.
|Change Datastore||Click the lock icon to select one of the following options:
|Target Network Policy Setting for WLP||Click the lock icon to select the following option:
When you select this checkbox, the Workload Placement algorithm in VMware Aria Operations will automatically choose compatible target network, while making the decision to move the VM for the optimization. For choosing compatible network WLP engine will consider the segment path and logical switch UUID of the Distributed Port Group.
Workload Optimization Across networks is supported when the optimization candidate clusters are assigned with different port groups (configured with NSX). These port groups configured via NSX have same segmentID and Logical Switch UUID. To enable this ability, check the respective setting in VMware Aria Operations Workload Automation policy settings.
Note: Segment ID and logical switch UUID properties are published on the VC port groups by NSX. So Worload Placement cannot provide a target network if it is not a NSX configuration and those properties are missing.
This setting is not selected by default.
|
OPCFW_CODE
|
The contemporary stress state in the upper crust is of great interest for geotechnical applications and basic research likewise. However, our knowledge of the crustal stress field from the data perspective is limited. For Germany basically two datasets are available: Orientations of the maximum horizontal stress (SHmax) and the stress regime as part of the World Stress Map (WSM) database as well as a complementary compilation of stress magnitude data of Germany and adjacent regions. However, these datasets only provide pointwise, incomplete and heterogeneous information of the 3D stress tensor.
Here, we present a geomechanical-numerical model that provides a continuous description of the contemporary 3D crustal stress state on a regional scale for Germany. The model covers an area of about 1000 x 1250 km2 and extends to a depth of 100 km containing seven units, with specific material properties (density and elastic rock properties) and laterally varying thicknesses: A sedimentary unit, four different units of the upper crust, the lower crust and the lithospheric mantle.
The model is calibrated by the two datasets to achieve a best-fit regarding the SHmax orientations and the minimum horizontal stress mThe model is calibrated by the two datasets to achieve a best-fit regarding the SHmax orientations and the minimum horizontal stress magnitudes (Shmin). The modelled orientations of SHmax are almost entirely within the uncertainties of the WSM data used and the Shmin magnitudes fit to various datasets well.
Only the SHmax magnitudes show locally significant deviations, primarily indicating too large values in the upper part of the model.
The model is open for further refinements regarding model geometry, e.g., additional layers with laterally varying material properties, and incorporation of future stress measurements. In addition, it can provide the initial stress state for local geomechanical models with a higher resolution.
Heidbach, O., Rajabi, M., Cui, X., Fuchs, K., Müller, B., Reinecker, J., Reiter, K., Tingay, M., Wenzel, F., Xie, F., Ziegler, M. O., Zoback, M.-L., and Zoback, M.: The World Stress Map database release 2016: Crustal stress pattern across scales, Tectonophysics, 744, 484–498, https://doi.org/10.1016/j.tecto.2018.07.007, 2018.
Morawietz, S., Heidbach, O., Reiter, K., Ziegler, M., Rajabi, M., Zimmermann, G., Müller, B., and Tingay, M.: An open-access stress magnitude database for Germany and adjacent regions, Geothermal Energy, 8, https://doi.org/10.1186/s40517-020-00178-5, 2020.
Ahlers, S., Henk, A., Hergert, T., Reiter, K., Müller, B., Röckel, L., Heidbach, O., Morawietz, S., Scheck-Wenderoth, M., and Anikiev, D.: 3D crustal stress state of Germany according to a data-calibrated geomechanical model, Solid Earth, 12, 1777–1799, https://doi.org/10.5194/se-12-1777-2021, 2021.
|
OPCFW_CODE
|
Files with / in their name....
gkshenaut at ucdavis.edu
Thu Jul 23 16:35:44 MDT 2009
On Jul 23, 2009, at 1:49 PM, Simon Powell wrote:
> Yeah - this is from a Mac OS X server to a Linux box. It just sees
> the / and then stops as it expects a directory and sees a file.
If the files are on MacOS, ':' is the classic path delimiter, and in
OS/X this is translated for what they call "BSD" programs into '/'.
But that means that the underlying system can have '/' in filenames,
and they get translated into ':' in the 'BSD" perspective. In other
words, ":Users:foo:21/08/08.ppt" is translated into "/Users/foo/
21:08:08.ppt" and vice versa. Maybe this is related to your problem in
> On 23 Jul 2009, at 21:46, Paul Slootman wrote:
>> On Thu 23 Jul 2009, monkeymoped wrote:
>>> Hi there - I am trying to do a site to site nightly rsync between
>>> two boxes
>>> using ssh - all of the setup steps work and everything can talk
>>> nicely etc.
>>> However, the users unfortunately have lots of files with / in the
>>> (eg 21/08/08.ppt etc.) - when I try and run rsync on the content
>>> of these
>>> folders rsync understandably moans about the filenames being
>>> invalid (ie it
>>> thinks they are folder seperators rather than filenames) and
>>> stops. Is there
>>> a way I can get rsync to 'blindly' synch these files and ignore
>>> the / in the
>>> filename? I would just say to the clients to rename said files but
>>> there are
>>> a whold bunch of them, ie way too many to do this for. Any help much
>> Wierd, last time I used a microsoft system, internally a forward
>> slash /
>> was equivalent to a backslash \ , i.e. in C you could do:
>> fd = open("C:/path/to/file", O_RDONLY);
>> and it would would just fine, accessing C:\path\to\file ...
>> Anyway, you might be helped by using transliterate.diff from the
> Please use reply-all for most replies to avoid omitting the mailing
> To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
More information about the rsync
|
OPCFW_CODE
|
Wix Database IntegrationDatabase integration with Wix
The effectiveness of database building and administration is critical to the overall effectiveness of a website. Wix Code makes it simpler than ever to build your own database without any prior knowledge. Which is the database theme defined? This is the name given to the procedure of building a well-organized database datamodel.
It provides logic styling choices and the memory settings necessary to create the theme in one of the languages. There will be detail attribute values for all entity objects in the perfect datamodel. What is the best way to create a database? The creation of a database will require it to follow the basic rules of the norms. Bracstorming is the first stage in database development.
This latter procedure involves specifying which table and field are used. It is necessary to use a modelling utility and a group and divide the information into different boxes. This is how to build a database in Wix. With Wix, the tried and tested website creation software, you can build a database for your website.
With Wix Code, the whole proces is simplified and even beginners can build their own database. With this new solution, even non-developers can build high-performance Web sites with large database sizes. No matter if you have to build an on-line shop, a large knowledge-based website or another on-line site, this makes the job easy and without prior experience.
Every database contains sandbox and live version of datas. If you create the database boxes for your database, you will encounter 3 different items per box. Name of the field: Every cell is displayed by its 'cell name'. The Wix function allows you to modify the name of the filed later. Feldschlüssel: It is the identifier used to reference a particular coded area.
The value is unambiguous for a particular area and cannot be modified. Feldtyp: That is what determines the kind of information that can be added to a box. The Wix Content Manager provides different fields such as reference, text, number, image, rich text, date and time, boolean, URL and documents. Every change to a particular box can affect your website.
After you have finished creating the various arrays, you should map the main array to your database. Every database compilation must have this box. Although the caption box is the standard primitive box, you can specify any other box as it is. Ideally, the information in this area should be unambiguous, but not strictly necessary.
Dynamically generated pages extract the information you need from your database to present it on your website. Easily customize the look and feel of your pages to meet the needs of your website. If you have web pages that are dynamically created, your database creates automatic boxes that forward information to the page that is dynamically created. Each of these boxes has a billed relative identifier.
You cannot edit the values in a recalculated area. Changing the dynamically page address changes the address in the datafield. It is also possible to define users' authorizations to determine who can gain permission to use your database sets. The Wix Code allows you to define authorizations for each database group.
You can also use this utility to help you with importing and exporting your work. Simple handling of input and output dates is available. Files are in CSV and can be read and written to any computer. Following the step-by-step instructions in this tutorial, you can finish the database building with Wix Code. It' been used by million of the world' s consumers to build breathtaking and high-performance Web sites.
By introducing Wix Code, the Wix Code database management system has further improved the ease of database development. You can now build huge on-line shops and information pages by following easy footsteps.
|
OPCFW_CODE
|
Pro tip: How to use semi-relative time ranges in Grafana
If you’re even the slightest bit familiar with how Grafana dashboards work, you’ve probably realized that the time range selector is one of the most important features. After all, when you’re using Grafana to visualize time series and logs, defining a time range is required for metrics and logs queries. You can select absolute time ranges (from 2021-12-02 00:00:00 to 2021-12-05 23:59:59) or relative time ranges (from 2 days ago until now), and changing a time range will automatically refresh all the panel queries with the new time range. (Check out our time range controls page for more specifics.)
But did you know that in addition to absolute and relative time ranges, you can use semi-relative time ranges? What I mean is you can set the start time to an absolute timestamp, and the end time to a “now” that is relative to the current time. This is helpful because it allows you to watch in real time what happens from a fixed point until now.
As time progresses, the plot will automatically and progressively zoom out to show more history and fewer details (since the interval between data points gets bigger). At the same rate, the importance of high data resolution will decrease, while the relevance of viewing history trends over the entire time period will increase.
Semi-relative time range dashboards are useful when you need to monitor the progress of something over time, but you also want to see the entire history from the beginning point in time.
I discovered two use-cases for semi-relative time ranges while using Grafana. The first one is tracking an issue that occurs infrequently. The second one is real-time monitoring of metrics that are updated during business hours. Let me show you how they work.
Troubleshooting a randomly-occurring problem
Last year, after my pfSense router at home kept randomly rebooting, I decided to troubleshoot. The problem happened infrequently and was hard to catch, so I built a dashboard specifically to monitor this issue. I gave it a semi-relative time range (fixed start time, relative end time) to visualize the frequency of reboots over time.
Start time: 2021-06-01 00:00:00
End time: now
The graph panel below shows (in green) the count of log lines per 12 hour interval for the
kernel syslog application. I was more interested in the annotations (vertical dotted red lines) marking every appearance of the string “Bootup complete” in the logs. I used this dashboard to quantify and analyze the unplanned reboots pattern from May onward.
My pfSense logs graph panel
I noticed that the occurrences were really random and getting worse over time. This pattern indicated that the problem was likely hardware-related.
After changing the hardware, I wanted to track whether the unplanned reboots continued or stopped. I didn’t know how long it would take to confirm the problem was resolved, since the occurrences were random.
With a semi-relative time range, I was able to keep the time range prior to the hardware change in view. That made it easier to see how the situation improved or degraded over my troubleshooting period, and then I could validate whether I had solved the issue.
Below is the same graph, a little more than a month later. Notice how the start of the graph remains fixed, so that we can see the entire history of the problem at all times. As time advances, the graph compresses to the left. I replaced the hardware on August 15. After waiting a few weeks, the graph clearly indicates my fix resolved the problem.
The graph panel after one month
This visualization was the right tool to use to troubleshoot the issue because I was interested in observing an ever-increasing time range to analyze and figure out the pattern of occurrences. The data points were far apart, and the whole process took several months to detect and a few weeks to resolve. If I had used a relative time range (for example, the last seven days), it would have been difficult to see how the pattern changed progressively over the course of weeks and months.
Monitoring during specific work hours
Today, we live and work in the web browser. We open hundreds of tabs each day, and we often reopen the same set of tabs many times for projects and daily work. I organize my Firefox tabs in persistent sessions using Tree Style Tabs and Tab Session Manager add-ons. I wanted to have some stats about my saved tabs and sessions, so I wrote a script that uses Prometheus and Grafana to display counts of Firefox tabs on graphs.
The dashboard I created to visualize my tabs open over time has another type of semi-relative time range. It defaults to the current business day so far, because it’s the time range I’m most often interested in viewing when I consult this dashboard.
Start time: now/d+8h
End time: now
This is equivalent to the “Today so far” time range preset, but it starts at 8:00am instead of 12:00am by appending
+8h to the periodic start time. I don’t start opening Firefox tabs until ~8:30-9:00am, so there is no point in starting the time range at 12:00am.
My Firefox tabs dashboard
Let’s say you have an important business process metric you want to track every day for the current day so far, but the data starts coming in during normal business hours starting at 8am. Using the period selector
now/d would waste lots of screen space with non-existent data from midnight to 8am. Setting the start time to
now/d+8 instead will optimize the display by limiting the view to the business hours you are interested in.
I wanted to share my pro tips for advanced users to highlight the flexibility of Grafana as a visualization tool. Now I wonder if there are more instances when semi-relative time ranges could come in handy. If you have any ideas, reach out on our public Slack workspace, or email me at email@example.com and let me know!
Grafana Cloud is the easiest way to get started with metrics, logs, traces, and dashboards. We have a generous forever-free tier and plans for every use case. Sign up for free now!
|
OPCFW_CODE
|
identify equilibrium region after large transient
I am looking at data from a mechanical system under an unsteady load. I'm trying to find the simplest way to identify the portion of the signal once the system reaches its new equilibrium. Here's a simple example.
Any suggestions for a simple and robust approach would be most welcome!
@StanleyPawlukiewicz The system is at rest around 75K and then actuation is applied. The actuation is high frequency, relative to the natural frequency of the system, so there is a long transient. The transient should decay exponentially, so there is no 'exact' time of equilibrium, and I'm looking for a test to indicate when its small enough to ignore.
It is difficult to think about robustness without knowing anything about the noise characteristics.
For example, it might be enough to obtain a number of these waveforms (20-50), obtain a box plot of the amplitude at each time instance and use that information to derive a very simple threshold below which the signal enters a region that can be safely characterised as equilibrium.
If there is enough noise to cause the signal to "jump" above the threshold, then you might want to try introducing some form of hysteresis in the Thresholding process. In other words, you might consider "equilibrium" the point when the waveform has dropped below a certain threshold and has stayed there for at least a $u$ amount of time.
If the noise characteristics are such that it is impossible to "tune" such a threshold then a simple 3-5 tap one-dimensional median filter or a very simple low pass filter might be enough, to smooth out the waveform sufficiently and enable you to apply a threshold more reliably.
If the process is such that the waveform is "floating" and a fixed threshold is impossible, then you would be looking at the point when the rate of decay is sufficiently flat. Since you already know that you have an exponentially decaying waveform, you should use this to your advantage. In this case, you can still apply some sort of smoothing pre-processing (e.g. filtering, if required) and then fit the data to an exponentially decaying model. You can then obtain the first order finite difference and set a threshold on that variable for when it drops sufficiently close to zero denoting that the curve is flattening out.
But in your case, you seem to have some sort of undershoot as well (some sort of inertia effect (?)). In extreme cases, the exponential decay model might not be a good assumption in terms of fitting the exact waveform. In that case, it might be better to do a DFT on the settling segment and try to quantify the settling time by known results from automatic control system theory.
Hope this helps.
Thanks for the nice answer. I'll take a look at these ideas.
@weymouth Thanks for letting me know. I am glad to hear that you found this helpful. You can upvote or accept this answer from the buttons on the left of the answer box. Accepting the answer will also stop it from circulating the board as unanswered.
|
STACK_EXCHANGE
|
来自澳洲代写的顾客授权发布的Business Statistics,ETB1100作业要求片段,我们不会发布ETB1100的answer在网站,我们曾经写过ETB1100及相关的Business Statistics写过很多作业,考试,如果你也需要代写这个课程的作业请联系客服WX:QQ 5757940 ,代写人的代写服务覆盖全球华人留学生,可以为AU的学生提供非常准时精湛的服务,小作业assignment代写、essay代写享适时优惠,project、paper代写、论文代写支持分期付款,网课、exam代考预约时刻爆单中赶紧来撩。
您的任务是复制并修改两个ML示例。1.信用/风险评分2.使用k-means聚类的客户细分首先,您需要从Knime下载一本电子书,名为:Practicing Data Science。Knime的Stefan Helfrich非常乐意提供一个代码,这样您就可以免费下载任何电子书:在课堂上,我们将查看模型,因此您还需要下载工作流或在Knime Explorer中的示例中找到它们...
Your assignment will be to copy and then modify two ML examples.
1. Credit/Risk Scoring
2. Customer Segmentation using k-means clustering
To start, you will need to download an e-book from Knime called: Practicing Data Science. Knime's Stefan Helfrich was so kind to provide a code so you can download any e-book for free:
In class, we are going to go over the models so you will also need to download the workflows or find them in the Examples in our Knime Explorer. You will find the locations to download the workflows in the e-book.
We might go through most of this in class first (that is the copy part of the assignment). For homework, you are going to modify each one. 1. For the Credit/Risk model, I want you to swap ML technique from Random Forest to decision tree. You will submit a simple table that shows your AUC and accuracy percent for the two ways that you did the modeling. For example, it should look like this:model type AUC (area under curve in ROC graph) Accuracy (from Scorer Node)
Decision Tree .801 .795
Random Forest .822 .792
You should also write a short statement of which model you would recommend and why you would recommend it. For example, you might say that "Bank should use the Random Forest model to predict risk/score because the area under the curve was higher." Or you could say, "Telco should use the Decision Tree model because the accuracy was higher." Your next sentence should say why you chose either AUC or accuracy to make your decision.
You should also submit your Knime export as a knwf file also with your one-page write-up.
2. For the customer segmentation example, you will use two different values for k (10, 5) which represents the number of different customer segments. After you run both, you will submit a simple table that shows the denormalized means for your cluster_0 values for day_mins and eve_mins. For example, it might look like this:
cluster 0 day mins eve mins
k=5 128 105
k=10 89 101
You should also submit your Knime export as a knwf file K Means Clustering
Here is a video explaining kMeans Clustering:
If you are a student from an English-speaking country, please feel free to contact us at [email protected] and we will provide you with an excellent writing service.
作为现存十年的代写服务机构,我们没有任何学术丑闻,我们保护顾客隐私、多元化辅导、写作、越来越多的小伙伴选择代写人为他们解决棘手的各类作业难题,保障GPA,为留学梦助力! 我们的客服团队及写手老师总是能第一时间响应顾客的各类作业需求,有些人即使有重要的事甚至带伤上场协助考试。Final季,忙的时候一天十几场考试还在继续坚持着,我知道,他们明明可以不用这么辛苦的…但是他们为了坚守承诺,为了另一端屏幕外的那一份期望,他们没有选择退缩、时刻为同学们提供最好的!这么有温度的代写还不添加备用一下?WX/QQ: 5757940
|
OPCFW_CODE
|
Transferring Files Within the Unraid Server
If you are using Windows Explorer to move files between drives, you are actually copying the files TWICE across the network, from the Unraid server to your Windows machine, and back again. For copying a few files, this is not a problem. But if you are moving a lot of data, here are faster methods.
Midnight Commander - Easy to Use GUI Tool
Use Midnight Commander and PuTTY instead. Type mc at the command prompt in a telnet/PuTTy session to start the GUI. Midnight Commander is built into Unraid v4.3 and up. For earlier versions, and a link to PuTTY (an alternative to Telnet that allows use of a mouse within mc), see this thread). Midnight Commander is a Linux console tool, and needs to be run from either the physical console on your Unraid server, or from a Telnet console on your desktop station. For more information, see the Telnet page, which includes information on PuTTY.
Move Files Overnight
If you go to the Unraid server and run Midnight Commander from there, you can use it to move a bunch of files overnight. But if you use mc from a Telnet prompt from your Windows (or other) workstation, you will have to leave the computer on and the Telnet session open until the disk operations are complete. If the Telnet session ends, so does the copy or move operation.
But with a little knowledge of Unix commands, you can easily start moving files around your Unraid server and then shut down Telnet and your workstation. The key is the "nohup" command (nohup means "no [don't] hang up"). If you put "nohup" before any command and an ampersand (&) afterwards, the command will run in the background until it is complete. Your command prompt will return immediately.
So, for example, if you wanted to move a bunch of movies from Disk1 to Disk2, you could use this command from a Telnet (PuTTY or otherwise) prompt ...
nohup mv /mnt/disk1/Movies/* /mnt/disk2/Movies &
Do a quick check to see that files are starting to appear in the destination folder to make sure you didn't have a typo in the command, and then exit from the Telnet session. The files will continue to be moved as fast as Unraid can move them, and use ZERO network bandwidth. Make sure it is complete before shutting down your Unraid server, as copying hundreds of gigs can take a long time to complete even at the fastest speed.
nohup can also be used with the "cp" (copy) command (see Unix Commands section below)
nohup creates a log file called 'nohup.out' with the command output. The basic "mv" command doesn't create any output, but "cp" outputs the name of each file it copies. If you use "cp" to copy a LOT of small files (300,000+), you risk having nohup.out get quite large - large enough to fill up your Unraid server ramdisk - not a good thing.
There are two effective methods available to move files from one drive to another from within Unraid (v4.x and later).
1) Copy the files from disk# (where '#' is the number of the disk in Unraid)
cp -r /mnt/disk# /mnt/disk#
cp -r /mnt/disk4 /mnt/disk8
Copies all contents of disk4 to disk8. All files/directories on disk4 remain.
Note the above example will create a dir named 'disk4' on disk8 with the contents underneath it. The original file date/time stamps will not be preserved.
See below for syntax to copy the root directory names only with all files underneath them and preserve the original file date/time stamps.
The -r option causes the cp command to copy directories recursively. It is not necessary with a simple file copy.
If you want to follow along as the copy proceeds, add the -v option (requesting verbose output).
To copy the root directory names only and everything under them, preserve the original file date/time stamps and log the output to a text file on the flash drive in a format readable by an editor like windows notepad use this syntax:
cp -r -v -p /mnt/disk4/* /mnt/disk8 | todos > /boot/disk1copy.txt
2) Move the contents of disk1 to disk2 using the mv command
mv /mnt/disk#/ /mnt/disk#
mv /mnt/disk1 /mnt/disk4
Moves all contents from disk1 to disk4. All files/directories on disk1 are now gone.
Caution: Using the move command may be potentially dangerous as it will copy to the destination drive and then delete your data file(s) from the source drive. In the interest of maximum safety, you may want to use copy instead.
If you want to copy or move entire folders from one drive to another, and the folder names have spaces in them, you need to use "quotes" around the folder name, as in this example:
mv /mnt/disk2/"The Empire Strikes Back" /mnt/disk3
In the above example, the entire folder called The Empire Strikes Back would be moved from Disk 2 to Disk 3 with the same sub-folder structure.
Wildcards are available as well. For example, if you want to copy all of the files from Disk 2 over to Disk 3, use the mv command like this:
mv /mnt/disk2/* /mnt/disk3
In this example, all files and folders on Disk 2 would be relocated over to Disk 3 in the exact same folder structure as it was on Disk 2.
|
OPCFW_CODE
|
Post by Ryan McCue
I just posted a blog post regarding WP_Error, and why it (quite frankly)
I'd be interested to hear your thoughts on WP_Error and the possibility
of using exceptions as well. I'd also love to hear from anyone who has
implemented exceptions in their own plugin code.
(Bonus points if you're a core developer, especially if you're westi ;) ).
WordPress, for better and sometimes for worse, has its own way to doing
things. WP_Error has served us well. Refactoring areas of the codebase to
use and handle exceptions just doesn't make a lot of sense right now. Not
when we strive for backwards compatibility, and not when we put users
first. As you said in your article, it wouldn't be a fun or particularly
safe conversion. It is a good rule of thumb to actually benefit from such
overhauls. I'd rather invest my time in a lot of other architectural
You also mention the idea of putting try/catch inside of the plugin API —
that would be terribly slow, and defeats the purpose of forcing exceptions
on developers. Congratulations, a plugin developer didn't code for an
exception because they knew core would catch it and issue a wp_die()
message, like that error would ever happen anyway. No thanks.
We use WP_Error for probably three distinct things:
1. As return codes. Often, WP_Error is used when the error is not
"exceptional." It enables you to pass multiple errors, and also extra data.
Using WP_Error as a return value is nice in the authentication API, for
example. It is also very helpful as a value passed to filters, to then
either be modified or checked against by plugins. The action/filter
paradigm is almost unique to WordPress, and is what defines WordPress
development in many ways. WP_Error assists with that, in a far cleaner way
than an exception would. Even with exceptions, we would still want WP_Error
as verbose return codes, because don't get me started on bitwise operators.
2. As legitimate errors where exceptions could be used, but shouldn't be.
This kind of factors into return codes. I personally do not find a failed
HTTP request to be "exceptional" that, if uncaught, should result in a
fatal error. At the very least, it certainly depends on what you're trying
to do — an API request, or maybe just something in the background.
Sometimes, you actually just don't care about the return value of
something. But hey, that's just me.
3. As legitimate errors where exceptions can and should be used, but won't
be. Exceptions can be useful in particular instances in WordPress. For
example, during plugin/theme installation and upgrade, for core updates.
We're doing a ton with the filesystem here: downloading archives, verifying
and unzipping them, creating folders, moving files, setting permissions,
etc. Some code here is ripe for exceptions — otherwise, we're forced to
check, over and over again, that our most recent result isn't a WP_Error.
Wrapping all of this in a try/catch would be nice. (I think relying on
exceptions to bubble is lazy; we'd be handling these all within that
However, two major problems with this. One,
wp-admin/includes/update-core.php is copied over during a core update and
executed by the currently installed version. Can't throw a WP_Exception
when WP_Exception doesn't exist yet. Since you can update across multiple
major versions at once, you can never really do this well without coding
around it. So, the one place where I could see a benefit to exceptions,
couldn't truly benefit from them without needless refactoring.
The second problem is that PHP doesn't have finally blocks. This is
absurdly stupid. We'd be using try/catch when dealing with filesystem
operations, in particular upgrades. That means we need to roll back, clean
up, release locks, whatever. If WP_Error sucks because it requires slightly
more code and forces programmers to defensively consider errors, rather
than letting everything bubble up, then it sucks. But if try/catch is
supposed to make the code cleaner, then the lack of finally really blows
that argument out of the water.
I'm not saying there isn't a single use in WordPress for a WP_Exception. I
know for a fact there are some, but I also don't think there are enough
examples where an exception would be oh-just-so-much-better than error
objects to justify a paradigm change. As it is, WordPress is getting
complicated (almost too complicated) in certain areas, and it'll only serve
to hurt theme developers, designers, and many weaker plugin authors the
more "by the book" we become.
And of course, plugin authors can already use exceptions if they wanted to.
Write your own HTTP API wrapper that throws exceptions — come on, you're a
programmer, do what programmers do: make a personal, convoluted abstraction
|
OPCFW_CODE
|
This is a repost of Evolution of “Hello World!”, originally written by user Helka Homba
It should not be closed as a duplicated, due to meta consensus here.
The original was asked over two years ago and was last active more than six months ago. I have permission from Helka Homba to post this here
Since the original, many languages have been invented, and many people have joined the site who have never had an opportunity to answer the original, so I feel that this repost is acceptable.
The challenge is to make a program that prints
2^n to stdout, where
n is the number of your program. The catch is that your program must have a Levenshtein distance of 10 or less from the program in the answer submitted before yours.
How This Will Work
Below I will submit the first answer using C#, which prints 2^(n=1)=
The next person to answer must modify the code with up to 10 single character insertions, deletions, or substitutions so that when it is run in the language of the new answer, it prints
n being the answer number). For example, the 25th answer (let's say it's in Pyth) would print 2^25, or 33554432.
This will continue on until everyone get stuck because there is no new language the last answer's program can be made to run in by only changing 10 characters. The communal goal is to see how long we can keep this up, so try not to make any obscure or unwarranted character edits (this is not a requirement however).
Please format your post like this:
#Answer N - [language] [code] [notes, explanation, observations, whatever]
Where N is the answer number (increases incrementally, N = 1, 2, 3,...).
You do not have to tell which exact characters were changed. Just make sure the Levenshtein distance is from 0 to 10.
If you answer in some language or the resulting code is just a mess, do please explain what you did and why it works, though this isn't required.
The key thing to understand about this challenge is that only one person can answer at a time and each answer depends on the one before it.
There should never be two answers with the same N. If two people happen to simultaneously answer for some N, the one who answered later (even if it's a few seconds difference) should graciously delete their answer.
- A user may not submit two answers in a row. (e.g. since I submitted answer 1 I can't do answer 2, but I could do 3.)
- Try to avoid posting too many answers in a short time frame.
- Each answer must be in a different programming language.
- You may use different major versions of a language, like Python 2/3
- Languages count as distinct if they are traditionally called by two different names. (There may be some ambiguities here but don't let that ruin the contest.)
- You do not have to stick to ASCII, you can use any characters you want. Levenshtein distance will be measured in unicode characters.
- The output should only be
2^nand no other characters. (Leading/trailing whitespace is fine, as is unsuppressible output like
- If your language doesn't have stdout use whatever is commonly used for quickly outputting text (e.g.
- When the power of two you have to output gets very large, you may assume infinite memory, but not an infinite integer size. Please be wary of integer overflows.
- You may make use of scientific notation or whatever your languages most natural way of representing numbers is. (Except for unary, DO NOT output in unary)
Please make sure your answer is valid. We don't want to realize there's a break in the chain five answers up. Invalid answers should be fixed quickly or deleted before there are additional answers.
Don't edit answers unless absolutely necessary.
Once things settle down, the user who submits the most (valid) answers wins. Ties go to the user with the most cumulative up-votes.
Edit these when you post an answer:
Erik the Outgolfer
Languages used so far:
- C# (Pavel)
- /// (boboquack)
- Retina (Dennis)
- Jelly (Jonathon Allan)
- Pyth (boboquack)
- ><> (Destructible Watermelon)
- Minkolang (Kritixi Lithos)
- Perl (Pavel)
- Python (Qwerp-Derp)
- dc (R. Kap)
- Charcoal (Jonathon Allan)
- Self Modifying BrainFuck (Leo)
- SOGL (dzaima)
- ShapeScript (Jonathon Allan)
- Pyke (boboquack)
- Ruby (Nathaniel)
- 05AB1E (ovs)
- STATA (bmarks)
- bc (Kritixi Lithos)
- Japt (Okx)
- 2sable (Kritixi Lithos)
- Cheddar (Jonathon Allan)
- Pylons (Okx)
- Bash (zeppelin)
- Pushy (Okx)
- CJam (Erik the Outgolfer)
- MATL (Okx)
- MATLAB (Tom Carpenter)
- Octave (Kritixi Lithos)
- R (ovs)
- Convex (Okx)
- Mathematica (ghosts_in_the_code)
- Pip (Okx)
- Stacked (Conor O'Brien)
- GolfScript (Okx)
- Actually (Lynn)
- RProgN (Okx)
- Scheme (bmarks)
- Element (Okx)
- J (Blocks)
- Cubix (ETHproductions)
- zsh (zeppelin)
- VBA (Taylor Raine)
- Fish (zeppelin)
- Reticular (Okx)
- Perl 6 (Pavel)
- RProgN2 (ATaco)
- PHP (Matheus Avellar)
- Jolf (Conor O'Brien)
- Haskell (nimi)
- Befunge-98 (Mistah Figgins)
- Gnuplot (zeppelin)
- QBIC (steenbergh)
- FOG (Riker)
- Qwerty-RPN (Okx)
- Korn Shell (ksh) (zeppelin)
- Julia (Riker)
- Python 3 (Pavel)
- Vimscript (Riker)
- Dash (zeppelin)
- Vitsy (Okx)
- csh (zeppelin)
- Ohm (Okx)
- Bosh (zeppelin)
- es-shell (Riker)
- Gol><> (PidgeyUsedGust)
This question works best when you sort by oldest.
|
OPCFW_CODE
|
The installer will not let me partition a 1.1TB array (8x160GB disks, RAID5)
behind a 3w-7850 controller.
I have tried with both errata images; they solve other problems and both allow
me to get further into the installation. Now I get no indication of any errors
in the kernel log. Instead I get the following in a dialog box:
No such file or directory during read on /tmp/sda
The kernel logs on VT4 look normal except for this:
<4>SCSI device sda: -2053770239 512-byte sectors (47981MB)
Obviously it's overflowing. I'm used to this, because it also happens on a
700MB array I have running Red Hat 7.1, but it didn't cause any problems with
the installation there. I no longer have 7.1 around to install on the new machine.
Choosing "ignore" from the dialog box causes the above message to again appear
in the kernel log. Then the dialog box reappears; after several tries, I get
the "no devices found for installation" box.
I have verified (from the shell on VT2) that /tmp/sda exists as a block device
with major 8 and minor 0 as I'd expect.
Some further information:
On VT2, running fdisk /tmp/sda results in:
Unable to read /tmp/sda
I verified again that /tmp/sda does indeed exist, and to be doubly sure I did
"mknod /tmp/blah b 8 0" and received the same error message when trying to fdisk
What happens if you create /tmp/sda and try to run parted on it ?
I didn't realize that parted was available in the initial install image.
It runs and gives a prompt. There is a note about the geometry being
139508/255/63 and cylinder 1024 ending at 8032.499M.
However, it doesn't want to do much of anything. If, for instance, I do
"mklabel msdos" I get:
read failed: No such device
read failed: No such device
Error: No such file or directory during read on /tmp/sda
Retry Ignore Cancel?
I hope this helps.
Arjan -- do you know of anything specific to the boot kernel that might be
causing problems here?
Some more information for you:
I brought up a RAID5 array with only three 160GB disks, to try and determine
whether the problem relates to the size of the array. The installation works
fine on the resulting 320GB /dev/sda.
3ware has a driver out that's even newer than what is in current kernels
(1.02.00.019, I think). Unfortunately I have no idea how to make the new driver
available to the installer.
Sounds like the kernel driver is having problems with something that large
Several linux drivers (and it seems the 3ware too) have a 1 Tb limit ;(
The limit seems to be in the kernel on the boot floppies. I know that newer
kernels have no trouble with an array this large.
In any case, Skipjack-beta1 installs fine on this machine, so you can resolve
the issue. But this is supposed to be a production machine, and running the
beta makes me unconfortable. We'll see how it survives stress testing.
I suggest that this be resolved, since this particular problem is fixed in the
betas (and I'm sure you want to keep the unresolved bug count down).
Is that "resolved current" or "resolved rawhide"?
|
OPCFW_CODE
|
Australia "In general the final report is sweet, But m... "All round the ultimate report is sweet, But my system teacher was silent disappointed that there was no case scientific studies spelled out in the final report.
This statistical program is beneficial for students at equally degrees: whether or not they are exploring on a specific matter or They can be educating statistics. That's why, one should be effectively conscious of This system and therefore clarify doubts pertaining to it for greater being familiar with.
electrical engineering assignment help civil engineering assignment help computer software engineering assignment help info technological know-how assignment help mechanical engineering assignment help Pc science Mechanical matlab solidworks thermodynamics CAD assignment help AUTOCAD assignment help Basic Subjects social science biology chemistry math physics english geography
We all know that your faculty has rigid rules towards plagiarism. StatisticsHomeworkHelper.com won't tolerate any method of plagiarism. Your articles are going to be handed through a software package that checks for virtually any traces of copy-pasting ahead of it is distributed to the inbox.
Pupils with out having a good concept of this try and use software which can be recognised to them and they're far more comfy with. Consequently, they struggle to stay with old approaches.
We have a different wing of Stata assignment editors and proofreaders. All of these have accomplished Qualified programs on editing and proofreading. They've various years of practical experience. Thus, if you regularly wonder, “Can any one edit my Stata assignment”, then it is best to Get hold of us.
Availing our STATA on you could try these out the internet help can save you from each one of these pressure. Just deliver us your duties straight away immediately after it has been assigned to you. Our authorities are assignment crafting gurus and may take care of your tasks in the very best way. You will have ample time and energy to do other pursuits considering the fact that your assignments are in Secure arms. In addition to just doing all your assignment for you personally and submitting it punctually, we also ensure you best grades. You should take advantage of this brilliant company and insure your tutorial grades.
There are two several approaches one particular usually takes to Stata. One particular is to utilize it being an interactive device: persons start out Stata, load the information, and start typing or clicking on instructions. It is additionally definitely tricky to recuperate from problems that there's no “reverse” command in Stata.
We have employed a group of really skilled gurus to help students with their assignments on Stata. Our tutors are incredibly skilled and also have helped several learners internationally with their assignments.
Yow will discover our location as part of your country through Speak to us web site to get STATA assignment help, STATA homework help. Also you can submit your STATA assignment directly to our Web-site and relaxation We are going to do for you.
The pool of statistical details obtainable provides college students an notion concerning the topic on which They can be to analysis, and what get the job done continues to be one particular ahead of that investigation.
Our sampling assignments additional reading help platform help college students fix all assignments and homework connected to studies. We offer the very best figures help to battling learners and those who don’t have time to try and do their homework.
If you are not articles with any Portion of the Stata assignment help material furnished by us, you could request revision. Our Stata assignment experts are always willing to support your needs. We provide endless revision facility.
Reference checklist is undoubtedly an indispensible Element of an assignment. This record should be nicely-formatted. If you have perplexed when it comes to preparing the reference record, then you need to Speak to our Stata assignment industry experts.
|
OPCFW_CODE
|
package jdepend.model.component.modelconf;
import java.util.ArrayList;
import java.util.List;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
public class JavaPackageComponentModelConfRepo {
public Element save(Document document, JavaPackageComponentModelConf componentModelConf) {
Element nelement = document.createElement("componentModel");// 组件模型节点
nelement.setAttribute("name", componentModelConf.getName());// 添加name属性
nelement.setAttribute("type", ComponentModelConf.ComponentModelType_Package);// 添加type属性
// 添加组件信息
for (ComponentConf componentConf : componentModelConf.getComponentConfs()) {
Element selement = document.createElement("component");// 组件节点
selement.setAttribute("name", componentConf.getName());
selement.setAttribute("layer", String.valueOf(componentConf.getLayer()));
for (String packageName : componentConf.getItemIds()) {
Element eelement = document.createElement("package");
eelement.setTextContent(packageName);
selement.appendChild(eelement);
}
nelement.appendChild(selement);
}
// 添加未包含的packages
List<String> ignorePackages = componentModelConf.getIgnoreItems();
if (ignorePackages != null && ignorePackages.size() > 0) {
Element ielements = document.createElement("ignorePackages");
for (String ignorePackage : ignorePackages) {
Element ielement = document.createElement("package");
ielement.setTextContent(ignorePackage);
ielements.appendChild(ielement);
}
nelement.appendChild(ielements);
}
return nelement;
}
public JavaPackageComponentModelConf load(Node componentModel) throws ComponentConfException {
String componentModelName = componentModel.getAttributes().getNamedItem("name").getNodeValue();
JavaPackageComponentModelConf componentModelConf = new JavaPackageComponentModelConf(componentModelName);
for (Node node = componentModel.getFirstChild(); node != null; node = node.getNextSibling()) {
if (node.getNodeType() == Node.ELEMENT_NODE) {
if (node.getNodeName().equals("component")) {
String componentName = node.getAttributes().getNamedItem("name").getNodeValue();
int layer = Integer.parseInt(node.getAttributes().getNamedItem("layer").getNodeValue());
List<String> packages = new ArrayList<String>();
for (int k = 0; k < node.getChildNodes().getLength(); k++) {
Node Package = node.getChildNodes().item(k);
if (Package.getNodeType() == Node.ELEMENT_NODE) {
packages.add(Package.getTextContent());
}
}
componentModelConf.addComponentConf(componentName, layer, packages);
} else if (node.getNodeName().equals("ignorePackages")) {
List<String> ignorePackages = new ArrayList<String>();
for (int k = 0; k < node.getChildNodes().getLength(); k++) {
Node Package = node.getChildNodes().item(k);
if (Package.getNodeType() == Node.ELEMENT_NODE) {
ignorePackages.add(Package.getTextContent());
}
}
componentModelConf.setIgnoreItems(ignorePackages);
}
}
}
return componentModelConf;
}
}
|
STACK_EDU
|
New 2009 mac pro nvidia geforce 8800gt 512mb video card. Find great deals on ebay for nvidia geforce 8800gt and nvidia geforce 8800 gt mac. Temperatures of nvidia 8800gt waccelero s1 rev 2 cooler vs. To identify a graphics card part number, check the label on the back of the card. Ive got rebooted my mac using power button and instead of normal startup ive got 5. I havent noticed any major issues with redraw or video playback dont really play games on mac, but have just noticed gradients, either using gradient or paintbrush in photoshop cs3, looking pixelated or choppy. Most if not all nvidia pc cards will work in a mac pro wo the need for an efi rom once the nvidia web drivers are loaded. Early 2008 apple mac pro nvidia geforce 8800gt 512mb video. Upgrade your mac pro with the graphics power you need to push the boundaries of visual computing. On there, someone posted that if you have the supported cards listed, you can download graphics card drivers straight from nvidia. The gt 630 pcie x16 card will work within the specs of.
Find great deals on ebay for nvidia geforce 8800 gt and nvidia geforce 8800 gt mac. Nvidia 8800 gt 1st generation graphics upgrade kit for. The nvidia geforce 8800 gt card requires mac os x 10. New 20082009 apple mac pro nvidia geforce 8800gt 512mb video graphics card. Nvidia geforce 8800gt 512 mb for mac pro 2008 ebay. Nvidia geforce 8800 gt mac edition specs techpowerup gpu. Ive been informed that new machines use efi64, the old mac pros use efi32, to work in an efi32 machine you need an efi32 rom. About graphicscard compatibility between mac pro models.
Pump up the graphics of your mac pro with the apple nvidia geforce 8800 gt macintosh computers from apple are known for their power, and they typically work well right out of the box for many professional graphics applications. Apple mac pro nvidia 8800 gt 512mb pcie video card gt 120 8800 2600 20062012. The camera i used in this video is sony dscs750 i used imovie 09 for editing thanks and have a. Early 2008 apple mac pro nvidia geforce 8800gt 512mb video graphics card. Nvidia geforce 8800 gt graphics upgrade kit graphics adapter gf 8800 gt pci express 2. Make offer apple mac pro nvidia geforce 8800 gt 512mb pcie video card 8800gt 2600 120. The apple store now lists a geforce 8800 gt graphics upgrade kit for firstgeneration mac pros. Nvidia 8800 gt card comes to firstgen mac pros macworld. Supported gpus nvidia pcie 590 gtx nvidia pcie 580 gtx nvidia pcie 570 gtx nvidia pcie 560 gtx nvidia pcie 550 gtx nvidia pcie 520 gt nvidia pcie 480 gtx nvidia pcie 470 gtx nvidia. Mac mini mac pro macbook air macbook pro macos catalina. Nvidia geforce 8800 gt driver for mac download geforce 8 9 also flashing of the video card bios voided the warranties of most video card manufacturers if not all thus making it a lessthanoptimum way. Buy early 2008 apple macpro mac pro nvidia geforce 8800gt 512mb video graphics card cp187. Apple introduced the geforce 8800 gt as a graphics option for the mac pro when. Well, it looks like my geforce 8800gt for mac is bricked.
As it currently stands, this card is does not yet have apple boot camp drivers i even tried the 2008 mac pro 64 bit drivers, and the official nvidia. Nvidia geforce 8800 gt upgrade for macpro apple community. Great savings free delivery collection on many items nvidia geforce 8800 gt mac for sale ebay. The awardwinning nvidia geforce 8800 gt features 112 highpowered processing cores and supports dual 30inch apple cinema hd displays. Evga nvidia geforce 8800gt 512mb svideo, dvii graphics video card p3n802ar. Graphics cards free delivery possible on eligible purchases. Is getting the nvidia geforce 8800 gt for mac a better choice. Frankly, there are some brands of 8800gt out there that work with your mac. Apple mac pro nvidia geforce 8800 gt 512mb graphics video card 2008 2012.
The geforce 8800 gt is not compatible with the older mac pro august 2006, april 2007 as we assumed. Buy nvidia geforce 8800 gt mac and get the best deals at the lowest prices on ebay. Upon the release on the penrynbased mac pros in january, apple introduced the nvidia geforce 8800gt as a new video card option. Make offer apple mac pro nvidia 8800 gt 512mb pcie video card gt 120 8800 2600 20062012. Hi, have a 2008 mac pro, with nvidia 8800gt card installed by apple and i have an eizo s2110w lcd screen. The wait is finally over the geforce 8800 gt is now available for the 1st generation mac pro. But i would be cautious of that as my computer was a 2008 model and not compatible.
Click on the apple icon in the upper left corner of the screen and select software update. In the past, the mac versions of ati cards had 128k flash chips, so youd have to find a pc card that had a 128k flash, instead of a 64k flash. Mac nvidia 8800gt with accelero s1 rev 2 passive cooler. All the flashable gpus for mac pro update jan 2018.
Apple mac pro nvidia geforce 8800 gt 512mb pcie video card 8800gt 2600 120. Mac pro graphics card upgradenvida geforce 8800gt youtube. Nvidia geforce 8800 gt upgrade for macpro more less. I was weary at first, but i downloaded them anyway. Nvidia takes apple mac pro users to new level of stunning. With 112 processing cores and two dvi ports capable of driving two 30inch displays at full resolution 2560 x 1600, the only thing to worry about is desk space. Downloadable graphics drivers from nvidia for os x. Apple nvidia quadro 8800gt 512mb video graphic card mac. This allows the card and your computer to run cooler. For the people having trouble installing the official nvidia geforce 8800 gt drivers in windows on their original mac pros, theres a way to manually install the drivers from within windows. After a while, however, you may need an upgrade to your graphics. Nvidia geforce 8800gt 512 mb graphic card apple mac pro 3,1 4,1 5. Mac pro owners still waiting for nvidia 8800gt option.
Upgrading to nvidia geforce 8800 gt from geforce 7300 gt mac pro stock gpu. Nvidia geforce 8800 gt 1st generation graphics upgrade kit for mac pro. Compatible with any intelbased mac pro, the geforce 8800 gt includes nvidia s video processing technology which offloads h. The geforce 8800 gt mac edition was a graphics card by nvidia, launched in february 2008. Built on the 65 nm process, and based on the g92 graphics processor, in its g92270a2 variant, the card supports directx 11. Early 2008 apple macpro mac pro nvidia geforce 8800gt. Existing mac pro owners initially saw this is an opportunity to upgrade from current cards as well as ailing 1900xt cards.1077 799 1090 1454 362 225 1471 1081 1206 692 542 627 785 1444 506 167 676 576 813 1379 965 263 129 162 280 590 345 969 554 211 1276 713
|
OPCFW_CODE
|
function initialize(){
var man1hover=$("#man1");
var man2hover=$("#man2");
var flag1=1;
var flag2=1;
man1=new Array(7)
man2=new Array(7)
man1hover.mouseover(function(){
if(flag1) {
for (var i = 1; i <= 7; ++i) {
man1[i]=new Image()
man1[i].src="images/lee"+i+".png"
}delay=200
nextImage=1
startAnimation1()
};
});
man2hover.mouseover(function(){
if(flag2) {
for (var i = 1; i <= 7; ++i) {
man2[i]=new Image()
man2[i].src="images/chun"+i+".png"
}delay=200
nextImage=1
startAnimation2()
};
});
man1hover.mouseout(function(){
flag1=0;
});
man2hover.mouseout(function(){
flag2=0;
});
}
function startAnimation1(){
interval=setInterval('animation1()',delay)
}
function startAnimation2(){
interval=setInterval('animation2()',delay)
}
function animation1(){
i=nextImage
++nextImage
nextImage%=7
document.display.src=man1[i].src
}
function animation2(){
i=nextImage
++nextImage
nextImage%=7
document.display.src=man2[i].src
}
|
STACK_EDU
|
First degree in Computer Science at the Technion.
Ph.D. in Computer Science (direct path) at the Technion.
Post-doc at the faculty of mathematics and Computer Science at Weizmann institute.
Post-doc research in the Computer Security and Industrial Cryptography (COSIC) at the Department of Electrical Engineering (ESAT) of the Katholieke Universiteit Leuven, Belgium.
Post-doc research in the Cryptography Research Group in the Departement d’Informatique of Ecole Normale Superieure, Paris, France.
Orr won the Krill Prize (along with yet another graduate of the excellence program – Nathan Keller) in 2014.
Participated in the Technion Excellence Program: 1997 – 2000.
Orr was accepted into the program at the age of 17. He started taking advance courses in the second semester of his studies. This led to joining a research group at the end of the second semester of his studies, along with publishing a research paper at the summer of 1998.
Following his research, he continued doing M.Sc. (and then Ph.D. in the direct path) with Prof. Biham, which was the lecturer at the advanced course (and the head of the research group).
Among his research projects are: security analysis of block ciphers (like Serpent), new cryptanalytic techniques (rectangle attack, combined attacks), and analysis of stream ciphers (like the GSM stream cipher A5/1).
“During the last class of the advanced course, Prof. Biham entered the class and told us that the US government has just declassified a cipher which was classified until that moment. We started working on the cipher, using the techniques which we explored during the course.
We then published a short report on the findings, and after further research, we have published a research paper in the “Selected Areas in Cryptography 1998″ workshop.
The project that we were asked to submit at the end of the advance course was analysis of one of the candidates to the AES competition (the US national institute of standards and technology was at the time at the beginning of a selection process for the next generation encryption algorithm). My project was then presented in the AES workshop held in Rome in 1999.
After that, I’ve started working along with Prof. Biham on a cipher which is used in the GSM system – A5/1. After a very long research, our results have been published in the Indocrypt 2000 conference (and by then I’ve been Prof. Biham’s masters student).”
Recommendations to the students enrolled in the program and to future candidates:
For students – take a deep breath, and look for stuff that interests you.
You have the possibilities to pursue topics that interest you without too much formal requirements (no perquisites) and relatively ease of mind from the financial side.
For prospective students – take a deep breath. The competition is fierce, but you can even learn a lot during the process itself.
Today (2018): Orr is an associate prof. at the computer science dept. at the University of Haifa.
|
OPCFW_CODE
|
Tonight I went to a truly bizarre talk (actually, it was a series of four talks). The evening was advertised as a seminar presenting an irrefutable mathematical proof for the existence of God. Frustratingly, questions were held until the very end, so by the time the last fellow spoke one was so busy feeling blustery about what he said that one had forgotten many of the issues of the first talks. Also, unfortunately for the speakers the organizer of the event had, for some unfathomable reason, decided it would be a good idea to send a mass invitation to the math department. This then spilt over into some of the related departments (like physics), to the point where over three quarters of the audience were largely atheistic in mindset and highly versed in mathematics.
Roughly, the set of talks went like this:
Talk 1: "Discover Your Purpose in Life"
The fellow started by stating that there were two ways of reasoning, you either methodically looked at a sequence of data or you took all the information at once and made a comparison (at no point did he address how one knew whether one had all the relevant information or not). His talk was actually quite difficult to follow, for he seemed to jump from unqualified statement to unqualified statement. He made a number of tired and old arguments, including the "fine-tuning argument" and a number of other arguments from incredulity (mangling the concepts of probability and logic on his way). He then ended with one of the most bizarre theological renditions I have ever heard, including making the statement that the Earth was small in relation to the universe because the Earth was the kingdom God gave Satan to prove that Satan couldn't even run it properly... essentially, as far as I could follow, relegating the Earth to the status that I had understood hell to hold in the Abrahamic faiths. This was an inconsistent position, however, for he insinuated that God held sway over the happenings on Earth at multiple other points in his talk (as well as the other speakers), while also making the claim that no one held sway because everyone on Earth had the 'gift' of free will.
Talk 2: "The Proof"
A long, nonsensical power-point presentation of numerology finding coincidental recurrences of the number 19 in the Qur'an. Patterned coincidences in text have been well and thoroughly refuted numerous times (a good resource is here). Also, numerical coincidences in no way makes a mathematical proof.
Talk 3: "Why Bad Things Happen"
This talk was surprisingly the best of the bunch, although only because the fellow who gave it was an accomplished speaker who never really made much of a point (although at one point he did make the claim that your free will gave you control over whether you were on God's side, at which point your life would be good, or Satan's side, at which point your life would be bad. I wanted to ask about things like hurricanes and other natural disasters, which make life miserable for believer and non-believer alike and over which we have no control, but I never got that chance). He also made a couple statements which sounded very much like Yoda's philosophy (things along the line of "Don't give in to anger and hate"), so that kind of endeared him to me.
Talk 4: "Here's Craig"
For some reason, no title was given for this talk, and the speaker was only introduced as "Craig", hence the title given. This was a pretty wasted talk, as the speaker was clearly speaking to the wrong audience. He was attempting to reconcile the Bible with the Qur'an, meaning he basically quoted a lot of both of them without really saying much himself. At the end of his talk he made a very bizarre statement that completely contradicted the "free will is everything" sentiment espoused by two of the previous speakers by intimating that everything that happened was according to God's plan, including things like medical and scientific breakthroughs. He then left that hanging there as a confusing and highly arguable statement, and apparently disappeared (he failed to return to the podium for the question and answer period).
The Question and Answer Period: "Over an hour of brutal and highly charged argument"
I honestly felt a little bad for the speakers, because I don't think they were prepared for the response they got. Professor Charles Dyer got the first word in, and thoroughly blasted the numerology "proof" as such a twisted and overly round-about method of revelation that it was just as likely to be a trick of the devil as the work of any all-powerful god. As an opening salvo, while incendiary, it was not particularly devastating. There was a lot of blustering and, "Oh, but you haven't gone through the rest of the proof, this was only the rough beginning of it...", at which point Dyer and another member of the audience, a fellow named Ali in possession of a very robust knowledge of the Qur'an, tried to get across the profound contradiction imposed by the combination of omniscience and omnipotence as espoused by the speakers. This was largely lost on the speakers, at which point the organizer tried to salvage the evening by calling on another member of the audience.
This was a mistake. She called upon a mathematician in the audience named Alfonso who launched into a blistering tirade against their numerology, pointing out that very similar analyses had been done on numerous other books and were all based on the simple preponderance of coincidence available with very large data sets. I think it was a combination of his accent, rapid speech, hostility, and calling out of nonsense that would shake their worldview, but his question was not well received. The organizer herself got quite upset and snappy, and once again tried shuffling between questioners to ease the burden.
Alas, things continued to not go well as more of the audience clamped on to inconsistencies and fallacies. I got a brief moment to speak (I believe that the organizer was once again seeking reprieve), so I made the attempt of trying to engage the speakers on their level. My question was that even if one accepted what they were saying, why would God have let hundreds of generations of people live in complete and utter ignorance all over the world prior to revealing His word through the Qur'an, and even once that was revealed he continued to neglect the people of the Americas and Australia and other regions for more centuries. Even once he released the Qur'an, he did so with ultimate "proof" of his existence embedded in a manner that would require the invention of modern computers to adequately analyze, thereby preventing its discovery until 1974 (when this numerology was apparently completed). To my profound disappointment, the second speaker (who was standing at the podium at the time) said that he thought one of the other speakers should answer my question because he wasn't well versed in "that sort of thing", at which point no one else came up and the organizer simply called on another person. So much for my attempt to engage the speakers on their own level.
The final response was a calm and quiet audience member (I don't know his name) who simply pointed out the fact that numerical coincidences do not provide a proof. This was met with some uncomfortable squirming of the speakers as they professed to be "simply presenting information for others to make up their minds about". When they finally asked what a valid mathematical proof entailed, Alfonso started to give an answer when the organizer abruptly (and, I think, quite rudely) cut him off and wished everyone a good night, bringing the evening to a close.
Thus ended an odd and somewhat vexing (though still rather entertaining) evening.
|
OPCFW_CODE
|
What Is a VPC? A Beginner's Guide to AWS VPCs
Amazon Virtual Private Cloud (VPC) is a service that enables you to deploy AWS resources within a network that is completely isolated and customized according to your needs. With VPC you have control, over your networking environment allowing you to choose the IP address range create subnets and configure route tables and network gateways.
What are the benefits of using an AWS VPC?
Reduced downtime and inconvenience
Reduced risk of data breaches
How do I create an AWS VPC?
Most users prefer to use the AWS Management Console to create a VPC. Here’s how to set up your VPC step-by-step:
Step 1: Go the AWS management console and search for VPC and select it.
Step 2: Click on Create VPC
Step 3: Under the VPC setting select the vpc only, give the name of the VPC and set the IPv4 CIDR for na and click on the Create VPC. For now say Name: demo-vpc IPv4-CIDR: 10.0.0.0/16
Step 4: Now, select the Subnet on the dashboard column.
Step 5: Click on Create subnet and select the VPC created earlier, which is
Step 6: Give the name of the subnet and set the IPv4 CIDR block and click the create subnet.
Step 7: For creating a private subnet give the name and set the CIDR block such that it doesn't overlap and click the Create subnet.
now you can see the available subnet.
Step 8: Now go to the Route tables under the Dashboard.
Step 9: Now Create the route table and Name the table for the public subnet set the name and select the VPC created earlier.
Step 10: For the private subnet again, create the route table give a name and select the VPC created earlier.
Step 11: Select the Internet gateways from the Dashboard and Click on Create Internet gateway in the top right corner.
Step 12: Give a name for the Internet gateway settings, this is created to connect our VPC to the Internet.
Step 13: At the top right corner Click on Attach to a VPC and under Available VPCs select the with the name demo-vpc created earlier and click on Attach internet gateway.
Step 14: From the Dashboard Click on NAT gateways and on the top right corner Select the Create NAT gateway
Step 15: Under the NAT gateway settings, give a Name to the NAT, under Subnet Field Select a Public subnet created earlier and, on Connectivity type select Public
under Elastic IP allocation ID click on the Allocate Elastic IP button. And Create NAT gateway at the end.
Step 16: Go to the Route table and firstly select the public route table we created earlier.
under the Routes click Edit routes button on the right side of Routes.
Select the Add route and under Destination Select 0.0.0.0/0 and under Target select Internet Gateway that we created since we are targeting any traffic that needs to go to the internet to the IGW and Save changes.
now select the Subnet associations next to the Routes and click on the Edit subnet association under Explicit subnet associations and select the public subnet created earlier and Save associations.
Step 17: Go back to the Route tables and select the private route table and click on the Edit routes.
Under the Destination select 0.0.0.0/0 and under Target select NAT Gateway and Save changes.
Click the Subnet associations and under Explicit Subnet associations click on Edit subnet associations.
select the private subnet we created earlier and save associations.
Step 18: Under the search bar search EC2 and click on it.
Step 19: Click on the Launch Instance
we are creating two instances public and private instance.
Step 20: For the first instance give a name to the instance say instance-public
Step 21: Create a key pair for now we named it randomkey
Step 22: Under the Network settings Click on Edit
Select the VPC created earlier and under subnet field select the public subnet.
Enable the Auto-assign public IP
Step 23: Launch the Instance
Step 24: we are creating a private instance for that click on Launch instance
Step 25: Name the instance for now we name it instance-private.
Step 26: You can select the same key pair or you can create a new one. For now we use the same key pair "randomkey"
Step 27: Under the Network settings Click on Edit.
Select the VPC created earlier we have "demo-vpc" , on Subnet select the private-vpc and Enable the Auto-assign public IP
Step 28: You can see the two instances now, select the public Instance which Status is 2/2 check passed. we have "instance-public" and click on Connect.
Copy the Command from the SSH client
Open the Terminal and change directory where your key-pair is
change the permission of the key-pair.
chmod 400 randomkey.pem
connect to the public-instance
ssh -i "key-pair_name.pem" ec2-user@<Public IP>
you can ping to google to see if there's a connection established.
Connecting Private instance from public instace
you need to note down the Private IPv4 addresses
open the keypair file and copy the text.
create a file and paste the copied keypair text inside and save and exit.
change the file permission
chmod 400 filename.pem
connect the instance using private ip address.
ssh -i filename.pem ec2-user@<private_ip_address>
you can ping to see the Private instance is connected to internet or not .
you have done it.
© 2023, Amazon Web Services, Inc.
|
OPCFW_CODE
|
The Navixy platform offers the capability to seamlessly integrate with SMS gateways, enabling the sending of messages. To leverage all the advantages of this feature, it's necessary to establish a connection with an SMS gateway. It's important to note that this functionality is not activated by default and can only be accessed once you configure the SMS gateway settings and activate it in the database.
SMS gateway is not configured in Admin panel. To apply or change its settings, you need a direct access to database.
SMS gateway is used for the following purposes:
Device activation (sending activation SMS commands).
Sending event notifications (alerts).
Device configuration directly from the user interface.
Here we outline troubleshooting steps if you are experiencing problems with text message delivery as well as automatic device activation.
Check SMS gateway settings
First thing to check is whether you actually have an SMS gateway activated for the platform. To do this, access your database and execute the following SQL query:
select * from google.sms_gates_to_dealers;
If the output is empty, then you have no SMS gateway activated. Proceed to SMS gateway configuration section to find information on setting it up.
If the output contains information, remember the
gate_id value and execute the following query:
select * from google.sms_gates;
Find the active gateway by its
id value (obtained as
gate_id previously) and make sure its connection parameters in
params column are valid. If they are incorrect, edit them.
If all the parameters are fine, and SMS gateway is activated, but no messages are delivered, proceed to the next step.
Check messaging service status
If setting seem correct, make sure that Navixy SMS-server is up and running. This is the service responsible for all the messaging. Without it, neither emails nor SMS can be sent.
If SMS-server is running but messages are still not delivered, proceed to the next step.
Check service logs
Check SMS-server log for any errors.
To find errors related to SMS message delivery to a specific phone number, search for log entries containing that phone.
For Linux, use the following command:
grep "12345678910" log.txt
For Windows, use any advanced text editor, as the default Notepad is unable to handle large text files properly.
The most common errors in SMS gateway operation are as follows:
Incorrect authentication data (login/password or API tokens).
SMS gateway address and/or port unavailable (network issues).
SMS gateway address and/or port incorrectly specified.
Insufficient SMS gateway funds.
Exceeded SMS gateway message quota.
All of the above problems can be clearly identified based on log entries.
Certain SMS gateways have been observed to remove the initial double space in an SMS message. However, it is important to note that this space is necessary for activating certain devices, such as Teltonika. If you encounter difficulties while activating your devices, we recommend reaching out to your SMS gateway provider to verify their support for double spaces.
If you find any errors indicating a failure on the Navixy platform side, be sure to report them to technical support, and we will provide all the necessary assistance.
|
OPCFW_CODE
|
Or, how I learned to stop worrying about which hosting providers supported my OS and love QEMU/KVM
So uh recently I was thinking about migrating my main server (the one that hosts, among other things, this website) to a new, improved, cleaner server. The one that runs this is on debian, lived through wheezy, jessie and stretch, and since I’ve experimented on it a fair bit in the ~4 years it’s been running, is littered with weird projects and packages that shouldn’t be installed and stuff. Yeah, that’s practically the definition of “bad admin practices”, but I was young(er) when this server started running.
Anyway, I was thinking of upgrading to a new server with FreeBSD, and using Jails to isolate the services (it’s, uh, still a WIP). Since I like Online.net a lot when it comes to hosting servers (they’re relatively cheap and they provide good service, which is all I ask generally), and that they support FreeBSD, I decided to order a server from them and work on the migration over the next few weeks.
Alas! After ordering the server, it appears they only support FreeBSD on UFS! Since I was born after 1983, I didn’t want to use UFS as root on a FreeBSD server, that would be a waste! So, obviously, I decided to use the KVM-over-IP access they provide to load up an ISO and install things my way.
Well, I was a fool, cause the class of server I ordered (the cheapest) don’t have KVM-over-IP! That’s a feature reserved for the slightly more expensive ones. But I didn’t want to upgrade and pay more per month, so I thought and thought, and I ended up coming with the following solution
So the idea is pretty simple: spawn a Qemu VM, with its first disk being the server’s physical disk, and the ISO of the OS you want to install. Then perform a simple installation, fix things up a bit (network interface name/IP, stuff like that), reboot, and profit.
What I did for FreeBSD 11 specifically was
sudo apt install qemu-kvm wget http://ftp.fr.freebsd.org/mirrors/ftp.freebsd.org/releases/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-disc1.iso qemu-system-x86_64 -hda /dev/sda -cdrom FreeBSD-11.1-RELEASE-amd64-disc1.iso -net nic,model=e1000 -curses -boot d
Then, do the install in the, uh, even-uglier-than-usual environment of the curses Qemu interface. Nothing special about this, it’s a standard FreeBSD install. Afterwards, spawn a shell, edit /etc/rc.conf
ifconfig_igb0="inet xxx.xxx.xxx.xxx netmask 255.255.255.0" defaultrouter="xxx.xxx.xxx.xxx"
Reboot, and your server should come up. If it doesn’t, well, you can always boot the recovery FreeBSD system to see what’s wrong, or reinstall and retry.
Of course, I’m speaking about FreeBSD here but this works with any target OS, linux, FreeBSD, Windows, Haiku, Plan9… whatever.
It’s kinda hacky, but
|
OPCFW_CODE
|
The "Art" of Digital Design
A course offering under the Georgia Tech Honors Program, ECE2883-HPC: The "Art" of Digital Design, provides a multidisciplinary opportunity for Tech students.
Just as 3D printers and computer-aided design tools have opened up prototyping physical parts to a wide range of users, FPGAs (field-programmable gate arrays) have done something similar for digital electronics. But they've been around a lot longer than 3D printers! Why has their impact been more limited? What are they good for? We'll explore this and more.
But I don't know anything about digital electronics!
Right. You're not alone. But returning to that 3D-printer analogy, there are a lot of people making 3D parts that don't have the optimal structure or even use the best material. There's some room to learn along the way, and maybe make a few mistakes. And in the end, you might not be the best digital designer, but on some future project team, you might be the ME (or IE, or ID, or AE, or BME, etc.) who knows what's possible to do in the electronics, and maybe even enough to get it done.
No books are required. A variety of electronic materials will be provided, including videos and lab exercises.
Why is this "more" than what we need? Well, we don't actually have to design a "computer" to accomplish something significant.
If you want even more background, an excellent textbook available at no cost electronically through the Tech library is Digital Design and Computer Architecture, by Harris & Harris:
- Library record
- Direct link to electronic copy in ScienceDirect (you must first be logged in to GT Library in your browser)
Some optional readings will be given in Harris & Harris from time to time.
You will need your GT Computer Account — An account that allows you to log in to a campus Windows computer using a GT Active Directory (AD) account & password.
You will eventually utilize the free download of Altera Quartus II Web Edition, version 13.0, Service Pack 1 (installation procedure to be described later).
If you REALLY want books, you will get something out of the books currently used in ECE2031:
- Digital Design Laboratory Manual Second Edition, by Thomas Collins and Christopher Twigg (ISBN: 978-0-7575-7157-2). Available in the bookstores or directly from the publisher at http://www.kendallhunt.com/store-product.aspx?id=45513 (including an eBook option).
- Rapid Prototyping of Digital Systems SOPC Edition, by Hamblen, Hall, & Furman.
How does this relate to that other course, ECE2031?
Another course (ECE2031) utilizes the lab a lot more than this one. Those students are taking a required course for ECE, CmpE, and some CS students. They use the same CAD tools and equipment, but
- they start with a prerequisite in digital logic that mainly allows them to take a more analytical approach, and also allows them to conclude the course with the full implementation of a computer;
- they follow a more rigid structure, characteristic of a course that handles a lot of students;
- they have a design project, but it is not as thoroughly integrated across the entire semester as yours.
What to expect
We meet weekly for a fairly typical Honors Seminar in our assigned Clough classrom. We will eventually split up into project teams, and they will meet at mutually agreed times, sometimes in the Digital Design Lab (Van Leer E283), and sometimes in locations convenient to the teams (who will have take-home hardware and software). We'll have a problem-based learning model, with a bit more "pushing" of information in the seminar session, along with some discussion about what we need to do in the lab to move each of your projects along. Then, in the lab, you'll be assisted by Dr. Collins and Kevin Johnson in a combination of learning activities and designing/building activities.
|
OPCFW_CODE
|
Field Flux control and Speed Of DC motor
I was studying electromagnetism and I tried to link with working of DC motors.
For the speed control of DC motor - The Flux of DC motor is reduced to increase the speed above rated speed.
I understood that as we reduce the flux density(B) due to field coils the
back emf produced in the armature coil reduces as per faradays law. So
Armature current will increase as per KCL applied in the armature circuit.
Ea= V - Ia*Ra
This increase in Armature current produces a further increase in magnetic flux density due
to the armature current.
Thus overall Magnetic flux density in the air gap increase. This
results for the increase in Magnetic torque on the armature winding due to
Lorentz force equation
and thus the motor's angular acceleration increases and thus the speed increases.
Is this understanding the right one? Please comment on whether this understanding the right one or any deeper understanding is available.
What happens if I Consider the armature reaction?
I also learnt that Flux weakening is commonly used to achieve motor speeds above rated speed.
As the increase in speed is indirectly achieved my increase in armature current Doesn't this make the current in the armature coil go beyond nominal value or rated value?
I understood that as we reduce the flux density(B) due to field coils the back emf produced in the armature coil reduces as per faradays law. So Armature current will increase as per KCL applied in the armature circuit.
Ea= V - Ia*Ra
OK so far.
This results for the increase in Magnetic torque on the armature winding due to Lorentz force equation and thus the motor's angular acceleration increases and thus the speed increases.
The increase in torque is a product of the flux and the increased armature current.
Thus overall Magnetic flux density in the air gap increase.
No - the flux generated by the armature conductors is tangential to the airgap. It's net effect is to increase the flux at one side of the pole, and decrease it at the other, but the sum of the flux doesn't increase, and due to the non-linearity of the magnetization curve of the lamination steel, may decrease a little. Armature reaction is the term that describes this distortion.
As the increase in speed is indirectly achieved my increase in armature current Doesn't this make the current in the armature coil go beyond nominal value or rated value?
If you are trying to produce the same torque with the weakened field, then yes, the current will exceed the rated value. Normally the motor will operate at a constant power condition under field weakening, and the maximum torque available will fall proportionally as speed rises, and the armature current will not change much.
The increase in torque is a product of the flux and the increased armature current
So won't reducing Flux reduce the torque? Why is the influence of Current overrules the Reduce in flux density? @Phil G
It's net effect is to increase the flux at one side of the pole, and decrease it at the other This implies torque is varying spacially? @Phil G
Normally the motor will operate at a constant power condition under field weakening But the motor Operation depends on Load requirement. You are saying we only use Field weakening to control the speed only in such a load requirement?
@Phil G
Reducing flux reduces the torque, and as you point out, that reduces the emf, causing the current to rise - by a greater proportion than the change in flux, since the Ia*Ra term should be substantially smaller than the emf. Torque generated is a sum of the forces on the armature, but yes, some of the airgap is 'working harder'. Field weakening when you have a load that will increase with speed, or present a constant torque will cause the current to rise, and probably cause overheating. It's useful with a load that falls off with speed.
The Ia*Ra term should be substantially smaller than the emf.
This statement is not clear! @Phil G
Ia * V is the amount of electrical power that is getting converted to mechanical power. Ia * Ra is lost as heat, so for a motor that has reasonable efficiency, the Ia*Ra term should be small, maybe 25% or less.
|
STACK_EXCHANGE
|
Infinite Loading Screen on Raspberry Pi 2
Hello there!
So I've installed the latest 1.740 Webmin on my Raspberry Pi 2 (Raspian) and activated this theme. I've even updated it already so there won't be a problem anymore, at least I thought so. When I try to access it from my Desktop browser nothing happens and is stuck in an infinite loading screen. Only the little loading-animation in the middle is turning around.
However when I make the browser window tiny so it'll trigger the little button on the left (for mobile devices) I can access other screens just fine and the theme starts working, it seems that there's a problem with the Dashboard display.
It's because some Perl modules are missing. Version 11.00 is coming very soon to completely fix this problem and bring new features. Meanwhile double click on the loader to make disappear and see the error message. There were familiar post just few days ago, it will help you to fix it. Please take a look:
https://github.com/qooob/authentic-theme/issues/135
Ah, thanks for that tip! So, yeah. I'll post it here anyways, just in case :)
Error - Perl execution failed
Can't locate LWP/Simple.pm in @INC (@INC contains: /home/pi/webmin-1.740 /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl . /home/pi/webmin-1.740/ ..) at /home/pi/webmin-1.740/authentic-theme/sysinfo.cgi line 196.
BEGIN failed--compilation aborted at /home/pi/webmin-1.740/authentic-theme/sysinfo.cgi line 196.
Does the theme work for you now?
After double-clicking I can choose different screens on the left, yes.
to fix the issue install the missing modules:
apt-get install libwww-perl
To fix the issue simple update to version 11. It has no dependencies!
https://github.com/qooob/authentic-theme/raw/master/.build/authentic-theme-latest.wbt.gz
Where to extract it? The theme was installed in webmin after the latest update...
Don't extract it anywhere. Goto Webmin->Webmin Configuration->Webmin Themes, then there in tabs click on Install Theme and choose From ftp or http URL. Enter the link above and hit enter.
If you want to do it manually. Go to /usr/libexec/webmin/authentic-theme and copy all files there. You could delete all files from there first.
That was easy indeed! Would be nice if that could be added to the documentation.
README.md is getting a bit long, do you mind if I create a /docs folder and separate the readme in multiple files?
Awesome! Will try once I'm home ;)
Thanks for the fast fix :)
Thank you, Richard, but no. I don't think it's necessary. Using Ctrl+F in your browser and searching for install is something that almost anybody could do.
https://github.com/qooob/authentic-theme#how-do-i-install-authentic-theme
I made docs for this purpose.
:+1:
|
GITHUB_ARCHIVE
|
Provide an input for Tick Format
Hi,
it seems that there is no input field for tick format in d3 formatting mini-language (at least I can't find any). Could you add this to the editor?
Hi!
Is your use-case covered by #686 or did you have another thing in mind for this issue?
Here is what the current version contains for tick formatting:
I was thinking more of a mask input for the tick labels, so that I could input, for example, a date formatting mask like %yyyy-%mm. This way user would have greater control on the actual tick label displayed.
Thanks for the suggestion! This editor wraps plotly.js pretty directly, and unfortunately I don't believe plotly.js supports such a parameter, so to get this done we would need a feature request over at https://github.com/plotly/plotly.js and get that merged in, and then we could add editor support for it.
Ok, should I create one there?
Yep, that's probably a good idea!
It looks like tickformat exists as an axis layout option in plotly.js: https://github.com/plotly/plotly.js/blob/master/src/plots/cartesian/layout_attributes.js#L504
Would that work to expose as an option in the editor?
It looks like tickformat exists as an axis layout option in plotly.js: https://github.com/plotly/plotly.js/blob/master/src/plots/cartesian/layout_attributes.js#L504
Would that work to expose as an option in the editor?
Yes, this is exactly what I'm after!
On a related note, would it be possible to make the date suffix on the first tick optional? That may be buried within plotly.js as well.
I really like having the date suffix there most of the time. However when the data are actually small time intervals, I'd like to be able to hide the epoch date that shows up. In the example above, the metric is on the order of 20 minutes. That could be more clear by enabling tickformat and not showing the "Jan 1, 1970" in this case.
I'm not sure about adding such a "programmery" widget... I don't know where we would provide the documentation for what's allowed in that textbox etc.
How does this relate to this writeup and the "Custom date format" shown on that page (presumably from an earlier editor)?
If it's labeled "Custom date format" with a tooltip having an example time format or two, would that work?
Hehe that is indeed a very old tutorial that refers to version 1 of our editor (this react-chart-editor is essentially version 3).
I'll think about what to do here. Maybe I'll do what I did with the separators in Style/General and provide a number of built-ins or something.
Note that there are a number of usability/context issues here: plotly.js doesn't provide a default value for this field as far as I can tell, and the applicable mini-language depends on the axis type (numerical vs date) so a fair bit of logic needs to be coded in order to provide decent feedback to the user about what's allowed when.
Yeah, I see where you're coming from. I noticed that the Exponents dropdown is only displayed for numeric axes (although I'm not sure how that happens behind the scenes). Could a prospective date format input work similarly, and only display for date axes? That could take care of the when it's allowed piece. For what is allowed, could a tooltip suggesting a couple examples be sufficient?
Could a prospective date format input work similarly, and only display for date axes?
Unfortunately not because it's the same tickformat that applies to both number formatting and date formatting.
For what is allowed, could a tooltip suggesting a couple examples be sufficient?
It could, yes, but we don't have any facility in the framework right now for per-input tooltips of this kind, so we'd be adding it in just for this, which doesn't feel great.
I could represent this as a simple dropdown with two options: auto/custom, similar to the way the prefix and suffix inputs work. Custom would allow the user to type this special mini-language and would override and hide the show-exponents, exponent-format and separate-thousands inputs automatically, but it still feels very out of place in this editor to have this totally opaque input field that allows this cryptic mini-programming language in the middle of what is an otherwise almost totally mouse-drive interface.
OK, how about something like this? I've added a dropdown with "simple" and "advanced" modes and things more or less auto-hide as expected...
Default:
Override:
Ugh, s is a valid number format but not date format unfortunately. Won't be quite that easy :)
OK, there we go, conditional options for date and non-date axis types. You can try it here #762
Thanks for playing around with this!! I appreciate the complexities here more, now that I see it flipping between d3-format and d3-time-format in advanced mode depending on type.
I think the advanced formatting options for numeric axes will be useful in some cases, and this looks like it will cover my intent with date axes nicely as well. Thank you!
Thanks for keeping the pressure up. I don't love the solution but it works well enough, and it wasn't all that hairy to implement after all :)
@bhogan-mitre in answer to your question about the suffix, I actually think that setting a tickformat will control that suffix.
I noticed that when I was playing around in the dev app as well. Very nice! Thank you.
|
GITHUB_ARCHIVE
|
In OpenSim 3.1, we've focused on improving the OpenSim scripting interface, accessible through the Graphical User Interface (GUI) and Matlab. We've also added to the OpenSim modeling and simulation libraries, with new components for modeling devices and estimating metabolic cost. OpenSim 3.1 also includes several usability improvements, including a toolbar for quickly running a forward simulation. As with every release, we've also implemented performance improvements and bug fixes. Improvements and additions include:
Enhanced scripting interface in Matlab and the GUI
Scripting through Matlab and the GUI allows users to call the full set of OpenSim modeling and simulation libraries (the API) without learning C++. Scripting also allows users to create batch scripts to streamline their workflows. We improved the OpenSim scripting interface for OpenSim 3.1. Enhancements include:
- Exposure of nearly all of the OpenSim API, including the most commonly used Simbody dynamics objects and functions, the Moment Arm Solver, and all the new modeling components described below
- Autocompletion in Matlab
- A streamlined Scripts menu in the GUI with shortcuts to find and run recent scripts.
- Enhanced error handling and reporting for script users
- Expanded plotting capabilities, including:
- Plotting of threshold values
- Plotting of fiber lengths, tendon lengths, etc. as a function of a motion
- Ability to set labels on plots and get handle to plot windows after creation for additional editing
The metabolic calculators or probes, based on the work of Umberger and colleagues (2010) and Bhargava and colleagues (2004), estimate cost during motion for the whole body or individual muscles. Learn more about OpenSim Probes in the User's Guide and try them out in the example Simulation-Based Design to Reduce Metabolic Cost.
Expanded modeling toolbox
The toolbox is designed to improve OpenSim's ability to design assistive devices and other mechanisms. The new components can interface with the OpenSim workflow (e.g. computed muscle control) and are editable in the GUI. Where appropriate, they are visualized in the GUI and the API visualizer. The new components include:
- A spring that acts along a path (PathSpring)
- A clutched path spring (ClutchedPathSpring)
- An expression based bushing that applies torques as a symbolic function of deflections (ExpressionBasedBushing)
- An expression based point to point force that applies forces as a function of deflection and deflection speed (ExpressionBasedPointToPointForce)
- A planar joint (PlanarJoint)
- A gimbal joint (GimbalJoint)
You can find documentation for each of these components in the OpenSim Doxygen, look up how to define these components in the GUI's XML Browser, and see examples in the new tutorial Dynamic Walking Challenge: Go the Distance!.
Forward simulation toolbar in the GUI
The new toolbar for quickly running forward simulations in the GUI allows users to:
- Set coordinate speeds as initial conditions for a forward simulation
- Quickly run and stop a forward simulation using the simulation controls in the GUI Toolbar
Performance and accuracy improvements
- We added tighter assembly tolerance to enforce constraints.
- We've fixed several memory leaks throughout the OpenSim code base.
- We implemented several improvements to the new Millard muscles models to reduce computation time.
Additional Usability Improvements
- The XML browser in the GUI now maintains new lines when you copy and paste to an .osim model file in a text editor.
- Users can add and delete probes via the Navigator Window in the GUI.
- To simplify the setup process for reducing residuals, we've hidden the (almost always unnecessary/redundant) Control Constraints file from RRA settings. The examples have been updated to reflect this change and we issue a "deprecated" warning when a user runs RRA with a constraints file.
- Users can set max iterations and convergence tolerance of the optimizer for Static Optimization.
Additional API Improvements
- We created a new "advanced" interface for ModelComponents, consisting of overrideable realize() methods.
- Millard Muscle curves are OpenSim Functions to allow plotting via scripting.
- We added a method to set the name of the output file for the InverseDynamicsTool.
- Users can set muscle thickness and color in the API visualizer.
We've added to the library of OpenSim examples and tutorials. New materials include:
- A new simple gait model with 10 degrees of freedom and 18 muscles that uses the MillardEquilibrium muscle model
- Simulation-Based Design to Reduce Metabolic Cost
- Dynamic Walking Challenge: Go the Distance!
- Pulling Out the Stops: Designing a Muscle for a Tug-of-War Competition
- Sky High: Coordinating Muscles for Optimal Jump Performance
- Depreciated_CPP_From the Ground Up: Building a Passive Dynamic Walker Model
- Fixed crash in Excitation Editor when creating new excitations for a new model
- Fix bug where excitation editor was losing changes upon load of a layout
- Fixed crash in Matlab/Scripting when creating Joints and adding them to model
- Fixed bug where ContactMesh loses associated geometry file on serialization
- Fixed bugs preventing loops from the GUI scripting shell
- Fixed GUI script handling of non-existing models
- Fixed bug where a Controller added to a model in the API was lost on serialization
- Enabled text and graphical editing of all actuators that use GeometryPath
- Fixed handling of coordinates out of range when computing geometry paths; prevents memory leaks
- Fixed bug in calculation of deactivation time constant for Thelen and Millard muscle models to reflect published paper
- Fix to Millard muscle models so steady state activation reaches the value of input steady state control
- Added option to exclude actuators from being controlled by CMC
- Removed obsolete Edit context menu from Actuator nodes in GUI
- Fixed slow plotting of Moments and MomentArms in the GUI plotter
- Removed duplicate defaults on round-trip read/save of objects by excitation editor.
- Fixed GUI handling of optional properties
- Fixed a bug in reporting of CMC and RRA results where analysis outputs were being linearly interpolated and then sampled
Fixed a bug in StaticOptimization that was estimating target accelerations from double differentiated coordinates (q) after the model had been assembled
Fix setting of initialStates in ForwardTool to be based on name rather than assumed (possibly incorrect) order.
- Fixed GUI tool issue with sizing of Time settings box
- Fixed bug in previewing experimental data
|
OPCFW_CODE
|
EnableAmendment timer starts at 77.78% (Version: [rippled 1.5.0])
Issue Description
I may be reading this incorrectly - but I think the 14 day timer to enable an amendment starts at 77.78% (28 Ayes out of 36 as of today). 28 Ayes of 36 is only 77.78% agreement. Amendment process doc says this must be 80% or above.
Observed this behavior with RequireFullyCanonicalSig amendment which is hovering around 27 Ayes at the moment. Once it achieved 28 Ayes (=77.78% agreement), the validator inserted an EnableAmendment transaction and the 14 day timer started.
Steps to Reproduce
Run:
# /opt/ripple/bin/rippled feature
on the validator when an amendment has 28 Ayes out of 36.
Expected Result
EnableAmendment transaction is not inserted into the ledger.
majority property is not set.
Actual Result
EnableAmendment transaction is inserted into the ledger.
majority property is set (see screenshot above).
Environment
rippled-1.5.0
Supporting Files
Thanks for opening an issue. A couple of people had already reached out to me about the DeletableAccounts amendment, wondering how come it reached majority, since 28 validators seem to support it but (a) the consensus quorum is presently 29 and (b) 28/36 is strictly less than 80%, which is the amendment activation threshold, so I already had a chance to triage this..
The answer boils down to two things:
First, for historical and largely obsolete reasons, the calculation for the majority is done in base 256, with the quorum threshold set to (V * 204)/256 where V is the total number of trusted validations. There are currently 36 validators on Ripple's recommended UNL, so I am assuming a server running with a UNL with a size of 36 for this example, but similar calculations apply for other sizes:
So the code calculates (36 * 204) / 256. Ref:
vote->mThreshold = std::max(1,
(vote->mTrustedValidations * majorityFraction_) / 256);
JLOG (j_.debug()) <<
"Received " << vote->mTrustedValidations <<
" trusted validations, threshold is: " << vote->mThreshold;
Now, if you plug that into a calculator you'll see it works out to 28.6875. So what's wrong?
The code is using integer arithmetic, which "truncates toward zero’’. Basically, you can think of it as "rounding down to the nearest integer" and what is the nearest integer that is strictly smaller? It's 28. For a little deeper dive into this check out this excellent StackOverflow answer.
The second, problem is that the code checks whether an amendment has achieved the required quorum by using greater than or equal to instead of strictly greater than. Ref:
bool const hasValMajority =
(vote->votes (entry.first) >= vote->mThreshold);
While the behavior of the code is well-defined in terms of C/C++, it's not ideal from a human perspective and certainly complicates the understanding of quorum when it comes to amendments.
I believe that this issue needs to be addressed in a future release of rippled, preferably with the 1.6.0 release. Ironically, the fix will require an amendment. :sweat_smile:
|
GITHUB_ARCHIVE
|
M: Mistakes: The Price Of Progress - melvinram
http://48hrlaunch.wordpress.com/2008/08/26/mistakes-the-price-of-progress/
R: demallien
Ahhh, so true.
It's like that whiny voice you get in your head when you're trying to work
with a poorly documented API, and you get blocked trying to figure out how to
do something. Then you get that after-lunch caffeine shot, or whatever, and
you realise - 'hey, I could, you know, just _try_ a few things, see how the
framework responds', and you have a prototype up and running in 5 mins...
I encounter the same problem when trying to write fiction - total Writer's
Block, because you are worried about the fact that what you are writing is
crap. Yes, it probably is crap, but don't worry too much, it's easier to edit
crap into something good than it is to write something good from scratch.
I just wish that I had a reliable way of getting myself hit over the head with
a piece of 4x2 each time I block like that, when writing _or_ programming,
rather than wasting unproductive, frustrating hours staring at a blank
screen...
R: melvinram
So I'm not the only one? lol Excellent!
|
HACKER_NEWS
|
While loops and breaking when boolean becomes false
I am trying to make a loop whereby the program keeps on going until bank>100000, and within that while loop there is another while loop which determines whether bank has reached a minimum(whether user has failed, and bank resets to 20000).
Currently the loop runs and continues forever even if successive iterations reach the 1000000 condition. Can anyone see a way of resolving this?
n.b. 'singles' is just an array containing one number.
Many thanks.
i = 0
newbank = []
iterations = []
while bank < 1000000:
bank = 20000
while bank > 100:
spin = randint(0, 38)
if choices2 == singles:
if spin == singles:
bank = bank + (bet * 35)
else:
bank = bank - bet
i = i + 1
iterations.append(i)
newbank.append(bank)
Fact: I find code with 1-space indents too hard to read.
what is the value of bet ?
@MartijnPieters: enjoy :)
Offtopic: I can not imagine more awful pythonic code. D: "iterations.append(i)" - WHY?! Do choices2 change during this cycle? Do you really need this if statement?
The value of the bet is 100<=bet<=20000 in 100 increments.
@joe: if singles is an array, spin, an int, will never be equal to it.
Your second while continues to run even if bank < 10000000, so you'll never leave that loop, and if you do, well, you'll stay on the first loop.
There is no way your code can stop.
Do you want something like...
while bank > 100 and bank < 1000000:
I don't know exactly what logic you're trying to achieve, but you need to think better about your stopping conditions.
Edit:
Upon reading your question better, I think you should get rid of the two loops, since one will be more than enough.
Initialize your bank = 20000 outside the loop,
bank = 20000
while bank < 1000000:
if bank < 100:
bank = 20000
....
D'oh! Yeah had forgotten to set the limit of the bank in the second while loop to <1000000
|
STACK_EXCHANGE
|
const chai = require('chai')
const expect = chai.expect
const samples = require('./interval-samples')
const isEmpty = require('../src/is-empty')
const intersection = require('../src/intersection')
describe('intersection', function () {
describe('both intervals are empty', function () {
it('it returns an empty interval', function () {
const first = samples['[2, 0)']
const second = samples['[3, 0]']
const result = intersection(first, second)
expect(isEmpty(result)).to.be.equal(true)
})
it('it returns an empty interval', function () {
const first = samples['(4, -4)']
const second = samples['(2, -2)']
const result = intersection(first, second)
expect(isEmpty(result)).to.be.equal(true)
})
})
describe('one interval is empty', function () {
it('it returns an empty interval', function () {
const firstEmpty = samples['[2, 0)']
const second = samples['[3, 9]']
const result = intersection(firstEmpty, second)
expect(isEmpty(result)).to.be.equal(true)
})
it('it returns an empty interval', function () {
const first = samples['(3, 11]']
const secondEmpty = samples['(2, -2)']
const result = intersection(first, secondEmpty)
expect(isEmpty(result)).to.be.equal(true)
})
})
describe('both interval are not empty', function () {
it('intersection of [3, 9] and [-4, -2] returns empty interval', function () {
const first = samples['[3, 9]']
const second = samples['[-4, -2]']
const result = intersection(first, second)
expect(isEmpty(result)).to.be.equal(true)
})
it('intersection of (2, 5) and [3, 9] returns [3, 5)', function () {
const first = samples['(2, 5)']
const second = samples['[3, 9]']
const result = intersection(first, second)
expect(result).to.be.deep.equal(samples['[3, 5)'])
})
it('intersection of [3, 9] and [4, 7) returns empty interval', function () {
const first = samples['[3, 9]']
const second = samples['[4, 7)']
const result = intersection(first, second)
expect(result).to.be.deep.equal(samples['[4, 7)'])
})
})
})
|
STACK_EDU
|
cmd/go: provide a convenient way to iterate all packages in go.work workspace
go.work makes it easier to work with multiple modules.
Often users want to run tests or analysis for all packages in their workspace. For a single module case, this can be easily achieved by using ./... pattern. Currently, however, there is no equivalent of ./... in workspace mode. So users need to run go commands from inside of each individual module, or supply the paths for each module directories listed in go.work. (e.g. go test -v $(go list -f '{{.Dir}}/...' -m | xargs)).
We need to figure out how to offer ./...-equivalent behavior when working with `go.work.
My personal preference is to reuse./... (VSCode Go uses ./... pattern to implement workspace-wide test, lint, coverage computation, etc).
p.s. I noticed the issue about clarifying the meaning of ... in module mode is still open https://github.com/golang/go/issues/37227
go test -v $(go list -f '{{.Dir}}/...' -m | xargs)
This is especially not possible in CI, as the go.work file shouldn't be checked in (according to the proposal). To have a consistent solution, it would be great to solve this problem for modules in general, not only for repositories with a go.work file.
how would you decide what to include if you don't have a go.work file?
I just would like to iterate over all sub modules, e.g., run all unit test in the active module and all sub modules.
Having a go.work file for local workflow setup is a use case, but I am setting up a project that I definitely plan on adding my go.work file to my committed sources. There's not much value add, other than enabling a fork to engage in development without having to recreate an identical go.work file, but that's only because the workspace feature doesn't have workflows baked into the tooling. It seems like a huge miss to add workspaces and not have a way to run go test across all the workspace modules or to be able to run go get for a module of a workspace without using cd.
@eighty4 If you have go.work checked in, you can easily list all modules with
go list -f '{{.Dir}}' -m | xargs
You could then do something like
mods=$(go list -f '{{.Dir}}' -m | xargs)
for mod in $mods; do
(cd "$mod"; go test ./...)
done
Not perfect, but works. It's much more pain if you don't check in go.work (as recommended by the documentation) and want to do such thing...
Sorry you had to repeat the answer for me. Works pretty well:
go list -f '{{.Dir}}' -m | xargs go test
This doesn't work for go mod tidy. Is it possible to achieve this with a one-liner:
# workspace dir
go mod tidy
# cd for each module
cd composable
go mod tidy
cd ../git
go mod tidy
cd ../testutil
go mod tidy
cd ../util
go mod tidy
Reusing ./... for workspaces would clash if you had a module in the workspace root (use .).
Would ./.... (4 dots) be too subtle?
Would ./.... (4 dots) be too subtle?
Yes, it think this would be too subtle. :( But I also lack a better idea. Maybe a flag, to make commands workspace aware, in the same way -m makes them module aware?
I don't know what the ./... and go test [packages] do or are for. I could only get pattern matching to work with go test such as go test my_test.go. Do you have to specify go test github.com/fullyqualified/modulepath to test a package? ./... seemed to have no effect.
What's an example of a module aware command with -m?
4 dots is interesting idea--maybe a little more obvious would be ./.../...?
An issue with these shell scripts is that you don't get an aggregated coverage report.
I am curious about the framing of the issue here: Is this a gap in workspace, or a gap in go test? My understanding is that workspace is an optional, local convenience. The ability to run all tests incl submodules, I think, should work whether or not an individual chooses to use workspaces feature. Is that a fair understanding?
The ability to run all tests incl submodules, I think, should work whether or not an individual chooses to use workspaces feature
I agree with that. Notice that this is a general issue with go tooling, not only for running tests. I'd also like to go tidy all submodules etc.
mods=$(go list -f '{{.Dir}}' -m | xargs)
for mod in $mods; do
(cd "$mod"; go test ./...)
done
With Go 1.20, the new -C flag can be used to simplify this to
mods=$(go list -f '{{.Dir}}' -m)
for mod in $mods; do
go test -C "$mod" ./...
done
This also works for go tidy @eighty4 :
mods=$(go list -f '{{.Dir}}' -m)
for mod in $mods; do
go mod tidy -C "$mod"
done
@seankhliao, is there agreement from the maintainers that
There is a gap in go tooling any time "./..." is allowed where a user may want to specify additionally to apply the tool to submodules
The fix should work whether or not someone uses workspaces
?
If there is agreement, then what are next steps?
Do we need a formal proposal for what the syntax should be for the fix, then a PR to implement?
bump @bcmills, any advice on how to advance a solution?
My opinion would be that no pattern should cross between unrelated module dependency graphs, which can produce confusing / conflicting information on dependency resolution.
IMHO the key idea of go.work is to make the selected modules "related" and the go command runs the version selection algorithm on the combined modules and already builds a very different dependency graph.
If users want to preserve the separation, they can do GOWORK=off go test ./... from their desired directory.
@hyangah, your proposal seems sensible and consistent to me. In that case, it sounds like your are proposing to evolve go.work to be strongly recommended tool for structuring a multi-module project, and checked into version control to provide consistency. Is that a fair understanding? This way, @seankhliao's preference to keep modules separate is held.
However, if we are unwilling to formalize Workspaces to be strongly recommended for multi-module projects, then I'm not sure it makes sense.
I think it works great this way.
Workspaces should evolve to simplify the go mod tidy upkeep across all modules of a workspace with a command that performs mod tidy in each workspace module, with each tidy being its own independent operation. The statement proposal's wording feels like a hang up.
@katexochen, the following one-liners appear to work:
test:go list -f '{{.Dir}}/...' -m | xargs go test
tidy:go list -f '{{.Dir}}' -m | xargs -L1 go mod tidy -C
Should it be
go work sync
go list -f '{{.Dir}}' -m | xargs -L1 go mod tidy -C
go work sync
go list -f '{{.Dir}}' -m | xargs -L1 go mod tidy -C
or
go list -f '{{.Dir}}' -m | xargs -L1 go mod tidy -C
go work sync
go list -f '{{.Dir}}' -m | xargs -L1 go mod tidy -C
?
@eighty4 go mod tidy then go work sync so go work could pick up the respective changes from workspace module dependencies.
|
GITHUB_ARCHIVE
|
It’s not the work that’s hard, it’s the discipline.
One of the most important characteristics of effective technical writing is consistency; be it in form, function, or style of writing. Structured authoring helps maintain consistency in the structure of content.
For example, an organization might define a chapter and sections within the chapter as:
The Traditional Way
Most organizations define a content structure and propagate it through training, mentoring, and style guides. It is with practice that technical writers master the intricacies of the defined content structure. Until then looking up the style guide or talking to the more experienced members of the team is the only option. Even after that, it is only the very disciplined of technical writers who is able to stay faithful to the defined structure.
Deviations from the defined structure are typically detected in a review. So one of the reviewer’s focus areas is ensuring that the defined content structure has been maintained. Given that a reviewer typically handles multiple technical writers, a significant amount of time of the reviewer’s time is spent on activities like checking document structure.
Structured Authoring: Using Technology to Define and Enforce Content Structure
The reviewer should ideally been concentrating on the content; relevance, completeness, flow, readability, and more. Anything that takes away time from the focus on content reduces the effectiveness of the reviewer.
Structured authoring is especially useful for organizations with large and/or geographically dispersed technical communications teams.
Structured authoring uses technology to define and enforce content structure. It aids the technical writer in adhering to the structure by specifying the “allowable” type of content at a particular place in the document. Even if the technical writer moves away from the defined structure, the structured authoring system will warn the technical writer of the misdemeanor. (You cannot use a table here!) So the technical writer no longer needs to remember the prescribed structure and the associated dos and dont’s. The structured authoring system does that!
From the reviewer’s point of view, structured authoring systems free them from having to check for validity of a document’s structure. That is more time that the reviewer can dedicate to the content.
Another benefit is that structured documentation systems often allow presentation (formatting) to be associated with content based on its position in the document structure. For example, a title in a first level section may use the style Heading 1, while a title within a sub-section may use the style Heading 2.
So the technical writer does not have to remember the formats to be applied. It is also one more thing off the table for a reviewer.
All in all structured authoring aids in improving the efficiency of the technical writing team and helps them in improving effectiveness.
Want to know more? Read the next post on defining content structures!
- Structured Authoring: Defining the Content Structure
- Structured Authoring: Introducing XML
- Structured Authoring: The Role of the DTD/Schema
- Understanding Rule-based Writing
|
OPCFW_CODE
|
How to detect "Use MFC" in preprocessor
For a static Win32 library, how can I detect that any of the "Use MFC" options is set?
i.e.
#ifdef ---BuildingForMFC---
....
#else
...
#endif
I have always checked for the symbol _MFC_VER being defined.
This is the version number of MFC being used 0x0700 = 7.0
It is in the "Predefined Macros" in MSDN
I've check on Visual Studio 2013, in an original project targeting only Win32 console, so I've to add MFC support (without using of the Project Wizard) in a second time. Following are my findings:
The macro _MFC_VER is defined in the afxver_.h, included by afx.h. So, if you don't include afx.h directly/indirectly in your .cpp file, you don't have the _MFC_VER macro defined. For example, including in a project a source .cpp that doesn't include the afx.h, the file will be compiled WITHOUT the definition of _MFC_VER macro. So it's useless for adapt the c++ code (an external library, for example) to detect usage of MFC library and optionally support the MFC library.
If you manually turn on the use of MFC (Select the project in Solution Explorer, than right click, Configuration Properties -> General -> Use of MFC) you have two possibilities:
A) select "Use MFC in a Shared DLL" option. This actually update the command line parameters adding the definition of _AFXDLL to the preprocessor macro list.
B) select "Use MFC in a Static Library" options. This actually remove the _AFXDLL macro defined, but no macro definition is added, so nothing can told you if MFC is actually used.
So, during my test activity, only the mode A can be used effectively to understand if the MFC library is included or not in the project under building.
I maintain a C++ cross-platform library that support many platforms (Mac OSx, WinX console, WinX MFC, iOS, Unix, Android) and enabling MFC with dynamic DLL is the only way to transparently detect the MFC presence. So for example:
#if defined(_AFXDLL)
# include <afx.h>
#endif
Obviusly, you can add manually a macro definition (_AFX) in the project preprocessor list.
The symbol _AFX is typically defined for MFC projects.
I thought so muhc, too, but when enabling "Use MFC in a dynamic library" for a Win32 Library project (that was created without 'MFC support' option), this macro is not defined.
As @peterchen said, _AFX is not defined if MFC as dynamic library option is active, but _AFXDLL instead.
|
STACK_EXCHANGE
|
We'll revert to you personally inside the minimum doable turnaround time and examine all the necessities such as System on which you need your python code being executed, the expected end result and some time that you've got to have this assignment carried out.
Higher-order functions empower partial application or currying, a method that applies a operate to its arguments separately, with Each and every application returning a fresh function that accepts the next argument.
In this area we go from sequential code that simply runs one line of code right after another to conditional code the place some steps are skipped. It's really a very simple notion - however it is how Laptop software package tends to make "possibilities".
Lambda calculus presents a theoretical framework for describing features as well as their analysis. It's a mathematical abstraction in lieu of a programming language—but it sorts The premise of almost all latest useful programming languages.
Under this classification, we offer help in speaking about your doubts concerning your python programming assignment. It often occurs that you might want to complete your programming assignment by on your own however, you lack the correct implementation understanding that is necessary for doing it.
As affected by Haskell and Some others, Perl six has numerous purposeful and declarative techniques to issues. As an example, it is possible to declaratively Establish up a well-typed recursive version (the kind constraints are optional) by way of signature pattern matching:
A fast assistance rundown: One particular- and two-character variable names are normally far too quick to get significant. Indent with
In the 1st chapter we make an effort to cover the "massive picture" of programming so you will get a "desk of contents" of the remainder of the ebook. Don't be concerned Otherwise all the things would make perfect perception The 1st time you hear it.
It’s extremely-2 straightforward to get Python Homework Help. Just post this contact form your homework and within jiffy your homework will probably be started out.
In this manner the information within the code bins could be pasted with their remark textual content into the R console To guage their utility. Occasionally, quite a few instructions are printed on 1 line and separated by a semicolon ';'. Instructions setting up having a '$' indicator must be executed from a Unix or Linux shell. Windows users can only overlook them.
So hiring us for Python homework helps to address this issue for you personally, as we take care of your work as being a top rated precedence and acquire it accomplished with utter determination.
Australia is among most effective training hub today, quite a bit of students from outdoors international locations are coming to Australia for his or her analyze.
Impure practical languages ordinarily include things like a more direct approach to handling mutable point out. Clojure, for instance, works by using managed references that may be updated by applying pure capabilities to The existing point out.
We need your electronic mail deal with in order that we can send out you an e-mail alert in the event the tutor responds on your concept.
|
OPCFW_CODE
|
Soon to be released, PowerDNS Recursor 4.0.0 includes a lot of exciting features. We have already blogged about some of them, but apart from a post on our mailing-list, the new Response Policy Zone support has not received much love.
Response Policy Zone
First of all, what is RPZ? It’s a way to provide policies to a recursive name server, allowing the use of large, quickly changing data feeds to provide custom responses to queries. RPZ data is supplied as a DNS zone, and can be loaded from a file or retrieved over the network by AXFR/IXFR.
While RPZ was first implemented by the ISC BIND name server, it’s an open and vendor neutral mechanism. You can read more about it on dnsrpz.info.
Why use RPZ?
The most common use case of RPZ is to implement security measures based on a reputation feed provided by another party, such as:
- protect customers from accessing known malware-infected hosts
- prevent known spam sources from reaching your infrastructure
While some users are building their own RPZ feed using internal data, the greatest strength of RPZ is the capacity to use an external feed. Several providers are already offering interesting feeds, for example:
Setting it up in Recursor 4.0.0
PowerDNS Recursor has the capability to get RPZ data from two sources:
- a local file
- a remote server, using IXFR
While the first option is great for mostly static data and testing purposes, the second one is particularly suited for large and rapidly changing feeds.
RPZ support configuration is done via our Lua configuration mechanism, and as such requires a Recursor compiled with Lua support.
First, let’s set up a very basic RPZ zone in a local file, basic.rpz:
This file declares a simple zone, and instructs PowerDNS to respond to queries for example.net with the address 192.168.2.5. To be valid, the zone needs to have a SOA record and at least one NS one at its apex. A more complex example can be found here.
We now need to setup a Lua configuration file for PowerDNS, using the lua-config-file
Finally, in the Lua configuration file, we use the rpzFile() directive to load the RPZ zone:
As soon as the Recursor is started with this configuration, we can check whether it has been correctly loaded:
Loading RPZ from file ‘/etc/powerdns/basic.rpz’
Done loading RPZ from file ‘/etc/powerdns/basic.rpz’
Everything is fine. As expected, a query for http://www.example.net returns 192.168.2.5:
It’s time to try a setup a bit more complicated. To retrieve a RPZ zone rpz.example.net from a remote provider at 192.0.2.1 via IXFR, we would instead use:
The additional parameter policyName has no impact on the response the Recursor will send, but it sets the appliedPolicy of the protobuf message it will output, if configured to. This allows keeping track of the queries that have been modified because of a RPZ policy, for example to be able to detect and act on infected hosts.
This time, the Recursor is a bit more verbose on startup:
Loading RPZ zone ‘rpz.example.net.’ from 192.0.2.1:53
Loaded & indexed 375 policy records so far
Done: 375 policy records active, SOA: need.to.know.only. hostmaster.example.net. 1467033991 60 60 432000 60
We will also be able to check later that IXFR updates are correctly received:
Getting IXFR deltas for rpz.example.net. from 192.0.2.1:53, our serial: 1467033991
Processing 1 delta for RPZ rpz.example.net.
Had 23 RPZ removals, 1 addition for rpz.example.net. New serial: 1467034591
That’s it, we are all set!
We only covered so far the very basic possibilities offered by RPZ. In addition to altering the response, PowerDNS Recursor 4.0.0 is capable of the following actions:
- Forcing the query to TCP by sending a response with TC set
- Sending a NOData response
- Sending a NXDomain response
- Sending a custom CNAME
- Silently dropping the query
It also supports defining a default policy that will use the data found in the RPZ zone, but will override the action, making it possible to use a feed from a provider while customizing the recursor’s behavior.
The rpzMaster example above did not use TSIG to authenticate the server, but of course PowerDNS Recursor 4.0.0 supports that too.
The corresponding documentation can be found in the Response Policy Zone (RPZ) part of our documentation.
|
OPCFW_CODE
|
A programming language is used to control the actions of a machine. Such a language is a properly drafted or constructed language when it is designed in such a way that through it instructions can be communicated to a computer system. It is generally split into two components that are the semantics and the syntax. The syntax is the form or type, the semantics are the meaning of that type or form. Every programming language is different; some may be marked by a specification documents, others may have a dominant implementation or a reference. A programming language thus broadly is a notation that helps to write programs that are identified as an algorithm.
Let’s learn about some of the top languages in detail.
This language is an object-oriented, class-based language that was developed by Sun Microsystems in the 1990s. Since then, the language continues to be the most in-demand language that also acts as a standard platform for enterprises and several mobile and games developers across the world. The app has been designed in such a way that it works across several types of platforms. This means that if a program is written on Mac Operating system then it can also run on Windows based operating systems.
If you’re looking for an open-source, interpreted language that places an emphasis on highly readable code, Python is the general purpose programming language for you. Python has a large standard library loaded with pre-coded functions for every occasion—which allows programmers to do more with fewer lines of code.
Python’s easy-to-learn code has earned it the affection of many within the scientific community, where it can be used to process large datasets. On the back-end, the Django framework excels at rapid prototyping and development, making it a favorite among startups like Pinterest and Instagram.
CSS or Cascading Style Sheets is rather a markup language. When paired with HTML, CSS allows a developer to decide and define how a web page or a website will eventually look or how it will appear to the visitors of the web platform. Some of the elements which CSS has an impact on include font size, font style, the overall layout, the colors and other design elements. This is a markup language that can be applied to several types of documents including Plain XML documents, SVG documents as well as XUL documents. For most websites across the world, CSS is the platform to opt for if they need help to create visually attractive webpages and finds use not just in the creation of web applications but also mobile apps.
Microsoft’s answer to Java, C# is a programming language hybrid of C and C++ used to develop software for their .NET platform — a framework for building and running applications and XML web services. If you’re building websites or apps for the Microsoft ecosystem, C# is the way to go. MSN, Salesforce, and of course Microsoft’s own website are all examples of major sites that use C# and ASP.NET as part of their back-end builds.
The term ‘PHP’ is used to define PHP Hypertext Processor language that is a free server-side scripting language that has been designed for not just web development but also as a general-purpose programming platform. This is a widely used language that was created in the year 2004 and now powers over 200 million websites worldwide. Some popular examples of websites powered by this platform include Facebook, WordPress, and Digg.com.
For world-class website development services, then schedule a meeting with us and we will be happy to help.
|
OPCFW_CODE
|
#!/usr/bin/env node
/*
* A Tiger Compiler
*
* This module contains functions for compiling Tiger programs into JavaScript:
* one takes the Tiger program as a string, and the other takes in a filename.
* It can also be used a script, in which case the filename is a command line
* argument, in addition to options. Synopsis:
*
* ./tiger.js -a <filename>
* writes out the AST and stops
*
* ./tiger.js -i <filename>
* writes the decorated AST (semantic graph) then stops
*
* ./tiger.js <filename>
* fully compiles and writes the compiled JavaScript.
*
* ./tiger.js -o <filename>
* optimizes the intermediate code before generating JavaScript.
*
* Output of the AST uses the object inspection functionality built into Node.js.
* The decorated AST (semantic graph) is printed as text for now, but graphics
* would be really nice to add.
*/
const fs = require('fs');
const util = require('util');
const yargs = require('yargs');
const parse = require('./ast/parser');
const analyze = require('./semantics/analyzer');
const graphView = require('./semantics/viewer');
const optimize = require('./semantics/optimizer');
const generate = require('./backend/javascript-generator');
// If compiling from a string, return the AST, IR, or compiled code as a string.
function compile(sourceCode, { astOnly, frontEndOnly, shouldOptimize }) {
let program = parse(sourceCode);
if (astOnly) {
return util.inspect(program, { depth: null, compact: true });
}
analyze(program);
if (shouldOptimize) {
optimize(program);
}
if (frontEndOnly) {
return graphView(program);
}
return generate(program);
}
// If compiling from a file, write to standard output.
function compileFile(filename, options) {
fs.readFile(filename, 'utf-8', (error, sourceCode) => {
if (error) {
console.error(error);
return;
}
console.log(compile(sourceCode, options));
});
}
// Two nice functions if you'd like to embed a compiler in your own apps.
module.exports = { compile, compileFile };
// Run the compiler as a command line application.
if (require.main === module) {
const { argv } = yargs
.usage('$0 [-a] [-o] [-i] filename')
.boolean(['a', 'o', 'i'])
.describe('a', 'show abstract syntax tree after parsing then stop')
.describe('o', 'do optimizations')
.describe('i', 'generate and show the decorated abstract syntax tree then stop')
.demand(1);
compileFile(argv._[0], { astOnly: argv.a, frontEndOnly: argv.i, shouldOptimize: argv.o });
}
|
STACK_EDU
|
rewrite works for = but not for <-> (iff) in Coq
I have the following during a proof, in which I need to replace normal_form step t with value t as there is a proven theorem that there are equivalent.
H1 : t1 ==>* t1' /\ normal_form step t1'
t2' : tm
H2 : t2 ==>* t2' /\ normal_form step t2'
______________________________________(1/1)
exists t' : tm, P t1 t2 ==>* t' /\ normal_form step t'
The equivalence theorem is:
Theorem nf_same_as_value
: forall t : tm, normal_form step t <-> value t
Now, I can use this theorem to rewrite normal_form occurrences in the hypotheses, but not in the goal. That is
rewrite nf_same_as_value in H1; rewrite nf_same_as_value in H2.
works on the hypothesis, but rewrite nf_same_as_value. on the goal gives:
Error:
Found no subterm matching "normal_form step ?4345" in the current goal.
Is the rewrite on the goal here impossible theoretically, or is it an implementation issue?
-- Edit --
My confusion here is that if we define normal_form step = value, the rewrite would have worked. If we define forall t, normal_form step t <-> value t, then the rewrite works if normal_form step is not quoted in an existential, but does not work if it is in an existential.
Adapting @Matt 's example,
Require Export Coq.Setoids.Setoid.
Inductive R1 : Prop -> Prop -> Prop :=
|T1_refl : forall P, R1 P P.
Inductive R2 : Prop -> Prop -> Prop :=
|T2_refl : forall P, R2 P P.
Theorem Requal : R1 = R2.
Admitted.
Theorem Requiv : forall x y, R1 x y <-> R2 x y.
Admitted.
Theorem test0 : forall y, R2 y y -> exists x, R1 x x.
Proof.
intros. rewrite <- Requal in H. (*works*) rewrite Requal. (*works as well*)
Theorem test2 : forall y, R2 y y -> exists x, R1 x x.
Proof.
intros. rewrite <- Requiv in H. (*works*) rewrite Requiv. (*fails*)
What confuses me is why the last step has to fail.
1 subgoal
y : Prop
H : R1 y y
______________________________________(1/1)
exists x : Prop, R1 x x
Is this failure related to functional extensionality?
The error message is particularly confusing:
Error:
Found no subterm matching "R1 ?P ?P0" in the current goal.
There is exactly one subterm matching R1 _ _, namely R1 x x.
Also, per @larsr, the rewrite works if eexists is used
Theorem test1 : forall y, R2 y y -> exists x, R1 x x.
Proof.
intros. eexists. rewrite Requiv. (*works as well*) apply H. Qed.
What did eexists add here?
The rewrite cannot go under the existential quantifier. You'll need to instantiate t' first before you can do the rewrite. Note that econstructor may be a useful tactic in this case, which can replace the existential quantifier with a unification variable.
EDIT in response to OP's comment
This will still not work for equality. As an example, try:
Inductive R1 : Prop -> Prop -> Prop :=
|T1_refl : forall P, R1 P P.
Inductive R2 : Prop -> Prop -> Prop :=
|T2_refl : forall P, R2 P P.
Theorem Req : forall x y, R1 x y = R2 x y.
Admitted.
Theorem test : forall y, R2 y y -> exists x, R1 x x.
Proof.
intros. rewrite Req. (*rewrite still fails*)
The issue is not actually about equality vs. iff, the issue relates to rewriting under a binding (in this case a lambda). The implementation of exists x : A, P is really just syntax for ex A (fun x => P x), so the rewrite is failing not because of the iff, but because the rewrite tactic does not want to go under the binding for x in (fun x => P x). It seems as though there might be a way to do this with setoid_rewrite, however, I don't have any experience with this.
Thanks. But part of my question is that I don't see why rewrite shouldn't go into existential quantifiers. I imagine if it's an equality instead of iff, rewrite would have worked here, wouldn't it?
@tinlyx I've added an edit that might clear things up.
@Matt, thanks, I added an update using your example.
Require Import Coq.Setoids.Setoid. and then setoid_rewrite Requiv. should work under the existential quantifier.
|
STACK_EXCHANGE
|
All accepted and revision papers are encouraged to participate in artifact evaluation.
Each paper sets up certain expectations and claims of its artifacts based on its content.
This year the Technical Program Committee (TPC) will provide a list of major claims for each accepted paper to the Artifact Evaluation Committee (AEC). The AEC will use this list during the artifact evaluation to ensure that the major claims the TPC deemed important correspond to what the authors indicate and can be reproduced.
Artifacts must be consistent with the paper, as complete as possible, documented well, and easy to reuse. These ideals are implemented through three badges that can be awarded to each paper: available, functional, and reproduced. The goal of the AEC is to help authors achieve these goals, and to award badges to the artifacts that meet the criteria.
Questions about artifact evaluation can be directed to firstname.lastname@example.org.
All AEC deadlines are Anywhere on Earth (AoE)
- Acceptance notification to paper authors: January 20, 2022
- Artifact intent registration deadline: January 26, 2022
- Artifact submission deadline: February 1st, 2022
- Kick-the-tires response period: February 2nd - 11th, 2022
- Artifact decisions announced: March 19, 2022
- EuroSys final papers deadline: March 19, 2022
Note: For an artifact to be considered, at least one contact author for the submission must be reachable and respond to questions in a timely manner during the evaluation period.
Registration and Submission
Please submit your artifacts to HotCRP and follow the two-step process:
Registration: Register your accepted paper for artifact evaluation by providing the paper’s abstract and PDF.
Submission: Submit your artifact for evaluation by providing an URL or a packaged artifact, selecting which artifact badges you apply for, and, new this year, provide an artifact appendix that describes the artifact and links it to the paper’s claims; see the instructions page for details.
The effort that you put into packaging your artifacts has a direct impact on the committee’s ability to make well-informed decisions. Please package your artifacts with care to make it as straightforward and easy as possible for the AEC to understand and evaluate their quality.
Note: If you need permission from your org’s legal or IT department to publish your artifact or give evaluators access to custom hardware, submit that request as soon as possible, otherwise evaluators may not have enough time to audit your artifact.
Authors are invited to submit artifacts after their papers have been accepted for publication or revision. Because the time between paper acceptance and artifact submission is short, the AEC chairs encourage authors to start preparing artifacts while their papers are still under review.
At artifact submission time, authors choose which badges they want to obtain: available, functional, and reproduced. Artifacts can meet the criteria of one, two, or all three of the badges.
After the artifact submission deadline, artifact evaluators will evaluate each artifact, using the corresponding paper and artifact appendix as guides. Evaluators may communicate with authors (through HotCRP to maintain anonymity) to resolve minor issues and ask clarifying questions. Evaluation starts with a “kick-the-tires” period during which evaluators ensure they can access their assigned artifacts and perform basic operations such as compiling and running a minimal working example. Artifact evaluations include feedback about the artifact, giving authors the option to improve their artifact and paper using this feedback.
Artifacts can be software, data sets, survey results, test suites, mechanized proofs, and so on. Paper proofs are not accepted, as evaluators lack the time and often the expertise to carefully review them. Physical objects, such as computer hardware, cannot be accepted due to the difficulty of making them available to evaluators. To the extent possible, artifacts should be able to run on commonly-available hardware, or hardware in community research testbeds such as Emulab, CloudLab, and Chameleon Cloud.
Submitting an artifact for evaluation does not give the AEC permission to make its contents public or to retain any part of it after evaluation. Thus, authors are free to include proprietary models, data files, or code in artifacts. Participating in artifact evaluation does not require the public release of artifacts, though it is highly encouraged.
Please see the submission instructions page for further details.
Review and Anonymity
Artifact evaluation is “single blind”: the identities of artifact authors will be known to evaluators, but authors will not know which evaluators have reviewed their artifacts.
To maintain the anonymity of evaluators, artifact authors should not embed analytics or other tracking tools in the websites for their artifacts for the duration of the artifact evaluation period. If you cannot control this, please do not access this data. This is important to maintain the confidentiality of the evaluators. In cases where tracing is unavoidable, authors should notify the AEC chairs in advance so that AEC members can take adequate safeguards.
|
OPCFW_CODE
|
Why did Apple drop Python?
Apple Drops Support for Python: What Does This Mean for Developers? The news that Apple has dropped support for Python has been a major shock to the software development world. Python has long been a popular language among Apple developers and its removal from the platform has left many scratching their heads. In this article, […]
Can I learn iOS on Windows?
Can I Learn iOS on Windows? It is a common question for most aspiring iOS developers: Can I learn iOS on Windows? The answer is yes, but the process may be a bit more complicated than if you were using an Apple device. In this article, we will discuss the steps you need to take […]
How much does iOS Dev cost per hour?
How Much Does iOS App Development Cost Per Hour? The cost of iOS app development per hour can vary greatly, depending on a variety of factors such as the complexity of the app, the experience of the development team, and the number of hours required. To get a better understanding of the cost of iOS […]
How to create iOS apps?
How to Create iOS Apps Step-by-Step Guide to Building iOS Apps Are you interested in developing iOS apps but don’t know where to start? Creating an app is not as hard as it may seem. Follow this step-by-step guide to get you started in developing your own iOS apps. Step 1: Learn Swift Swift is […]
Does iOS use C language?
Does iOS Use C Language? iOS is the world’s leading mobile operating system, powering millions of devices all over the world. But what language does it use? Does iOS use C language? iOS: An Overview iOS is Apple’s mobile operating system, first released in 2007. It is used on the iPhone, iPad, and iPod Touch. […]
Is iOS a programming language?
Headline 1: Is iOS a Programming Language? Headline 2: What Are the Different iOS Programming Languages? Headline 3: Exploring the Benefits of Programming for iOS Introduction iOS is one of the most popular mobile operating systems in the world, powering the iPhone, iPad, and iPod touch. But is iOS a programming language? The short answer […]
What language is iOS written in?
Headline 1: What Language Does iOS Use? Headline 2: Understanding the iOS Programming Language Headline 3: Exploring the Benefits of iOS Development Introduction Are you curious to learn what language iOS is written in? There are many different programming languages that can be used to create apps for the iOS platform. Knowing what language is […]
Which language is used in iOS development?
Understanding the Language Used for iOS Development What Programming Languages Are Used for iOS Development? When developing for Apple’s mobile operating system, iOS, developers are typically using a combination of Objective-C, Swift, and C++. Objective-C and Swift are the two primary programming languages used for the development of native iOS apps, while C++ may be […]
|
OPCFW_CODE
|
from bokeh.plotting import figure
from bokeh.io import push_notebook, show, output_notebook
from bokeh.models import ColumnDataSource
BACKGROUND_COLOR = "#062539"
LEGEND_TEXT_COLOR = "#DFEDF2"
ROBOT_LINE_COLOR = "#00B0F0"
ROBOT_FILL_COLOR = "#85daf7"
LANDMARK_COLOR = "#f76464"
SENSED_COLOR = "#ffff5b"
def stylize_plot(plot):
#plot.legend.background_fill_color = "navy"
plot.axis.major_tick_line_color = None
plot.axis.major_label_standoff = 0
plot.grid.grid_line_color = None
plot.background_fill_color = BACKGROUND_COLOR
plot.outline_line_color = BACKGROUND_COLOR
plot.border_fill_color = BACKGROUND_COLOR
plot.legend.label_text_color = LEGEND_TEXT_COLOR
plot.legend.label_text_font_size = '8pt'
plot.legend.background_fill_alpha = 0.0
plot.legend.label_text_alpha = 1.0
plot.legend.label_text_font = "courier"
plot.legend.orientation = "vertical"
plot.legend.location = "bottom_right"
def map_figure(x1=-60, x2=310, y1=-110, y2=60, w=900, h=600):
return figure(x_range=[x1, x2], y_range=[y1, y2],
plot_height=h, plot_width=w,
x_axis_location=None, y_axis_location=None, tools="")
def extract_landmarks(landmarks):
x = [landmark.x for landmark in landmarks]
y = [landmark.y for landmark in landmarks]
return x, y
def extract_observations(observations):
x = [observation.x for observation in observations]
y = [observation.y for observation in observations]
return x, y
def plot_landmarks(plot, landmarks):
landmark_x, landmark_y = extract_landmarks(landmarks)
landmark_source = ColumnDataSource({'x': landmark_x, 'y': landmark_y} )
plot.circle('x', 'y', legend="LANDMARKS", source=landmark_source,
size=9, line_color=LANDMARK_COLOR, fill_alpha=0.0, line_width=1)
def plot_initial_robot(plot, robot_source):
robot_triangle = plot.triangle('x', 'y', legend = "GROUND TRUTH LOCATION", source = robot_source,
size=30, fill_color=ROBOT_FILL_COLOR, line_color=ROBOT_LINE_COLOR, fill_alpha=1.0, line_width=1,
angle='theta')
robot_cross = plot.cross('x', 'y', legend="GROUND TRUTH LOCATION", source=robot_source,
size=12, line_color=ROBOT_LINE_COLOR, angle='theta', fill_alpha=0.0, line_width=1)
robot_range = plot.circle('x', 'y', legend="OBSERVATION RANGE", source=robot_source,
radius=50, line_color=ROBOT_LINE_COLOR, line_dash="10 5", fill_alpha=0.0, line_width=1)
return (robot_range, robot_triangle, robot_cross)
def plot_robot(source, x, y, theta):
source.data['x'] = [x]
source.data['y'] = [y]
source.data['theta'] = [theta]
def plot_fixed_vehicle(plot):
robot_source = ColumnDataSource(data = { 'x' : [0], 'y' : [0], 'theta' : [-3.14 / 2] })
plot.triangle('x', 'y', size=30, fill_color=ROBOT_FILL_COLOR, legend="vehicle's perspective",
line_color=ROBOT_LINE_COLOR, fill_alpha =1.0, line_width=1,
angle = 'theta', source=robot_source)
plot.cross('x', 'y', source=robot_source,
size=12, line_color=ROBOT_LINE_COLOR, angle = 'theta', fill_alpha=0.0, line_width=1)
plot.circle('x', 'y', source=robot_source,
radius=52, line_color=ROBOT_LINE_COLOR, fill_alpha=0.0, line_width=1, line_dash="10 5")
class SimpleMapPlot:
def __init__(self, landmarks):
self.plot = map_figure(x1=-60, x2=300, y1=-110, y2=50, h=400, w=900)
plot_landmarks(self.plot, landmarks)
self.robot_source = ColumnDataSource(data = { 'x' : [0], 'y' : [0], 'theta' : [0] })
self.robot_range, self.robot_triangle, self.robot_cross = plot_initial_robot(self.plot, self.robot_source)
stylize_plot(self.plot)
def update(self, g):
theta = g.theta - 3.14 / 2 # -g.theta - 1.1 #FIX
plot_robot(self.robot_triangle.data_source, g.x, g.y, theta)
plot_robot(self.robot_cross.data_source, g.x, g.y, theta)
plot_robot(self.robot_range.data_source, g.x, g.y, theta)
def show(self):
show(self.plot, notebook_handle=True)
class VicinityPlot:
def __init__(self, observations, noisy_observations):
self.plot = map_figure(x1=-70, x2=70, y1=-70, y2=70, h=350, w=350)
plot_fixed_vehicle(self.plot)
observation_x, observation_y = extract_observations(noisy_observations)
observation_source = ColumnDataSource({'x': observation_x, 'y': observation_y} )
self.noisy_observation_circles = self.plot.circle('x', 'y', source=observation_source,
size=9, line_color=SENSED_COLOR, fill_color=SENSED_COLOR, fill_alpha=0.25, line_width=3, alpha=0.25,
legend="noisy observation")
observation_x, observation_y = extract_observations(observations)
observation_source = ColumnDataSource({'x': observation_x, 'y': observation_y} )
self.observation_circles = self.plot.circle('x', 'y', source=observation_source,
size=9, line_color=LANDMARK_COLOR, fill_alpha=0.0, line_width=1, alpha=1.0, legend="ground truth")
stylize_plot(self.plot)
def update(self, observations, noisy_observations):
observation_x, observation_y = extract_observations(noisy_observations)
self.noisy_observation_circles.data_source.data['x'] = observation_x
self.noisy_observation_circles.data_source.data['y'] = observation_y
observation_x, observation_y = extract_observations(observations)
self.observation_circles.data_source.data['x'] = observation_x
self.observation_circles.data_source.data['y'] = observation_y
def show(self):
show(self.plot, notebook_handle=True)
class SimpleVicinityPlot:
def __init__(self, observations):
self.plot = map_figure(x1=-70, x2=70, y1=-70, y2=70, h=350, w=350)
plot_fixed_vehicle(self.plot)
observation_x, observation_y = extract_observations(observations)
observation_source = ColumnDataSource({'x': observation_x, 'y': observation_y} )
self.observation_circles = self.plot.circle('x', 'y', source=observation_source,
size=9, line_color=LANDMARK_COLOR, fill_alpha=0.0, line_width=1, alpha=1.0, legend="ground truth")
stylize_plot(self.plot)
def update(self, observations):
observation_x, observation_y = extract_observations(observations)
self.observation_circles.data_source.data['x'] = observation_x
self.observation_circles.data_source.data['y'] = observation_y
def show(self):
show(self.plot, notebook_handle=True)
|
STACK_EDU
|
Please make edits inline
To capture implementation planning questions, specific to capital planning, in a wiki-style living document where updates from any and all contributors are encouraged. These implementation planning questions, along with other insights may help reduce implementation project risk by ensuring a thorough review of how the business process will be automated is done at the planning stage. We are not trying to explain capital planning. We are trying to reduce risk on the implementation of capital planning tools.
Let's assume we are starting discussions around putting together a tool to help us plan for capital projects or assets in an enterprise. Let's also assume we need the system to allow one group of users (Managers) to make requests for capital projects or assets, and another group of users (Finance) approves or rejects these requests. Each approved asset needs to have key accounts calculated
- Depreciation Expense - The expense account(s) holding the current period depreciation expense attributable to a particular asset.
- Work in Progress - The asset balance sheet account(s) holding the full purchase price of an asset from the time of purchase until the asset goes into service.
- Asset - The asset balance sheet account(s) for each asset class that hold the book value, or purchase price, of an asset until the asset is disposed.
- Accumulated Depreciation - The liability balance sheet account(s) for each asset class that hold the life-to-date sum of all depreciation expense.
- Unit Price
- Purchase Price
- Purchase Date
- In-Service Date
- Disposal Date (Optional)
Each asset class may have a different set of key accounts attributed to it. It may also have a specific life expectancy for all assets in that class, or a formula to be used to derive the life expectancy.
Implementation Planning Questions
- At what grain are assets tracked? Do business units own assets, or are all assets owned by the enterprise?
- What asset classes are relevant, and which accounts are appropriate for each class?
- What methods of depreciation are relevant?
- What is the work for capital asset request approvals? Who submits them? Who approves them? Are there multiple layers of approval?
- Does depreciation start in the same period as the asset purchase?
- Is depreciation in the first period of asset purchase prorated in the period, or assumed to be a full period?
- In addition to asset requests, will we be loading a listing of existing assets on top? In other words, for assets already purchased, do we have a source for the future depreciation expense attributable to those existing assets? If we do, then we can improve our accuracy by including those figures.
- What criteria are used to reject or approve assets? Do we want to automate the approval process for requests that meet predefined criteria?
- What metrics or milestones can we use to mark progress from the start of the capital planning process to the final approvals?
- Do we need to be able to compare asset registers between revisions of our plan?
- Do we have a set of sample assets we can use to test the system calculations to be confident in their accuracy?
- For each of the user groups, what information should they be able to see? ..edit? What information should they be restricted from seeing?
- What visualization is relevant to present key performance indicators? Is it just depreciation expense over time, from purchase to disposal?
|
OPCFW_CODE
|
Software development is often plagued by gaps in documentation and process. Without proper documentation, it is difficult to spot critical paths, capture best practices, and even onboard new talents. As teams grow larger and the design of the system gets more complex, the impact of poor documentation becomes greater. In addition, communication gaps often arise, resulting in disjointed outputs.
Lack of senior leadership
Investing in the leadership of software development teams can save you time and money. A lack of senior leadership in the software development process can increase your turnover rates and cost your company money. A recent study by the Society for Human Resource Management outlines the costs involved with losing an employee and finding a replacement. This doesn’t take into account lost productivity or the costs of institutional knowledge and client relationships.
Lack of management skills. Often, the people who lead software development teams don’t have the managerial skills to effectively lead their teams. As a result, their teams lack direction and can become unmotivated. The lack of motivation and accountability in a software development team can lead to poor quality software.
Lack of proper documentation
Lack of proper documentation in the software development process can be problematic for the software development process. It makes it more difficult to share the details of a project and to communicate it with other people, including end users and companies. Developers are often so close to the code that they fail to see the need for documentation or take it for granted.
Documentation of software is vital to the success of any project. It is an essential element of the development process and should be a priority throughout the software life cycle (SDLC). A developer should never ship a feature without appropriate documentation. It is essential to have a skilled documentation team. There are many ways to promote documentation. Using the same tools as the development team can make the documentation more visible.
Lack of proper documentation in the software development process makes the software more complex and difficult to maintain. Developers may not have the time to create comprehensive documentation. However, documentation allows them to retain information about the system, help them remember how the system works, and educate new users on its limitations.
During the software development process, there are many steps to follow, including code review, unit tests, and automation tests. But documentation receives the least attention. Proper documentation allows for feature changes and saves time and money down the road.
Lack of integration
In the past, organizations have tended to develop applications from a local perspective, but this has begun to change with the advent of global software development. As a result, there are a number of challenges for vendors during the integration of global components into a software product. This study examines these challenges and their impact on software development. This research uses data extracted from six digital libraries to identify the most significant barriers to integration. The data revealed 16 barriers, of which 10 were deemed critical.
Lack of integration is often caused by a lack of proper data exchange between systems. This causes problems in the communication of data between different business units. This lack of integration makes it more challenging to integrate new systems and processes with existing systems. Many of these systems have multiple functions, and are produced by different manufacturers. They are also outdated and do not offer a coherent data structure.
The lack of proper integration in the software development process can also cause problems with the project’s schedule, budget, and resources. Software development teams are often left with too many manual processes and steps. For example, they must prepare weekly status reports, which require numerous emails, check-in calls, and spreadsheets. This causes performance and resource drain.
Lack of testing
Lack of testing is one of the most common problems in the software development process. Despite its importance, many organizations don’t prioritize testing. For instance, in the United States, 61% of companies are resisting a culture of testing, according to a survey by the Institute of Software Engineering. In addition, only 41% of companies practice test-driven development, and only 8% write tests before coding. Further, only 30% of organizations claim to be leaders in the testing process.
In addition, inadequate testing methods and tools are expensive. According to the National Institute of Standards and Technology, inadequate testing costs companies anywhere from $22.2 to $59.5 billion each year. This cost falls on both software developers and users. This is a serious problem. Research into the cost of testing has shown that companies often spend only 25 to 90 percent of their software development budgets on testing.
Often, insufficient testing can lead to strange application behavior. The best way to minimize the risk of inadequate testing is to use continuous testing. This way, you can catch bugs as they occur, even if you’re working with a large software application that is divided into microservices.
Testing is important to the software development process. It leads to a higher quality product and lower support costs. It is important to define the benefits of automated testing and explain them to decision makers. You can do this by preparing a one-page project proposal that describes your objectives and how automated testing will benefit your organization. You can also illustrate the potential cost savings and benefits to your organization.
Software development is a complex mix of details, and developers are often pushed to meet deadlines. Lack of clarity about the team’s role may also increase work stress. Having clear expectations for each team member is essential to creating a productive software development environment. If everyone on the team understands their roles and responsibilities, reducing work stress is possible. Otherwise, team members may bottle up personal issues and become less productive.
A recent study evaluated the factors that impact software development stress in Indian software professionals. They found that the presence of a Self-Organizing Team, a low defect rate, and the use of user stories were associated with less stress. Poor software architecture and lack of on-site customers are also associated with higher stress.
A software developer’s experience level also contributes to their level of stress. While an experienced professional athlete will have a different feeling than a team veteran playing their 800th game, a new developer will experience increased stress because of the pressure. The pressure, though, is usually not enough to make a developer quit his or her current job. Additional factors at work, such as project deadlines, can also increase stress.
When a team member joins a project, they are assigned to the first task. This task is usually a nontrivial part of the system being implemented. This task is an opportunity to prove their worth as a member of the team.
Overtime is a problem that often affects the software development process. Technical people who work on a project are often driven by perfectionism and passion, which means they put in a lot of overtime to meet their deadlines. Unfortunately, this can lead to mistakes and delays, which can negatively impact the quality of the finished product.
The best way to combat overtime is to be flexible and compromise. If you have to cut features, negotiate. If your project is extremely complex, it can be very difficult to meet your deadlines. For example, if you are working on an application that has to launch in a certain time frame, you may want to rewrite the code to make it more efficient.
Overtime is often caused by bugs. Developers may not realize that they are working overtime until they have checked in their verified fixes. This is not only inefficient, but also a major source of stress. But in the case of a project like Louisa’s, overtime often happens in the middle of the night.
Software engineers often work overtime because they are passionate about what they do. Sometimes they are working on side projects or coding in their free time. However, this does not necessarily mean that overtime is a bad thing. Overtime is also often a result of a company’s size.
|
OPCFW_CODE
|
Any recommended hardware (i.e., RAM, # processors, CPU speed, etc.) configurations based on the number of users (simultaneous or other)? The target is language leaarning? (The IT group is fairly UNIX oriented in case it matters.)
How much time would a qualified person need to set it up, upgrade, troubleshoot (beyond the normal OS patches, OS/hardware troubleshooting, backup)?
Hi Bob, it would depend on how many students you expect (server hardware).
With a dual 2.8 ghz 1.5 gb ram, redhat enterprise box, we haven't breached 5% system resources with 350 active users, 20 courses, 1-2k page views (as log entries) per day.
That server is also running a couple postnuke sites that feed to our home page, and our campus student newspaper, figure they get another 2-3k page views (but not logins) per day.
Server admin time is pretty minimal with RedhatE, Moodle pretty much runs on the default settings, and after a few configuration changes, there isn't much to do.
Figure 10-20 hours for start up on a standard linux/unix server and maybe 5 hours per week (mainly just checking on things), again depending on how many courses/users, but that should be pretty safe.
Of course that depends on how MySQL/PHP savvy the admins are, they may have some learning time to get up to speed.
Before that ran Moodle 1.1-1.2 on an old g3 plugged into the 10mb wall socket for a year of testing, with 2-3 classes and 50 or so users, and never seemed to slow down or peak over 25% system resources there.
We are currently running moodle on a server that the computer club built from parts.
It is on an AMD 1100 with 768 megs of ram and and 2 IDE hand-me-down hard drives. One is for backup, and I copy the backup to another linux machine to be paranoid.
The OS is Whitebox Enterpise Linux (RHEL 3 rebuilt from source). The machine houses survey tools, gradebooks, various projects, some web pages from classes, etc. I do not use a gui and administer via WinSCP (or sometimes webmin when I am on campus). I use Zend Optimizer to keep things snappy. I am at about 60 days of uptime.
I figure it was 5 hours to install and initially set up this machine, but I am pretty compulsive. I spend probably 2 hours a week with various moodle-related administrative issues that are not a function of my own classes. I consider myself reasonably adept at basic server administration, but there is no way I am as efficient as a UNIX-oriented IT group.
According to webalizer, I have averaged over 18,000 hits a day on the server for this month. There are about 40 courses in various states of activity. There are about 400 users. Many of the students are enrolled in more than one course. We have multiple moodle sites on the server, but there is one main academic site.
Budget on this is minimal as you can see. Performance has been great.
I am about to make a case for a hand-me-down server that is being upgraded. It is a dual P3 with scsi and a gig of ram.
|
OPCFW_CODE
|