Whether you’re looking to place a bet on the latest football match or the next basketball game, you’ll need to know which sportsbook to choose. Before you sign up, make sure the site is licensed and regulated in your jurisdiction. You also want to consider its reputation. It’s not uncommon for a sportsbook to operate illegally, so check to see if the sportsbook in question is legitimate.
The best sportsbook will have a variety of betting options. This includes live betting, as well as the usual bets on both televised and in-person events. This is one of the key features of SBOBet, which has become the top sportsbook in Asia. It also has a gamification platform, where players can compete to win money as well as play games. The sportsbook even has a blog with sports picks.
You should also look into a sportsbook’s customer service. A good sportsbook should be able to answer questions quickly. A good sportsbook will also have a mobile website, so you can access your account and wager on the go from anywhere.
A sportsbook should also offer a risk-free bet. These promotions let new customers try the site without risking their own money: the refund is usually equal to the value of the first bet, so even a losing wager can return a small payout. The site will ask you to provide personal information such as your name and gender, as well as a security answer. You’ll then be directed to your account’s sign-in page, where you’ll need to enter a password and confirm that you are of legal gambling age.
The sportsbook should also be able to provide you with an interesting promotional offer. You might be able to find a promo code that gives you a free bet, or a bonus when you place your first bet. Depending on the sportsbook you choose, you might be able to get a special bonus for making a deposit. Some of the best bonuses are ones that are only available to residents of specific countries.
The best sportsbook is likely to have a range of interesting features, from a good mobile site to live streaming. For example, you can see scores from over 20 live feeds on the site, allowing you to watch and place bets from anywhere in the world. You’ll also be able to make a variety of other bets, including parlays and teasers, around the clock.
You might be surprised to learn that some sportsbooks have been operated by organized crime groups. While it’s possible to find plenty of legitimate family-owned sportsbooks, you’ll need to be careful. Likewise, many sportsbooks have a bad reputation, so make sure to read up on the best betting sites before deciding to gamble.
The most important feature of a sportsbook is its reputation. Check to see if the sportsbook is licensed and regulated in your jurisdiction, and make sure it has a good customer service department.
|
OPCFW_CODE
|
Enforce style
This is a proposal for style enforcement. As the module grows, more contributors will come and some will like tabs, while others like spaces. This makes the style uniform.
I chose OTBS only because that's what we use for dbatools. This is entirely your decision, of course, and I'm happy to follow the decided style.
ChatGPT said you like Allman style: https://en.wikipedia.org/wiki/Indentation_style#Allman_style
I also love editor.wordSeparators. That one ensures that when we click a variable, it includes the $, making copy/paste way easier for PowerShell files.
Ensure you poke around your project in VS Code to see the results. Claude said that the code was a mix of Allman and OTBS, so if there are any inconsistencies that aren't Allman, it'll be like "This bracket needs to go one line up". It will also complain about plurals in command names as it suggests best practices.
It can be a bit of a pain at first, but if a standard is not decided on early, you may end up getting frustrated with contributors' styles. Like me, I like OTBS. haha. I'd try my best to adhere to your style, but it'd be easier if VS Code could just tell me and format my code appropriately.
I fully agree with you on your proposal to clarify the coding rules.
On the other hand, I like the default formatter of VSCode. Let's make rules that don't significantly change existing code.
I don't know what ChatGPT said 🤣, but I will not adopt Allman. By convention, Allman is the format of choice for C#, but OTBS or Stroustrup is often used for PowerShell.
Stroustrup is better because OTBS does not have good if-else readability. It also matches well with existing code styles.
You also added the PSScriptAnalyzer rules. Thank you. I would like to make some adjustments and will add comments to the review later.
Let's discuss the plural and singular noun of function names.
Currently, there are two public functions that use the plural noun.
Get-OpenAIModels
Request-Embeddings
When I first started writing PSOpenAI, I used the OpenAI API document as the basis for my decisions, and the API document says "Create embeddings", not "Create embedding".
https://platform.openai.com/docs/api-reference/embeddings/create
Nevertheless, it appears that even within the API document, the rules of notation are not sufficiently uniform. Furthermore, they change frequently.
Now it would be better to change these two functions to singular as well. However, changing function names is a breaking change, so it cannot be done immediately. For the time being, we will keep both the singular and plural function names working, and remove the plural ones at the next major version bump.
If you have any suggestions, please let me know.
Happy to chat about it! When I first started dbatools, I was trying to reach a non-PowerShell audience so I made things plural so that people knew they could pass more than one database. And SMO uses tableS, databaseS. As the project grew, we agreed to discuss best practices and it was hard for me to come around to singular. Eventually, I was convinced by the team saying we can use documentation and examples to convey plurals.
There are exceptions to the singular rule! And we can put an exclusion at the top of the commands that should be plural.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSAvoidUsingPluralNouns', '')]
param()
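In context, the suppression goes at the top of the function, directly above its param() block. A sketch using one of the existing plural commands (rule name as given above):
function Get-OpenAIModels {
    [Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSAvoidUsingPluralNouns', '')]
    param()
    # ... existing function body ...
}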
You can pluralize a command when it's always plural. We could do Get-OpenAIModel -Model gpt4-turbo-whatever, which makes it singular.
But Embeddings? That's always plural, right? So it's always plural unless you call it an embedding vector or a text embedding 🤣 I had to talk myself into it, but I did end up going with `Get-Embedding` in finetuna. I think both plural and singular are justifiable.
But Embeddings? That's always plural, right?
But they call it "embedding object" or "embedding vector". Hmmm... 🤔
https://platform.openai.com/docs/api-reference/embeddings/object
Seems we've got our answer 🤔 What about a standard prefix (like Dba, Sql, etc)? Will you go with any to avoid potential name collisions? OAI is taken by the PowerShellAI* modules.
I am not much interested in common prefixes. If users want it, they can set it themselves.
Import-Module .\PsopenAI.psd1 -Force -Prefix "MyPrefix"
Get-MyPrefixOpenAIModels -Model "gpt-4"
|
GITHUB_ARCHIVE
|
If you have a networked device or devices behind a SOHO or Building-level-type firewall on any of the HSEAS network segments, you are expected to know and abide by the following policy, and ensure that any users of those systems know and abide by this policy:
- Do not falsify MAC (hardware/Ethernet) addresses
- Do not provide gateway services to non-registered devices
- Do not provide services to other hosts behind the firewall and gateway those services outside of the firewall (e.g., do not run a sendmail server to exchange mail with external hosts)
- Do not attempt to provide services to hosts/devices outside your protected LAN (i.e., do not make it possible for external hosts to enter the protected LAN, e.g., via peer-to-peer services, such as, but not limited to, Direct Connect, LimeWire, Kazaa, GNUtella, etc). Note that some forms of video-conferencing (e.g., Skype) may be allowed after discussions with the CNG. Please be aware that peer-to-peer (p2p) file sharing is not necessary to conduct business within HSEAS. We have safer alternatives.
- Have good antivirus/anti-spyware software that is updated on a regular basis. Updating once a week is insufficient; daily updates are reasonable, and even better are those packages (newer versions of Norton, Trend, Sophos or AVG) that update on an almost real-time basis.
- Keep your systems up-to-date with patches.
- While you are behind a firewall, you are still required to observe privacy concerns. Specifically, you may not read anyone else's mail and may not read any user files that require administrator (root) privileges. To do so is a violation of Federal and State law, and UR policy.
- No pirated software or other copyright violations.
- Provide for backups (and make sure the backup media is secured).
- Good passwords (known only to the account holder), no shared accounts, no accounts without passwords.
- Accounts may only be assigned to members of HSEAS, and any names should not be in conflict with names in use as assigned by the CNG. That is, if 'joe' is the HSEAS login name for 'Joe Smith', then you may not use 'joe' for 'Joe Jones'. There should be no misrepresentation as to the identity of any user.
- We will normally attempt to contact you - but may have to act before we can do so.
- We will attempt to determine the source of the problem behind the firewall. If we are able to do so, we will then alter the firewall configuration so that traffic from the offending device is not passed outside the firewall (hosts behind the firewall may still be affected by the offending device). The firewall configuration will be restored once the problem is resolved by you or your agents.
- If we are unable to determine the source of the problem, and if we deem the problem sufficiently severe, we will disable the switch port providing network connectivity to the firewall. Connectivity will be restored as soon as the problem is resolved by you or your agents.
Last modified: Thursday, 07-Apr-2011 09:34:45 EDT
|
OPCFW_CODE
|
How do I decide what sort of spatial index to build for a SQL Server table with a Geometry column?
Let’s say an organization manages a lot of GIS data stored in SQL Server and leverages Esri software. To ensure that users can access the data with reasonable speed, I want to make sure the spatial indices are tuned for performance.
What criteria should I use to configure the parameters of a spatial index for a SQL Server table that has a GEOMETRY column?
I am looking for “rules of thumb” that are widely applicable to many data sets of varying specifications.
Please can you [edit] your question to give a bit more background detail about what you're trying to do, and why?
As requested, I added more details to the question, and I have also posted an answer that addresses them. I hope that someone looking for this information won’t have to work as hard as I did to find it. Better still, others might offer suggestions to further improve the recommendations.
The criteria might be a bit primitive in their assumptions, but they were refined based on results from an Esri ArcGIS Desktop testing tool, PerfQA. There are other approaches to measuring the performance of spatial indices.
As the data in question is all used in GIS processes, two key considerations are rendering time and selection time.
The data sets used followed the Esri rules for geometry data:
Only one geometry type per table (feature class): either point, line or polygon
All rows have the same coordinate system (SRID)
Only three categories of spatial data are considered, which should capture any data set that follows the stipulated rules:
Point data - treat all point data sets the same, regardless of row count
Line data of any row count; or, polygon data with fewer than 50,000 rows
Polygon data with 50k or more rows
For each category above, the spatial index parameters applied are:
GRIDS = (HIGH,HIGH,HIGH,HIGH)
GRIDS = (LOW,MEDIUM,HIGH,HIGH) and CELLS_PER_OBJECT = 8192
GEOMETRY_AUTO_GRID and CELLS_PER_OBJECT = 20
So, for example, the T-SQL code to build a spatial index "SIdx_Points_shape" on a point table "Points" with the GEOMETRY column "Shape" would look like:
CREATE SPATIAL INDEX SIdx_Points_shape
ON Points (Shape)
WITH (
    BOUNDING_BOX = (xmin = 2406292.490931, ymin = 6884084.490682,
                    xmax = 2598087.473509, ymax = 7096654.967680),
    GRIDS = (HIGH, HIGH, HIGH, HIGH),
    PAD_INDEX = OFF
);
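For a category (3) polygon table, the statement would instead use the auto-grid tessellation. A sketch, with the table name assumed and the bounding box mirroring the example above:
CREATE SPATIAL INDEX SIdx_Polygons_shape
ON Polygons (Shape)
USING GEOMETRY_AUTO_GRID
WITH (
    BOUNDING_BOX = (xmin = 2406292.490931, ymin = 6884084.490682,
                    xmax = 2598087.473509, ymax = 7096654.967680),
    CELLS_PER_OBJECT = 20,
    PAD_INDEX = OFF
);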
Some settings applied to all spatial indices include:
PAD_INDEX = OFF
Define a BOUNDING_BOX
I am sure that these categories and settings could be refined, but after using them for the past three years, performance has been decent.
In PowerShell or Python (using arcpy and pyodbc), one could build a generic create-index function to drop and rebuild spatial indices, incorporating these criteria.
For example, in python, the arcpy.Describe() shapeType option can give you the geometry type. Or, you can get it from STGeometryType(), via pyodbc. An example of the arcpy approach is:
shpTyp = (arcpy.Describe(fc).shapeType)
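The test below assumes a pyodbc connection and prepared queries; a minimal setup sketch (the connection string, table name, and query variables are illustrative):
import arcpy
import pyodbc

# Connect to the database holding the feature class (illustrative DSN)
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=myserver;DATABASE=gis;Trusted_Connection=yes')
cursor = cnxn.cursor()

# Row count for the category test
ct = cursor.execute('SELECT COUNT(*) FROM Points').fetchval()

# qryPt, qryLP and qryP would hold the CREATE SPATIAL INDEX
# statements for categories (1)-(3) described above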
This information, plus the row count (again, acquired using T-SQL or arcpy methods), can be used to construct a straightforward test for a data set in Python code:
if shpTyp == 'Point':
    cursor.execute(qryPt)   # category (1): GRIDS = (HIGH,HIGH,HIGH,HIGH)
    cnxn.commit()
elif shpTyp == 'Polyline' or (shpTyp == 'Polygon' and ct < 50000):
    cursor.execute(qryLP)   # category (2): LOW,MEDIUM,HIGH,HIGH + CELLS_PER_OBJECT
    cnxn.commit()
elif shpTyp == 'Polygon' and ct >= 50000:
    cursor.execute(qryP)    # category (3): GEOMETRY_AUTO_GRID
    cnxn.commit()
else:
    raise ValueError('Unexpected geometry type: {0}'.format(shpTyp))
Each cursor.execute() statement runs a particular set of T-SQL code, depending on the geometry type and the row count. Each query implements the spatial index criteria described in (1)-(3), above.
Thanks to boomphisto for a deep dive into testing spatial index performance.
|
STACK_EXCHANGE
|
This is part 3 in a series on How Americans Live.
The US Labor Department released the 2011 Time Use Survey on June 22.
A few facts should raise questions:
In 2011, each day, at the highest aggregated level, on average, an American spends:
- 2.75 hours watching TV
- 43 minutes buying goods and services
- 18 minutes exercising, playing sports, and recreating
- 10 minutes on telephone calls, mail, and email
- 7 minutes on leisure computer use (excluding games)
2.75 hours watching TV, 7 minutes computer use
That 2.75-hour TV figure ought to stick in the craw of many analysts because, by some estimates, 19.25 hours a week is a really low figure. The same goes for 7 minutes of leisure computer use (excluding games). Indeed, Forrester publicly stated in 2010 that TV watching and Internet use are roughly equal, at 13 hours a week each.
The devil is in the details
The ATUS definition and coding guide are very specific.
If the respondent was doing many things at the same time, they’re asked what they were primarily doing (with the exception of simultaneously taking care of a child). Simultaneous activities that are secondary are not systematically recorded.
“Using the computer” is coded according to the respondent’s primary activity. If they were using the computer to look for a job, it would be coded as looking for a job. If they were using the computer to pay bills, it would be coded as household activity/financial management. If they were shopping online, it would be coded as buying goods and services. Playing games on the computer (including Internet games) is coded into the generic ‘playing games’ code, which includes card and board games. One notable exception is if the respondent said they were ‘chatting on the Internet’, which would be coded as ‘computer use for leisure’. If the respondent wouldn’t state what they were doing on the computer, then it would also be coded as ‘computer use for leisure’. (Editorial deleted).
The goal is to record the intent of the activity; the medium is purely secondary.
This effectively hides computer and Internet use in the aggregate results of the ATUS survey.
Forrester’s definition of Internet use includes using it at work, as well as social and video gaming. As a result, and Forrester is careful to say this repeatedly, the Internet isn’t just for leisure use.
Nielsen states that TV usage includes leaving the TV on. In other words, the definition of watching TV extends to secondary activities.
Their understanding of how Americans live is measured differently, generating alternative figures.
TV and Internet
The ATUS survey was launched in 2003, long after the Internet had become established. It wasn’t novel anymore. ATUS focused on the intent of what people were trying to do.
That said, the 2.75-hour figure for watching TV is a fair bit more stringent than the Nielsen definition of the TV merely being turned on, say, while people are paying bills, cleaning the house, or preparing and eating food. That 7-minute figure is extraordinarily rigid, with a code book designed to funnel as little time as possible into it.
As such, ATUS destroys a huge amount of information about digital exposure.
This is an important fact in understanding how Americans live.
I’m Christopher Berry.
Follow me @cjpberry
I blog at christopherberry.ca
|
OPCFW_CODE
|
"""Temporal boolean algebra."""
__version__ = "0.4.2"
from heapq import merge
from itertools import groupby
from collections import deque
import operator
from sortedcollections import SortedDict
class TimingDiagram:
"""Two-state (True/False or 1/0) timing diagram with boolean algebra operations."""
def __init__(self, time_state_pairs):
"""Creates a timing diagram out of a series of (time, state) pairs.
Notes
=====
The input states can be any truthy/falsey values.
The input times can be any type with a partial ordering.
The input sequence does not need to be sorted (input is sorted during initialization).
Compresses duplicate sequential states and stores them in the `timeline` attribute.
Example
=======
>>> diagram = TimingDiagram([(0, True), (1, False), (5, False), (10, True)])
>>> print(~diagram)
TimingDiagram([(0, False), (1, True), (10, False)])
"""
self.timeline = SortedDict(
_compress(time_state_pairs, key=operator.itemgetter(1))
)
def __getitem__(self, item):
return self.timeline[item]
def __matmul__(self, time):
"""Alias for at()"""
return self.at(time)
def __eq__(self, other):
"""Returns a new timing diagram, True where the two diagrams are equal."""
return self.compare(other, key=operator.eq)
def __ne__(self, other):
"""Returns a new timing diagram, True where the two diagrams are equal."""
return ~(self == other)
def __and__(self, other):
"""Returns a new timing diagram, True where the two diagrams are both True."""
return self.compare(other, key=operator.and_)
def __or__(self, other):
"""Returns a new timing diagram, True where either diagram is True."""
return self.compare(other, key=operator.or_)
def __xor__(self, other):
"""Returns a new timing diagram, True where the two diagrams are not equal."""
return self != other
def __invert__(self):
"""Returns a new timing diagram with states flipped."""
return TimingDiagram(((t, not s) for t, s in self.timeline.items()))
def at(self, time):
"""Returns the state at a particular time. Uses bisection for search (binary search)."""
idx = max(0, self.timeline.bisect(time) - 1)
return self.timeline.values()[idx]
def compare(self, other, key):
"""Constructs a new timing diagram based on comparisons between two diagrams,
with (time, key(self[time], other[time])) for each time in the timelines.
"""
# TODO: Implement linear algorithm instead of .at() for each time, which is O(n log n).
return TimingDiagram(
(
(k, key(self.at(k), other.at(k)))
for k in merge(self.timeline.keys(), other.timeline.keys())
)
)
def __repr__(self):
return f"{self.__class__.__qualname__}({list(self.timeline.items())})"
def _compress(sorted_iterable, key):
"""Yields the first value from each sequential group (grouped by key function).
In other words, returns state changes. Also, always yields the last element
(if it wasn't already yielded), even if it isn't a state change.
"""
final = ()
for _, g in groupby(sorted_iterable, key=key):
yield next(g)
final = deque(g, maxlen=1)
yield from final # yield final state if not already yielded
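if __name__ == "__main__":
    # Usage sketch (illustrative): combine two diagrams with the boolean
    # operations defined above.
    a = TimingDiagram([(0, False), (2, True), (6, False)])
    b = TimingDiagram([(0, True), (4, False)])
    print(a & b)  # True only where both diagrams are True
    print(a | b)  # True where either diagram is True
    print(a @ 3)  # state of `a` at time 3 -> True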
|
STACK_EDU
|
Elite Looter tutorial:
The basic concept of Elite Looter is a cross-server currency: if you earn money on server A, you will have the same amount on server B, and the other way around. Elite Looter's currency revolves around gathering crates, opening them, and getting coins out of them. With coins you can play games, or upgrade your shop for better chances at better crates or more coins. This tutorial will explain step by step how Elite Looter works, but this is only the beginning. If you want to know, for example, how a specific game works, you can take a look at the documentation. Everything is explained there, with all possible error messages and a small tutorial on how to use each specific command.
The start and basic Elite Looter knowledge:
With Elite Looter you can get crates by typing messages in chat. All messages count, unless they are empty or come from a bot. Bots can't use Elite Looter at all. There are 6 standard types of crates: basic, normal, rare, epic, legendary and mythic. You get the least money out of a basic crate and the most out of a mythic, of course. The message telling you that you have gotten a crate looks like this:
You can open it with ;crate open <cratetype>, so for example ;crate open rare. You can see an overview of your coins and crates with ;profile. Patreon crates can be collected by typing ;weekly, but only patrons can use that command. With the ;tutorial command you can claim a free mythic crate; it also gives you a basic in-bot tutorial. If you are lucky, we have just hit a milestone like 3750 guilds and you can claim ;free, a crate that contains 5000 to 10000 coins. You can also claim a ;daily crate, which will give you a random crate (from the 6 crate types).
When you have opened your first crates
You should have around 2000 to 8000 coins once you have opened your free crates. With this money you can upgrade your shop. You can see the shop with ;shop:
The white bar fills up as you near the max level of an upgrade. You can upgrade something with ;shop buy <upgradename>. An overview of all the upgrade names can be found in the documentation. You can also type ;shop buy and check out the list the bot spits out.
Playing some games
Currently, Elite Looter has only a few games, but there are plans to add more in the future. The games you can play are a gamble game (with which you can gamble crates), a dice game, a bet game and a slot machine. I also made a little video to show off the slot machine:
You can see a list with all the games by using the command ;games. A tutorial on all the games can be found in the documentation.
For server owners and moderators
You can customize Elite Looter a little with the ;options menu. It has a handful of options, including dmcommands, dmmessages and customcurrency. Dmcommands sends the output of commands to people via DM instead of in the server chat. Dmmessages does the same, but for the messages telling you that you got a crate. Customcurrency activates custom currency; a tutorial about that will follow when the custom currency update goes live (countdown).
|
OPCFW_CODE
|
Why not demand President's/candidate's financial records instead of tax returns?
What significance could the information in Donald Trump's tax return have to his campaign? addresses what information could be gleaned from Donald Trump's tax returns if he allowed them to be released. But I wonder why there's so much focus on the tax returns, rather than other financial records.
While there's certainly some useful information there, and you can get a rough picture of a person's overall financial situation, it's still missing quite a bit. A person or corporation's tax return only reports those aspects of their finances that affect their taxes. For instance, you only have to report investments that produced income that year, as well as investments that you've sold so you can report capital gains and losses.
Why not demand access to candidates' more detailed financial records (e.g. net worth statements, budgets, corporate balance sheets)? Is it simply because there's no requirement that individuals keep detailed personal records like this, but tax returns are required, and falsifying them is illegal? But most wealthy people do have accountants and money managers who track their finances, so the records most likely exist and they could be requested. And if they have much of their wealth in businesses, like Trump does, those corporations will have detailed books.
Or is it even simpler: is there no law authorizing Congress to demand this information? And in the current political climate, it would be impossible to pass such a law -- the Republican Senate would not vote for it, and even if it did manage to pass, it wouldn't be by a veto-proof super-majority. On the other hand, there's a 1924 law that allows some congressional committees to obtain the tax returns of any taxpayer, so they go with what they can legally get (Trump is challenging that law, and it will probably end up in the Supreme Court). And even this law doesn't allow the returns to be disclosed publicly, unless the taxpayer consents.
Frankly, we don't have any right to demand somebody's personal information, be it taxes or balance sheets. Candidates disclose their taxes voluntarily; it is simply a practice that is now followed.
@Up-In-Air Actually, there is a 1924 law that allows the House Ways and Means Committee to obtain anyone's tax returns.
Put that in an answer @Barmar - we'd all like to learn more about this!
@cyber101 I added a link to the question, it's not really an answer.
As a practical matter, not everyone has "balance sheets" of any sort, but everyone with income files tax returns. Compiling a balance sheet might be burdensome (it would be for me, anyway), while supplying copies of tax returns would be as simple as emailing a few PDF files.
@jamesqf I addressed that in the third paragraph. If you use a program like Quicken to manage your personal finances, it can produce a balance sheet.
This request is not so shocking - in Poland all our elected politicians and top civil servants are required by law, among other things, to list all their major assets and liabilities, though this does not take the form of a balance sheet.
Everyone seems to be focused on "balance sheets"; I just meant that as an example of detailed financial records. I've edited the question to be more general.
FWIW, something like that is indeed required in some (many?) European countries. Obviously it can be intrusive and burdensome but nobody is forced to run for president.
@jamesqf If you were running for president, I can't imagine the difficulty of putting together financial records would be a considerable burden next to the effort of campaigning in the first place.
@Barmar: Sure, IF I used a program like Quicken (or GnuCash: https://www.gnucash.org/ since I don't do Windows). But I don't, and see no need to. And yes, if I were running for President, I could probably pay staff to do that sort of thing, but how about my state legislature or county commission?
@jamesqf We're talking about multi-millionaires, not random county clerks. If you can get by without keeping detailed financial records, there's probably nothing interesting in your finances to begin with. But I doubt that would be the case for anyone running for President, or even Senators. OTOH, AOC was a bartender before she became a Congresswoman -- she might not have any financial records worth reporting (but she has an economics degree and has run a business, so she might keep good personal finances).
@jamesqf And if you have a significant investment portfolio, you practically have to keep records in order to fill out your tax returns properly.
Another relatively straightforward pragmatic reason is that a tax return is, at some level, an objective concept and a formal document; while there are, for instance, regulatory requirements on what goes into a balance sheet for a corporation, there's no real formal definition of a balance sheet, particularly on a personal level. Since one of the current fronts in the fight for financial disclosure from candidates is legal (e.g. California's bill SB-27 requiring tax returns for candidates to appear on a primary ballot), using a format for that information that's already codified in law makes it much easier to create new legislation requiring that information.
This is perfect. I have upvoted accordingly. I'm deleting my old comments as they are no longer relevant.
This also seems like a very good reason.
Because there is a custom that presidential candidates share their tax returns.
Most presidential candidates are politicians not business people, so demanding their balance sheets would be meaningless. Beyond that, the few that have been business people, excepting Donald Trump, have put their money in blind trusts. With a blind trust, they don't actually know their balance sheet except at the highest level. And financial disclosures already cover that high level view.
Politico
Trump financial disclosure, 2016 (PDF)
Note that they complain about the financial disclosure information being self-reported. This contrasts with the income tax information, which is usually based on other forms. E.g. salary information is based on W-2 forms.
They don't usually put their money in the blind trust until after they're elected, do they?
Carter was a businessman (peanut farmer) and put his business in a (poorly run) blind trust while he was President. I'd also like to point out that releasing tax returns is a very new tradition in the Presidential race (I believe it only started after World War II) and was never required for office.
Also, while no President has been charged with tax fraud while in office, at least one Vice President (Spiro Agnew) did resign after he was charged with tax fraud, among other crimes, committed while he was governor of Maryland. So there is precedent that any fraud in Trump's taxes would certainly be grounds for his removal from office. A more benign speculation is that the returns would reveal Trump isn't worth billions, and his status as a billionaire was part of his campaign. Lying to the IRS is a crime... lying to the voting public is not.
@hszmv Actually even newer than that. Truman released his, but the next was Nixon, and that apparently started the tradition.
|
STACK_EXCHANGE
|
My esteemed colleague Flo and I recently worked on a television guide bot using api.ai + Amazon's AWS backend service to experiment with the capabilities of Voice User Interfaces (VUI). In this blog post I’ve covered the design process we followed to enable voice interfaces in an existing product and how we used api.ai to build natural dialogues.
Defining the features
Our aim was to explore what it takes to enable voice capabilities in an existing product and find out what we can do with it. There are some great design resources on VUIs, such as Google's Crafting a Conversation, but there is no mention of integrating voice with an existing product. Having clear information architecture for your project will help you create relevant user journeys that you want your assistant to support, without having to worry about platform-specific navigation patterns.
Visual vs audio
The main difference between a visual interface and an audio one is that dialogues are a single, linear communication channel. Here is an example on the current Netflix app compared to a potential audio assistant:
How long does it take to get to all possible features of the GUI vs the VUI?
It might take a split second to scan a screen and get a good idea of what to do next. But explaining every action verbally can take much longer. Dialogues need to be short and clear so the user will not lose interest and walk away.
On the other hand, voice input can be simpler and more direct than touch. When you need something you just speak your mind. You don't have to worry about finding the feature you need on the screen, navigating around the app, waiting for animations to end and so on. It’s easier and more natural to say "Send a message to Mum: I’m writing a blog post", instead of finding the 'New Message' icon, typing Mum's number, tapping the text input, typing the message and hitting the send button.
Vocabulary and branded words
In the field of computer science many metaphors and synonyms are already in use (a computer can have a mouse and keep information in its memory; Material Design talks about the metaphor of paper; the list goes on). It’s likely that you’re already using several metaphors in your own apps and systems to represent multiple features with a single word. All 4's 'Catchup' feature, for example, allows users to see which episodes aired in the past so they can find any updates from the shows they’re following. But how can the user be expected to know this feature exists, let alone know the correct name? How do you teach people detailed vocabulary, or should this only be available for ‘power’ users?
It’s worth creating alternative ways of prompting such features rather than expecting the user to know the branded word (in this case Catchup). Think about how someone would describe that feature conversationally. This can help when thinking about how a user would ask the chatbot for information such as "What happened in the latest episode of The Big Bang Theory?".
Designing dialogues for old and new users
Many users may not have used a voice assistant before, so you need to guide them through the process. One way to do that is through audio onboarding. A good way to approach this is with an introductory conversation with first time users. Let the user know what the assistant can do and how the user can access the features. Teach your user what to do if they’re not sure how to move forward. Let them know if they’re stuck at any given point they can ask for help. Remember, there are no visual hints at this point. You can’t expect a user to know what they can do without giving them hints.
In terms of experienced or power users, things need to move a bit faster. They already know the ins and outs of your assistant so they know how to act and what to say. For these users, you might want to include voice commands instead of dialogues, removing hints or guidance on features. For example a new user would be more likely to speak naturally eg "Could you please tell me more about X", but an experienced user may just ask for "Details".
Flexible scenarios via api.ai
When implementing your scripts into code, you need to consider which part of the discussion your inputs will be in. A phrase such as "Tell me more about X" is something a user could say at the beginning or in the middle of a conversation. There may be several variations of that question, such as "I would like to know more information about X" or the user may not refer to the show by its full name or use “it”. Luckily, all those scenarios can be easily handled with api.ai.
The way input dialogues are defined in api.ai is through intents. These are the questions the user can ask the chatbot, the chatbot’s responses back to the user and how those two statements are connected.
What is really cool about api.ai is that you can define contexts around those intents. This means that if you start a conversation about a show, the bot will be able to remember which show you are talking about if you use the word "it" ('Tell me more about it', or 'Remind me to watch it?').
Lastly, api.ai makes it really easy to create flexible conversations without having to tie specific parts of the discussion together. By creating two versions of the same intent, one that requires a context of discussion and one that doesn't, you can place the intent at any point in the conversation.
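As an illustration, the paired intents might be sketched like this (a simplified, made-up structure rather than api.ai's exact export schema; the names are invented):
{
  "intents": [
    {
      "name": "show-details",
      "userSays": ["Tell me more about @show:name"],
      "contextOut": ["show-conversation"]
    },
    {
      "name": "show-details-followup",
      "userSays": ["Tell me more about it"],
      "contextIn": ["show-conversation"],
      "contextOut": ["show-conversation"]
    }
  ]
}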
Voice is not just for assistants
Even though voice interfaces are often associated with assistants such as Google Home or Amazon Alexa, this is just the tip of the iceberg in terms of possible applications for this medium.
How can your existing application be enhanced by giving it voice capabilities? Maybe your app could allow users to listen to news on-the-go, or allow the user to navigate through content by verbally telling the app which sections they are interested in?
How about replacing annoying guide screens with a chatbot that informs the user of potential ways of interacting with the app instead? You could even combine different technological mediums. A great example of this is Starship Commander, a virtual reality game, which allows the player to control the narrative of the story by talking to the in-game characters. If you’re interested in tinkering with Arduino or Android Things, why not build a robot and give it a personality through conversation?
When designing dialogues, take into account the time it takes to say the words out loud and consider how direct the voice input might be. It’s advisable to cater for new and experienced users with input phrases and commands that will be meaningful to both groups.
Working on a VUI project can be fun and interesting from a design and a development perspective. I particularly enjoyed working with api.ai on this as it provides the freedom to create flexible scenarios. I’m looking forward to exploring how to improve the discoverability of features through dialogue structure amongst other techniques.
If you are interested in building your own voice assistant, Fransesco's blog on Everything Assistant at Google Developer Days 2017 would be a great place to get started.
Listing image by Jason Rosewell
|
OPCFW_CODE
|
By default, ImageMeter saves your images only locally on the device. This means that if you lose your device or it gets broken, you might also lose your photos. This page describes best practices to set up backups.
If you use dedicated backup software on your device, make sure that it really backs up all the app data. Many backup apps only record which apps were installed and reinstall them, but do not store the actual data. The Google Drive backup only restores the app settings, not the data.
Because of increased Android security policies, most backup apps can no longer access the data of other apps. This means that many backup apps only work if they have full root access to your device. Please check your backup app's documentation to see what it actually saves, and test it before you rely on it.
Here we describe how you can backup your data with ImageMeter's own tools.
You probably have a workflow where you are working on a project for some time and want to archive it afterwards. In that case, it is easiest to export the whole project as an IMF file and store it at a safe place. At a later time, you can re-import the IMF file into ImageMeter if needed.
With the cloud-sync function built into ImageMeter (part of the Business subscription plan), you can automatically mirror all your data to a cloud server. Even if you lose your device, you can simply reconnect to the same cloud server account with another device and the data stored there will be synced back. As a bonus, you can also sync several devices to one cloud account and work on the same photos concurrently on all devices.
When you have set up your backup, you may want to test whether it is functioning properly. You may do so like this:
Initiate a sync with the cloud icon in the bottom right corner of the main screen. It should now upload all your images to the server. You can see the upload progress in your notification message area (pull down from top of screen).
Once the upload has finished, have a look into the cloud server directory using the respective app or the web interface of your service, e.g. on the Google Drive or OneDrive web pages. You should see the folder structure of your photos in your cloud backup-directory. There will also be a separate folder for each of your images. You should find your original images in there, without any annotation, and there should be a file annotation.imm for each image.
If you have a second Android device, you can test data recovery from the backup by connecting the second device to the same cloud account. When you start a cloud sync on the second device, it should download all images in the backup (note that it will also upload the images from the second device into the backup).
If you don't have a second Android device, you can test data recovery like this:
The backup directory on the cloud server should solely be used by ImageMeter for the backup. Please do not use this directory for other purposes. The directory will contain data in its own, internal format. Please do not modify or delete any files in this directory manually.
If you connect several devices to the same cloud account and the same backup directory, the data will sync between all devices.
For more information on the cloud sync function, see here.
|
OPCFW_CODE
|
This week Microsoft renewed my MVP status meaning I am an MVP for another 12 months. This is my third year of being an MVP so I thought it might be a good opportunity to write about my experiences with the programme and the kinds of things I do to stay within the programme.
What Is An MVP?
An MVP (Most Valuable Professional) takes its name from the US sporting accolade of Most Valuable Player. For those of us outside of the USA, this is broadly equivalent to a ‘Man/Player of the Match’ or a ‘Best and Fairest’ award. The Wikipedia article sums it up pretty well:
The Microsoft Most Valuable Professional (MVP) is the highest award given by Microsoft to those it considers “the best and brightest from technology communities around the world” who “actively share their … technical expertise with the community and with Microsoft”.
The key thing to note is the reference to community contribution. What the award does not recognise is elite programming skills. As some of you may know, I am not a programmer. I used to code C++ a long time ago but I am not a .Net programmer, and yet I am an MVP for Dynamics CRM: a program built on .Net and extended using .Net.
Also, the award does NOT recognise those that exclusively drink the Microsoft Kool Aid. MVPs are often the most outspoken critics of the flaws in the products they work with. Microsoft welcomes this because, to stay competitive, they need to know what is not working with their products. While MVPs do not often post scathing criticisms on forums or in their blogs they do, behind closed doors, let Microsoft know in no uncertain terms where the problems are with their products. I will talk more about these closed doors a little later.
How Do I Become An MVP?
This is a question that is often asked and it is difficult to answer because there is no specific ‘track’ to getting the award. There is no set of certifications or qualifications which are needed. One thing that is required is nomination. In my case I was nominated by another CRM MVP and this was seconded by a Microsoft employee working with Dynamics CRM. Traditionally, this was how prospective MVPs were put forward (one external, often an MVP themselves, and seconded by a Microsoft employee). However, this is not necessary. Anyone can e-mail someone they believe is deserving of the award (including themselves). The details are here.
Once nominated, a panel within Microsoft reviews the application. I have no idea who is on this panel, nor where they are located (although I assume it is in Redmond). Community contributions from the previous 12 months and technical knowledge are considered. There are no official levels of activity required and it is presumably a subjective decision weighed against the relative merit of other candidates and existing MVPs. Intakes into the programme are quarterly (January, April, July and October).
Once successful, MVPs are reviewed on an annual basis and must be able to demonstrate community activity on par with that which got them into the programme in the first place. If an MVP stops contributing, they will not be renewed.
What Is Meant By ‘Community Contributions’?
Occasionally, Microsoft do release a document talking about the activities considered to be contributing to the understanding and appreciation of the product by the broader community. Typically, the kinds of things mentioned include:
- Participating in the online Microsoft forums
- Giving talks at user groups or conferences
- Organising events for the public such as user groups or public demonstrations
Of these, the forums are the easiest one for Microsoft to measure. You need a live ID to login, meaning it is easy to track how long you are in the forums, and the forums track who proposes answers and whether they are acknowledged as an appropriate answer to the question being asked. The most difficult of these for Microsoft to measure are public appearances. If you are running a user group in a remote foreign land, this is much harder to verify than your forum activity.
What Are The Benefits Of The Program?
Certainly there is no money in it so if you are looking for some kind of monetary reward for getting on the forums and blogging excellent code, you will be sorely disappointed. In my opinion, the biggest benefit is an invitation to the MVP Summit held in Seattle each year around February-March. While it is up to the MVP to pay for flights, accommodation is subsidized and Microsoft keep all attendees fed and watered for the entire time. You get to meet the product team, you get to tell them what you really think and you also get to find out where the product is heading (under an NDA agreement, of course). You also get to go to the Microsoft Shop at the Redmond Campus and buy Microsoft goodies at staff rates.
Throughout the year MVPs also get access to exclusive email groups where they can raise issues they may be having and get ideas from other MVPs and from the Microsoft product teams. The MVPs also use these channels to provide feedback on improving the product. With the sheer volume of communication that occurs in these channels, it would be fair to say the number of messages I try to get across has probably doubled since getting into the programme. However, for understanding the finer aspects of a product, there is no better source of information.
Other benefits include a subscription to MSDN/TechNet, free Microsoft support tickets and free or discounted software from third parties.
What Does Leon Specifically Do To Remain Active In The Community?
Obviously there is this blog. I try to write an article once a week but will often give myself one weekend off so that I generally put out three articles per month. Articles mostly fall into one of three types:
- Codeless solutions or handy, lesser-known features of Dynamics CRM
- Commentary on how CRM is stacking up to its competitors (you know who you are)
- General thought leadership of marketing and business practice
I also tweet when I come across something I feel would be of interest to non-coders involved with Dynamics CRM (users, non-technical administrators, buying decision makers etc.)
I try to propose answers for at least ten forum questions a month but, with the friendly rivalry between the forum participants, it is difficult to get to a question before someone else has answered it. It is really surprising if a question does not get some kind of response within an hour or two.
I will talk at any event about Dynamics CRM and often do so for free. Great examples of this are the online Decisions conferences. A number of CRM MVPs regularly present at Decisions with no compensation other than the satisfaction of getting a soapbox to stand on for 30 minutes. If you are looking for a speaker on a Microsoft product, I strongly recommend approaching an MVP. Generally they present well, are knowledgeable and very friendly. As I often say, I will attend the opening of an envelope if it means I get to speak on Dynamics CRM.
To provide content for my blog and tweets, I read a lot of articles on Dynamics CRM and the CRM industry in general. These come to me, almost exclusively through Outlook and are sourced from RSS feeds, Google alerts, LinkedIn groups and tweets. I also read the posts to the forums, via an RSS feed in Outlook.
These are my personal Outlook folders I read every day:
Using Outlook rules, messages get diverted to ‘holding bays’ for reading when I have time. As you can see, there are literally thousands of messages I have waiting to be read and while I will not get to read them all tonight, they are in my PST waiting for me when I get bandwidth (airport terminals and plane flights are excellent for this). For the tweets, I use TwInbox, an excellent product for tracking tweets in Outlook.
All of the above I generally do outside of working hours as I have a full time job. I also have a wife and two kids so I often do things like read articles once the little ones have gone to bed. As an example, I am writing this blog at midnight on a Friday night. The television is on (showing Conan) but I am watching it over the top of my laptop screen.
My Experience With The Programme
My experience has been very positive. I am yet to meet an MVP I did not like. By their very nature, they are smart, eloquent and willing to share information or talk to others, especially if it is about the product they were awarded for.
In terms of the work involved in maintaining the award, to be honest, I would be doing these things anyway. I tend to be a little obsessive-compulsive when it comes to knowledge and learning so squeezing as much information as possible into my brain at every possible opportunity is kind of who I am.
Also, getting to see the human side of the Microsoft ‘machine’ in the form of online discussions involving the CRM product team is great. It is all too easy to consider Microsoft as a faceless engine pumping out software and making a few bucks along the way but, like every organisation, Microsoft is made up of people and getting to know these people is a rare and welcome experience.
If I did not enjoy it, I could simply resign; an option that is available to every MVP but I have no motivation to do that at this time.
If you are looking to become an MVP as a badge of honour, you will struggle. The fact is, becoming an MVP and keeping the MVP status requires a lot of work in terms of maintaining relevancy and expertise. It also requires a willingness to share this hard-sought knowledge at the drop of a hat. My advantage in this regard is that I did a physics degree, not an IT degree, so the academic philosophy of sharing knowledge for the benefit of the many is hard-wired into me. There are many people in IT who are experts but who hoard their knowledge to maintain an advantage over others. This is not the way of the MVP.
If, after all this you think the MVP programme is for you, I wish you the best of luck. It is a lot of work but I enjoy it immensely and I look forward to seeing you at ‘Summit’.
|
OPCFW_CODE
|
//**************************************************************************************
//
// Command Line Input Checking Function
//
// Kristian Zarebski, October 30th 2015
//
// Checks user input and also returns help if requested
//
//**************************************************************************************
// Header files
#include <string>
#include <iostream>
#include <stdlib.h>
#include "processCommandLine.hpp"
bool processCommandLine(char* argv[],CommandLineInfo &info_args){
std::string input{""}, input2{""}; //Strings for arguments
//Check that an option has been stated when program is initiated
if(info_args.num_args == 1){std::cout << "ERROR: Program option required, type 'mpags-cipher -h'"\
" for details \n"; return false;}
input = argv[1]; // Set input to be the value of argument 1
//Ensure that the first argument is either Code/Decode/Help option
if(input != "-c" && input != "-d" && input != "-h"){std::cout << "ERROR: Program" \
" option required, type 'mpags-cipher -h' for details \n"; return false;}
// Print instructions if 'help' selected as program option
if(input == "-h" || input == "--help"){ std::cout << "*****************************"\
"************************************* \n" \
"\t \t \t CaesarCipher v0.1.7 \t \t \t \n" \
"****************************************************************** \n \n" \
" -c \t \t \t Code a Message \n -d \t \t \t Decode a Message \n -h \t \t \t"\
" Display this help message \n -i \t [filename] \t Use input file for" \
" message, must be followed by a valid file name \n" \
" -o \t [filename] \t Specify an output file to print translated text to,"\
" must be followed by a valid file name \n \n Program requires either '-c',"\
" '-d' or '-h' flags to work. \n \n"\
"E.g. 'mpags-cipher -d -i inputfile.txt -o outputfile.txt' \n \n"; exit(0);}
if(input == "-c"){info_args.ciphermode = CipherMode::Encrypt;}
else{info_args.ciphermode = CipherMode::Decrypt;}
// Check if file options selected by user and ensure that where an input/output file
// has been requested, the user specifies a valid file name (cycles through arguments)
for(int i{2}; i<info_args.num_args; ++i){
input = argv[i];
if(i != info_args.num_args-1){input2 = argv[i+1];}
// Reject unrecognised flags ('--Cipher' is handled below, so allow it here)
if(input.front() == '-'){if(input != "-o" && input != "-i" && input != "--Cipher"){
std::cout << "ERROR: Invalid argument '" << input << "' \n";
return false;}}
if(input == "--Cipher"){if(i==info_args.num_args-1){
std::cout << "ERROR: Cipher Type Not Specified! \n";
return false;}
else if(input2 == "caesar"){info_args.ciphertype = CipherType::Caesar;}
else if(input2 == "playfair"){info_args.ciphertype = CipherType::PlayFair;}
else{std::cout << "ERROR: Invalid Cipher Type! \n"; return false;}
}
if(input == "-o"){if(i ==info_args.num_args-1){
std::cout << "ERROR: No Output File Defined! \n";
return false;}
else if(input2.front() == '-'){
std::cout << "ERROR: Invalid File Name \n"; return false;}
info_args.outfile = argv[i+1];
}
if(input == "-i"){
if(i == info_args.num_args-1){
std::cout << "ERROR: No Input File Defined! \n";
info_args.infile = input; return false;}
else if(input2.front() == '-'){
std::cout << "ERROR: Invalid File Name \n"; return false;}
info_args.infile = argv[i+1];
}
}
return true;
}
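// Example caller (a sketch: CommandLineInfo's fields are inferred from the
// usage above and are declared in processCommandLine.hpp):
//
//   int main(int argc, char* argv[]) {
//       CommandLineInfo info;
//       info.num_args = argc;
//       if (!processCommandLine(argv, info)) return 1;
//       // ...run the cipher using info.ciphermode, info.ciphertype,
//       //    info.infile and info.outfile...
//       return 0;
//   }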
|
STACK_EDU
|
Foobar2000:Components/Playback Statistics v3.x (foo playcount)
Stable release: 3.1.5 (March 15, 2023)
UI module(s): Default UI; Columns UI
This component collects and maintains statistics for played songs.
The statistics include:
- Time/date first played
- Time/date last played
- Playback count
- Time/date added to the Media Library
New playback statistics data pinning scheme introduced in version 3.0
Playback statistics are now pinned to a combination of artist + album + disc number + track number + track title information, contrary to pre-3.0 versions which would pin data to file paths.
The consequences of this behavior are:
- Statistics are shared between redundant copies of the same tracks - useful when you keep separate copies of your music in different formats such as lossy + lossless.
- Automatic carrying over of statistics when acquiring the same music in another format, as long as tags match.
- No risk of data loss when moving files around or between computers.
When editing tags, affected playback statistics records are transferred accordingly.
Starting from version 3.0, collection of playback statistics is no longer restricted to your Media Library content. You can use this component without using Media Library at all, however, you should keep your non-ML music referenced from a playlist for foobar2000 to maintain the statistics.
Playback statistics data is no longer dropped when the tracks are removed from the media library. A record gets removed when no matching track has been seen by foobar2000 (in Media Library or in any playlist or in an imported XML backup of playback statistics) for four weeks.
Title formatting fields
- %FIRST_PLAYED%: Date/time at which the song was played for the first time.
- %LAST_PLAYED%: Date/time at which the song was last played.
- %PLAY_COUNT%: How many times the song has been played.
- Estimate of how many times per day the song has been played, based on time first played, time last played and times played.
- %ADDED%: Date/time at which the song was added to the Media Library.
- %RATING%: Song's rating, on a 1–5 scale.
- Song's rating, formatted as up to five stars, e.g. ★★★
- Song's rating, formatted as five stars, e.g. ★★★☆☆
Note: You may need to change your fonts for the stars-formatting fields to produce readable output.
Detailed information about Title formatting for Playback Statistics can be found here.
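For example, a simple playlist column pattern built on these fields (a minimal sketch) would show either the play count or a placeholder:
$if(%play_count%,%play_count% plays,never played)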
XML backup functionality
You can export playback statistics to an XML file and import them later, through Library => Playback Statistics menu commands, or through context menu on specific tracks. This can be used to easily transfer playback statistics between different foobar2000 installations or profiles.
As with all previous releases, this component is fully backwards compatible with databases created by any versions released publicly before. If you have used an earlier version of the Playback Statistics component before, your existing data will be automatically imported on first run.
If you wish to keep the ability to revert to an older 2.x version, please back up your PlaybackStatistics.dat file before running the new component for the first time.
Common sources of confusion
Please note that foo_playcount takes over %PLAY_COUNT% and all other mappings corresponding to its native fields listed above, using these mappings to return the fields from its own database rather than from the files themselves. If you want to display such fields from the tags of the files rather than from the database, use $meta(rating) and so on in your title formatting. This behavior changed in version 3.0: previous versions would fall back to reading the field from the file's tags when the corresponding field was not present in the database.
It is important to note that the component cannot retrieve data from before it was installed; therefore, it initializes statistics when it is installed. Most uses of the %ADDED% field, for example, will at first return the same value for the entire media library; this is because foo_playcount has no way of guessing when the files were actually added, because it was not present at the time. Thus, the default value of %ADDED% is set to the time the component was first loaded. Any items added after foo_playcount is installed will have the field set properly, because the component is present to detect them.
|
OPCFW_CODE
|
Make Segment Compile On Windows
Adjust the Segment codebase to be buildable on Windows.
Current dependencies on/for this PR:
develop
PR #7 👈
This stack of pull requests is managed by Graphite.
@compnerd when I test this PR in CI and on my local I see the same test failures (yay consistency), but the errors that I see if I turn on additional logging locally seem to be focused around libcurl and ssl, is this possibly just an issue with this toolchain build?
Test Case 'FlushPolicyTests.testIntervalBasedFlushPolicy' started at 2023-11-28 14:39:54.022
noneCleaned up 0 non-running uploads.
noneUploads in-progress: 0
noneProcessing Batch:
0-segment-events.temp
noneError uploading request Protocol "https" not supported or disabled in libcurl.
errorAn internal error occurred: networkUnknown(Error Domain=NSURLErrorDomain Code=-1002 "(null)")
noneProcessed: 0-segment-events.temp
noneCleaned up 0 non-running uploads.
noneCleaned up 1 non-running uploads.
noneUploads in-progress: 0
noneProcessing Batch:
0-segment-events.temp
noneError uploading request Protocol "https" not supported or disabled in libcurl.
errorAn internal error occurred: networkUnknown(Error Domain=NSURLErrorDomain Code=-1002 "(null)")
noneProcessed: 0-segment-events.temp
noneCleaned up 0 non-running uploads.
libcurl shouldn't be exposed - that is only accessible indirectly, through FoundationNetworking. We do disable SSL as we use WinSSL so that the system configuration is made available.
@brianmichel and I paired a bit on this. This is a regression since 5.9.1. There is a fix in flight for this.
Link to the fix in question, https://github.com/apple/swift/pull/70077
@compnerd looks like the Swift PR merged! Do you happen to know which snapshot it might start appearing in so I can adjust CI on this PR?
@brianmichel since you are using thebrowsercompany/swift-build: 20231129.2
@compnerd @darinf I believe this is ready for review. There are two tests that seem pretty flaky around storage, but I don't think those should hold us back on merging this, personally.
@compnerd the testPurgeStorage uses the purge functions on the SDK to basically just remove files on disk. I know we've seen other issues with file manipulation and foundation, could this just be another bug?
Plausible! I wouldn't count out any bugs in Foundation. Would be good to understand the exact usage and make sure that it isn't that case.
Merge activity
Dec 4, 5:13 PM: @brianmichel started a stack merge that includes this pull request via Graphite.
|
GITHUB_ARCHIVE
|
Can I use two 2x8 beams instead of one 4x8 beam for deck?
I am trying to "optimize" materials purchase when building a deck. So I have two questions:
Is two 2x8x16 beams the same as one 4x8x16 beam for deck support? Do I need to bolt / nail them together or just sit them side by side on top of the posts?
<question about pricing removed since it's off topic>
If you pair the beams up, you should glue them with construction adhesive. Just nailing or screwing them together won't really distribute the load across both beams. Although, you're probably not planning on hanging or resting anything on just one beam and expecting the other beam to help with the load anyway.
A pair of 2x8 beams are going to be 3" wide (typically 1/4" is planed off each side of dimensioned lumber to give it the finished surface).
A 4x8 beam is going to be 3-1/2" wide. The extra half inch is going to add some strength to the 4x8 beam.
The 4x8 might be more prone to warping than a couple of 2x8's if it's a solid beam (but not if it's a laminated beam).
If this was for indoor use, you could sandwich a strip of plywood between the 2x8 beams and glue (not nail) all three pieces together to create a laminated beam that would be stronger and less prone to warping than the 4x8. But since you're building a deck, the sandwiched plywood would soak up water and be a problem.
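A quick numeric check of those widths (a rough sketch only; for beams of equal depth, bending strength is taken here as proportional to width, ignoring lamination effects):
```python
# Rough comparison: for equal depth, a rectangular beam's section
# modulus (and thus bending strength) scales linearly with its width.
two_2x8_width = 2 * 1.5   # two "2-by" boards at 1-1/2" each = 3"
one_4x8_width = 3.5       # a "4-by" is 3-1/2"

ratio = one_4x8_width / two_2x8_width
print(f"4x8 width is {ratio:.0%} of the doubled 2x8 width")  # ~117%
```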
This is a good answer. Looks like I will not use two 2x8s then, because I do plan to use joist hangers on both sides. Plus, dealing with adhesive is gonna take more time than paying an additional $10 for one 4x8 ;-) Just thinking out loud
Nowadays they don't actually plane 1/4" off each side of dimensional lumber. They cut it much closer than that. That's one reason the thicker boards cost more, they require more timber to make.
Sure. Tomato/Tamawto, though. The net result either way is that a "2 by" piece of lumber is 1-1/2" wide, so two "2 by" pieces of lumber together will be 3" wide, and a "4 by" piece of lumber will be 3-1/2" wide.
Two 2x8s would be 12% stronger than a 4x8 beam if those were true dimensions; but two 2x8s = 3" x 7.5", and a 4x8 = 3.5" x 7.5" (22% bigger), so the beam wins by 7%. (Look out for warping.)
this needs a reference
|
STACK_EXCHANGE
|
dynamics: singular or plural
Does the word dynamics take a verb in singular or plural form? In Google search, it looks like both are equally used. For example, which one is more appropriate?
Population dynamics is influenced by a number of factors.
Population dynamics are influenced by a number of factors.
If both are ok, are there any differences in their interpretations?
"Dynamics" can be either singular or plural, depending on usage.
http://i.word.com/idictionary/dynamics
The science of Dynamics is singular.
"Dynamics is important for Physics majors to study."
"Group Dynamics is a useful managerial tool."
A specific instance of "a pattern or process of change, growth or activity" can be called a dynamic. So in contrasting two or more of these you would use "dynamics" as plural.
"Your family dynamic was different from mine." "Our family dynamics were different."
So I would say that your first example refers to the study of population dynamics, whereas the second refers to the varying dynamics of two or more populations.
It is useful to see how "dynamic" and "dynamics" are used in the scientific writings of English-speaking masters and great writers like Maxwell and Truesdell, in a classic like Abraham & Marsden's Foundations of Mechanics, and in the Oxford English Dictionary.
• In The Classical Field Theories and A First Course in Rational Continuum Mechanics Truesdell uses "dynamics" in the following sense (A First Course... p. 6), and as a singular noun:
Mechanics rests upon three substructures: a universe of bodies, a geometry with its kinematics, and a theory of forces. These substructures provide the concepts mechanics is to connect. Relations among places, the shapes of bodies, forces, and times are of two kinds: the general ones, common to all bodies in an assigned universe, appropriate to a branch of mechanics, and the particular ones, which within a given branch distinguish one class of such bodies from another. The general relations are of two kinds: statics, which compares putative equilibria; and dynamics, which describes motions.
So that "dynamics" means the general discipline studying such relations; or sometimes such relations in more restricted contexts, e.g. "dynamics of viscometric flows" (p. 290), or as in the following passage (p. 24):
When it comes to systems of forces, the classical dynamics of mass-points offers a peculiar variant, to which we now turn for the nonce. In describing that dynamics we shall use [...]
He always uses "dynamic" (without final "s") only as an adjective, and never uses "dynamic" or "dynamics" to mean a specific process or motion. For the latter meaning he uses "dynamic process" (p. 198).
• A text scan of the two volumes of Maxwell's Scientific Papers shows that Maxwell also uses "dynamics", as singular noun, as Truesdell does (or I should say, Truesdell does as Maxwell does). The only two possible exceptions are in On physical lines of force, where he says "It remains that we should investigate the dynamics of the system, and determine the forces necessary to produce given changes in the motions of the different parts"; and in On the mathematical classification of physical quantities, where he says "the dynamics of the two systems are different". But again from the contexts it's clear that he's speaking about "general relations", in Truesdell's sense above, not of specific motions or processes. He always uses "dynamic" as an adjective only.
• In Abraham & Marsden's Foundations of Mechanics we again find "dynamics", as a singular noun, in the general sense of Maxwell and Truesdell; except possibly for one passage (p. 424): "the dynamics is generated by the constraints" [italics in the original]. But again from the context I'd personally say that it's meant as a set of possible motions rather than a particular motion.
• If we look for "dynamic" as a noun in the Oxford English Dictionary we are forwarded to "dynamics", except for one meaning: "Energizing or motive force", for which only a couple of poetic, non-scientific examples are given. Under "dynamics" (noun) there is no meaning equivalent to "process" or "motion", though – only the general sense of Maxwell, Truesdell, and Abraham & Marsden.
• Finally, I've never seen the cousin terms "kinematic(s)" or "mechanic(s)" used in the sense of a specific kinematic process or mechanical process. Also, none of the works above ever uses "a dynamic" or "a dynamics".
From the points above I personally would not use "dynamic(s)" to mean a specific process or motion. But language is fluid. If you want to use it to mean a specific process, and you think your readers will not misunderstand you, and you do not care if some of them criticize you for misusing the term... Then why not, go ahead :)
[PS: sorry for the scarcity of links: my "reputation" allows me max two.]
My understanding is that words such as kinetics, dynamics, thermodynamics are nouns and are singular. If they are used without the 's' at the end, they become adjective, such a thermodynamic system, a dynamic enterprise.
I would use "are" in this case, but it wouldn't seem too weird to use "is". It depends on the scenario: If you're talking about population dynamics in general, then you would use "is". If you're talking about a specific case with multiple different dynamics in it, then you would use "are".
"Dynamics" is not typically plural, just as "hydraulics", "phonics" or "physics" is not typically plural. It can be either, as this shows. https://books.google.com/ngrams/graph?content=dynamics+is%2Cdynamics+are&case_insensitive=on&year_start=1900&year_end=2012&corpus=15&smoothing=3&share=&direct_url=t4%3B%2Cdynamics%20is%3B%2Cc0%3B%2Cs0%3B%3Bdynamics%20is%3B%2Cc0%3B%3BDynamics%20is%3B%2Cc0%3B.t4%3B%2Cdynamics%20are%3B%2Cc0%3B%2Cs0%3B%3Bdynamics%20are%3B%2Cc0%3B%3BDynamics%20are%3B%2Cc0
@BrianHitchcock My mistake - I was assuming that we were talking about a specific instance. Fixed my answer.
|
STACK_EXCHANGE
|
Computer distributing system
A computer network is a collection of separate but interconnected computers, connected by a single technology; even a distributed system is a collection of. Reprinted in several collections, including distributed computing: concepts and implementations, McEntire et al, ed. IEEE Press, 1984. A (hopefully) curated list of awesome material on distributed systems, inspired by other fallacies of distributed computing, expect things to break, everything. Distributed systems: 9781543057386: computer science books @ amazon.com.
2 on distributed systems a distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable. This text is focused on distributed programming and systems concepts you'll need to there are two basic tasks that any computer system needs to accomplish. Distributed computing is a model in which components of a software system are shared among multiple computers to improve efficiency and performance.
In this chapter, we turn to the problem of coordinating multiple computers and processors first, we will look at distributed systems these are. Q: an alternative definition for a distributed system is that of a collection of independent computers providing the view of being a single system, that is, it. Hpdc is the premier computer science conference for presenting new research relating to high performance parallel and distributed systems used in both. This course covers abstractions and implementation techniques for the design of distributed systems topics include: server design, network programming,.
Cloud computing systems today, whether open-source or used inside companies , are built using a common set of core techniques, algorithms,. Distributed systems what is a distributed system a collection of autonomous computers a) linked by a network b) using software to produce an integrated. Distributed computing is a new form of online collaboration such projects distributed computing participants who are network or system. Distributed computing is the field in computer science that studies the design and behavior of systems that involve many loosely-coupled components. Local-area distributed systems his current research focuses primarily on computer secu- rity, especially in operating systems, networks, and large wide- area.
What is an operating system • an operating system is a resource manager • provides an abstract computing interface • os arbitrates resource usage between. So the talk is the eight fallacies of distributed computing the eight so, there are ways to add bandwidth to the system so, what have we. Distributed systems are by now commonplace, yet remain an often difficult area of research this is distributed computer system networked computer systems. And queueing systems: computing in distributed networked environments computer several popular message passing systems such as pvm (35), express .
International journal of parallel, emergent and distributed systems on developing theory of reservoir computing for sensing applications: the state weaving. Distributed cloud computing, distributed systems dagstuhl seminar 1 introduction most of the focus in public cloud computing technology over the last. This invited editorial appeared in the parallel and distributed computing an increasing number of distributed object computing systems, for example, must. In this paper we propose a clustered load balancing policy for a heterogeneous distributed computing system our algorithm estimates different system.
The distributed systems group at the eth zurich, led by prof friedemann mattern, pursues research in the areas of distributed computing. We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address. Our distributed computing systems engineering msc is run in germany this course covers a range of essential topics related to distributed computing systems,. Distributed computing is a computing concept that, in its most general sense, refers to multiple computer systems working on a single problem in distributed.
New systems engineers will find the fallacies of distributed computing and the CAP theorem as part of their self-education. Distributed computing involves breaking a computational problem down so that it can be solved by two or more computers in a network, which form a distributed system.
|
OPCFW_CODE
|
This tutorial explains how to create a Scavenger game in Loquiz where teams start each from the different starting field and then continue in a circle. Next task only appears on a map when the previous is answered. When all the tasks are answered FINISH pin will appear. You might want to use this case to accommodate traditional team exercise sets to make teams follow a certain order.
1. Choose a game type and add tasks
On Loquiz PRO webpage click “New game” and choose “Scavenger” as a game type.
Next, you can add tasks on the "Tasks" screen. If you do not want to use tasks, use the "No answer" type of task (you can even briefly display the exercise there). You can use all the rich task attributes that Loquiz provides.
You need 1 piece of content for every location and you need an additional piece of content for finish point. So if your exercise set is 4 assignments, add those 4 and then create one specific “This is finish” task.
2. Assign locations to the tasks
In case you do not use location-specific tasks, assign locations to each task. You can do it on the "Locations" screen. A location is the place where this piece of content (e.g. a task) pops open. Make sure that locations marked in PRO are exactly where your exercises will take place in the game (if using physical props).
Finish location should be where you want the teams to come when they have visited all the tasks’ locations.
3. Set activation rules and mark starting field + finish point
You can do it on “Activation” screen. First, you should create a circular activation sequence for the tasks. So that answering task no. 1 (Q1) brings task no. 2 (Q2) to the map. Q2 should activate Q3. And answering Q3 makes Q4 visible. You also need to make sure Q4 activates Q1.
Next, mark all actual tasks as starting fields (Q1-4 should be all marked as starting points).
Make sure that finish location does not have any activation rule set to it and that is marked as finish (F) and not marked as start (S).
4. Rotate start fields
Do it on the "Activations" page. The most important setting you should mark is "Rotate start fields". Then each team will get a separate starting field.
If you cannot see this setting, you have not set the starting fields; go back to the "Activation" screen and set them.
Now, what will happen is this.
When a team starts the game, they will get one task from the list of starting fields and it is shown on the map. Starting fields are assigned to teams in sequence, so that the 1st team to start gets task no. 1, the 2nd gets 2, the 3rd gets 3, and the 4th gets 4. The fifth team to start gets 1 again, etc.
At the beginning of the game only one starting point is shown on the map. After a team visits the first task, the next task will appear on the map. For team 1 the second point is 2, but for team 2 the second point is 3, etc. For team 4 the second point is 1. Players do not actually see the task numbers; they see task scores (difficulties) on the map.
Finish will become visible only when all the checkpoints have been visited or when game time runs out (if game time limit is set).
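A compact way to see both rules at once, circular activation plus rotated starting fields, is this little Python sketch (the names are mine, not Loquiz's; it just models the logic described above):
```python
# Model of the setup above: 4 tasks activated in a circle, with the
# starting task rotated per team. Task numbers are illustrative.
TASKS = [1, 2, 3, 4]
ACTIVATES = {1: 2, 2: 3, 3: 4, 4: 1}  # answering Qn reveals ACTIVATES[n]

def route_for_team(team_index: int) -> list[int]:
    """Order in which a team sees the tasks (0-based team index)."""
    start = TASKS[team_index % len(TASKS)]
    route, current = [], start
    for _ in TASKS:
        route.append(current)
        current = ACTIVATES[current]
    return route  # finish appears once every task has been visited

for team in range(5):
    print(f"Team {team + 1}: {route_for_team(team)}")
# Team 1: [1, 2, 3, 4]; Team 2: [2, 3, 4, 1]; ...; Team 5: [1, 2, 3, 4]
```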
Check this tutorial if you want to make teams to follow different routes.
|
OPCFW_CODE
|
QT - embedding translations works on Windows, not on Linux
In the SQLiteStudio I started using CONFIG += lrelease embed_translations for automatically embedding all translations into the app's resources. I did so by declaring:
CONFIG += lrelease embed_translations
QM_FILES_RESOURCE_PREFIX = /msg/translations
TRANSLATIONS += $$files(translations/*.ts)
This is done for all modules (in their pro files). Modules are compiled into shared libraries (such as coreSQLiteStudio and guiSQLiteStudio), and then there is an executable module, sqlitestudio, which is the application to run; it's dynamically linked to the others, so it looks like:
sqlitestudio <- executable (contains *.qm files)
`- guiSQLiteStudio.so (contains *.qm files)
`- coreSQLiteStudio.so (contains *.qm files)
Then in runtime I'm using translation files with Qt's resources system (by call to QTranslator::load() with :/msg/translations/coreSQLiteStudio_pl_PL.qm, :/msg/translations/sqlitestudio_pl_PL.qm, etc).
This works well under Windows, but - for some reason - not under Linux. The problem is that under Linux only files from sqlitestudio module (i.e. sqlitestudio_pl_PL.qm) are visible under the :/msg/translations prefix, while under Windows also other module translations (i.e. coreSQLiteStudio_pl_PL.qm, guiSQLiteStudio_pl_PL.qm) are visible under the same prefix.
I've debugged TRANSLATIONS += $$files(translations/*.ts) and it is resolved properly for all modules (under Linux too). Then I have debugged runtime contents of :/msg/translations and confirmed that only sqlitestudio qm files are visible under Linux, while under Windows all qm files (from all modules) are visible.
What could be causing this weird behavior?
(For wider code context you may refer to SQLiteStudio's code base - it's open source and available at GitHub)
EDIT - Further analysis:
A qrc file is generated properly by Qt, I can see it and it has expected contents. I also see the rcc to compile it to the cpp file and make to compile it to the object file, then I see the object file linked into the final shared library. I can see all these intermediate files in the build directory.
It seems that the problem is in runtime. I've listed all resources visible using function:
void printResources(const QString& path, int indent)
{
QDir d;
d.setPath(path);
for (QString& f : d.entryList(QStringList({"*"})))
{
qDebug() << QString(" ").repeated(indent) << f;
if (!f.contains(".")) {
printResources(path + "/" + f, indent + 4);
}
}
}
and then calling printResources(":/", 0);, which printed various resources, but it DID NOT contain QM files from the shared library resources, while it does contain QM files from the executable resources. It also has all resources that were explicitly added to another resources file in the shared library (some static resources, not auto generated qm files).
Why does Qt have problems accessing QM auto-generated resources from shared library and only under Linux?
Why do you still use qmake when even Qt itself dropped it?
The project started using Qt in 2014 and qmake was pretty common back then. There are no resources (time) at the moment to make a transition. I barely manage to work on features that are much more important than build tool migration.
It should be very easy to port, and the fact that you do not get support for this is due to using a dead and unmaintained technology. You will spend more time on qmake than on your actual work.
First of all, I would need to learn CMake - I have never used it, even a little. Secondly, in some plugins (which have their own pro files) I did some hacks, which might not be as easy to port as it seems. Nevertheless, I still think a lot of projects still use qmake and a lot of people know it.
I got the solution.
Short answer
Add QMAKE_RESOURCE_FLAGS += -name coreSQLiteStudio_qm_files to all pro files (and replace the coreSQLiteStudio_qm_files to unique name in each case).
If you have other (explicit) resource files in the project, you will need to have dynamic, but predictable names, like QMAKE_RESOURCE_FLAGS += -name ${QMAKE_TARGET}_${QMAKE_FILE_BASE}, so you can pass the name to the Q_INIT_RESOURCE() macro.
Long answer
The auto-generated (by qmake) resource file with the qm files inside is compiled by rcc into a cpp file with a default initialization function, qInitResources_qmake_qmake_qm_files(), whose name is then repeated across all other modules (shared libraries) and causes only one of them to be used at runtime. The solution is to make the initialization function unique for each module, so you need to pass a unique name for the resource initialization function to rcc. By using a statement like the one above (in the short answer) you will get the initialization function qInitResources_coreSQLiteStudio_qm_files().
It seems that a conflicting function name doesn't matter under Windows, but it does under Linux.
|
STACK_EXCHANGE
|
Ryan Corces, PhD
ryan.corces (at) gladstone.ucsf.edu
I graduated from Princeton University in 2008 with a major in Molecular Biology and a minor in Computer Science. While at Princeton, I worked under the mentorship of Coleen Murphy, studying C. elegans aging. During the summers I had relatively foundational scientific experiences studying learning and memory (with Cristina Alberini), and epigenetics (with Or Gozani).
After graduation, I spent a year living with family in Spain and teaching science to bilingual elementary school students. In 2009, I started my Ph.D. in the Cancer Biology program at Stanford University under the mentorship of Ravi Majeti. Together with Max Jan and Thomas Snyder, we provided the first genetic and cellular proof that AML evolves from the sequential acquisition of mutations in a hematopoietic stem cell. We went on to identify patterns in this mutational evolution, with mutations in epigenetic modifiers such as DNMT3A or TET2 occurring universally during the early "pre-leukemic" phase of the disease.
These findings led me to pursue postdoctoral training in epigenetics with Howard Chang at Stanford University. With Jason Buenrostro, we applied the assay for transposase-accessible chromatin using sequencing (ATAC-seq) to understand normal hematopoietic differentiation and leukemic transformation. This highlighted the utility of this technology for understanding complex cellular processes and we subsequently applied ATAC-seq to a cohort of 410 different tumor samples spanning 23 cancer types in collaboration with The Cancer Genome Atlas.
At about the half-way point of my postdoctoral work, I switched gears to study the genetic and epigenetic underpinnings of neurodegenerative diseases. Co-mentored by Thomas Montine, I used multi-omic epigenetic approaches to dissect the role of inherited variation in Alzheimer’s and Parkinson’s disease. This work serves as the launching point of the lab, driving our interest in using the epigenome to better understand neurological disease.
Fiorella Grandi, PhD
fiorella.grandi (at) gladstone.ucsf.edu
I grew up in Idaho, born to Argentinian parents. After falling in love with biology late in high school thanks to Campbell Biology's Tour of the Cell chapter, I went to Washington State University (WSU) to pursue a bachelor's in biochemistry. While at WSU, I fell in love with genome regulation while working on transposable elements and the variety of ways they create genomic and epigenetic variation. In 2014, I graduated and went on to pursue a Ph.D. at Stanford University in the lab of Nidhi Bhutani. There, I studied DNA methylation, specifically the regulation of 5-hydroxymethylcytosine dynamics by the TET family of enzymes in the context of skeletal development and disease, including dissecting the relationship between 5hmC deposition and SOX9 transcription factor activity. During graduate school, I also became interested in pursuing the use of single-cell technologies to understand how the epigenome shapes cell fate, and specifically how it can store information about environmental conditions, like chronic inflammation, that cause disease. Now, I'm combining these two interests to study mechanisms of resilience to AD in the Corces lab.
lucas.kampman (at) gladstone.ucsf.edu
I grew up in the Bay Area and studied Molecular & Cell Biology and German at the University of California, Berkeley. My introduction to biology research was in Oskar Hallatschek’s lab, where I worked with Dr. Jona Kayser on recombination-based tools for studying mechanical effects in microbial evolution. Since then, I’ve studied transcriptional regulation in mammalian development, the role of evolution in tumor metastasis, and the ecology of benthic cyanobacterial mats. I’m broadly interested in the way evolutionary processes have shaped mechanisms for gene regulation.
hailey.modi (at) gladstone.ucsf.edu
From a young age, I was always interested in science, especially the science behind different diseases. I majored in Biomedical Engineering at The University of Texas at Austin to learn more about the real-world applications of scientific and medical research. At UT, I joined Dr. Aaron Baker’s lab and studied nerve damage after traumatic injury. There, I helped engineer and test neural microelectrodes, and I also published an undergraduate thesis on drug delivery methods that could speed up nerve regrowth. Now, in the Corces lab, I work on neurodegeneration in a different context as it relates to the epigenetics of neurodegenerative diseases, like Alzheimer’s and Parkinson’s. My research experience has ignited in me a fascination with the brain, and I aim to join a neuroscience PhD program in the future.
We are recruiting!
Research Technician / Graduate Student / Postdoctoral Fellow
your.email (at) ucsf.edu
Interested postdoctoral fellows should e-mail Ryan at ryan.corces (at) gladstone.ucsf.edu with the following information: (i) a summary of their current and past research experiences, (ii) a short statement on the types of projects that they are interested in pursuing in the Corces Lab, and (iii) contact information for 3 references. Interested and motivated graduate and undergraduate students should contact Ryan to talk about potential projects.
|
OPCFW_CODE
|
include of non-modular header inside framework module 'Bolts BFCancellationToken'
I just downloaded the Parse sample project, installed the LiveQueries pod and nothing else. When I try to build it, it pops the two errors below. What's wrong?
add your codes please
@IvanBarayev I haven't written any code yet... I just downloaded the pods and tried building it...
Just in case you missed it: http://parse.com/migration
@EricD If you are referring to the "have to be in a self-hosted MongoDB to use LiveQueries", I have already migrated.. ;)
@SotirisKaniras I was referring to I just downloaded the Parse sample project and I was afraid you were just beginning with Parse. :)
@EricD Hahahaha!! No no... I started a while ago... :)
Go to Build Settings under "Target" and set "Allow Non-modular Includes in Framework Modules" to YES
AND
Select the BFCancellationToken.h file in the project navigator. In the Target Membership area on the right side of Xcode there will be a drop-down menu next to the target. Select "Public" there (the default is "Project").
Try cleaning both the project and the build folder and try again
Fixed with https://github.com/ParsePlatform/ParseLiveQuery-iOS-OSX/pull/50
The first part of the answer solved the issue. Thank you!
If you have project in objective-c add use_frameworks! to your pod file at start
I had exactly this issue and found it just resolved by using
gem install cocoapods --pre
Which gave me cocoapods 1.1.0.beta.1.
I'm very happy!!
I also did a
rm -rf Pods TargetName.xcworkspace Podfile.lock after quitting xcode and then rerunning pod install
I did that and now I'm facing the below errors:
https://cloud.githubusercontent.com/assets/9467442/17254633/3c587a68-55be-11e6-8acd-7a7c2acbb384.png
Please get the latest version of ParseLiveServer, I fixed it with pull request #50
You might need to add this to your podfile:
pod 'ParseLiveQuery', :git => 'https://github.com/ParsePlatform/ParseLiveQuery-iOS-OSX.git'
I solved it removing Modules folder from the framework.
Browse to your framework location which is present in the App Project using finder
Go inside the Test.framework folder (in the above case it will be 'Bolts.framework') and delete the Modules folder.
Clean and rebuild the app; it will solve the problem.
|
STACK_EXCHANGE
|
Being safe on the internet (was Re: Here we go again - ISP DPI, but is it interception?)
adrianhayter at gmail.com
Wed Aug 4 13:01:20 BST 2010
>> Consider that the url http://example.com/stuff/morestuff/ pointed to
>> the location /var/www/example.com/public/stuff/morestuff/ on a server.
>> Doing a directory traversal on the url (such as:
>> http://example.com/stuff/morestuff/../../../ ) would (on some insecure
>> servers) get the location /var/www/example.com/. Now we know from the
>> previous location that the directory 'public' is contained here, but
>> so could some other directories, such as 'logs' or even important
>> private information.
>> As you can see, this would matter to the host, since a lot of
>> webservers are configured to display the contents of directories when
>> they do not come across a specified index file (such as index.html or
>> index.php). If you have a folder that is meant to be publicly
>> accessible, you do not want people to be able to traverse out of that
>> directory and into one that contains private data.
> Most helpful - thank you.
> Taking the above example, could you explain the difference in effect
> between http://example.com/stuff/morestuff/../../../ and
> http://example.com/ <http://example.com/stuff/morestuff/>? Do they not
> lead to the same location on the server, namely /var/www/example.com/?
> Contact and PGP key here <http://www.ernest.net/contact/index.htm>
Since ../ means "go up one directory in the tree", it is perhaps simpler to imagine that you are at the url http://example.com/stuff/morestuff/ and are applying these ../ 'commands' one by one. So we are at the url, and we are going to apply ../ three times. Currently we are in the directory 'morestuff', and so applying the first ../ will take us up one directory to 'stuff'. The second ../ will take us up another level to the root directory of example.com. The third ../ will then take us up a further directory, but this can't be represented as a url, because we are going above the url root as it were, and into the realm of the actual filesystem itself.
If the url http://example.com/ points to /var/www/example.com/, then the following is true (assuming the webserver is set up in a simple manner):
http://example.com/stuff/morestuff/ => /var/www/example.com/stuff/morestuff/
http://example.com/stuff/morestuff/../ => /var/www/example.com/stuff/
http://example.com/stuff/morestuff/../../ => /var/www/example.com/
http://example.com/stuff/morestuff/../../../ => /var/www/
So whilst http://example.com/stuff/morestuff/../../ points to the same thing as http://example.com/, three directory traversals will go up even further.
On most webservers I've come across, there are systems in place to prevent this, and it doesn't matter how many times you add an extra ../, the furthest you can traverse is to the root of the actual URL (i.e. http://example.com). As a matter of interest, I applied this to my own website, and if you visit this link: http://adrianhayter.com/documents/../../../../ you should get the homepage (i.e. http://adrianhayter.com). Adding extra ../ doesn't change this behaviour.
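The resolution rules walked through above can be reproduced with any path-normalisation routine; as a quick illustrative check in Python (web servers do their own canonicalisation, so this only mirrors the arithmetic, not server behaviour):
```python
from posixpath import normpath

# Mirror of the mapping in the post: URL root -> /var/www/example.com
base = "/var/www/example.com"

for suffix in ["", "../", "../../", "../../../"]:
    url_path = "/stuff/morestuff/" + suffix
    print(url_path, "=>", normpath(base + url_path))

# /stuff/morestuff/          => /var/www/example.com/stuff/morestuff
# /stuff/morestuff/../       => /var/www/example.com/stuff
# /stuff/morestuff/../../    => /var/www/example.com
# /stuff/morestuff/../../../ => /var/www
```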
|
OPCFW_CODE
|
Before starting any project, even a small personal one, it’s important to start with its ‘why?’. There are several reasons for that.
First, generally speaking, for every project you need to answer three questions: why, what, and how. It is always better to start with ‘why’ because while answering it you will better define its requirements, scope, features, etc. (‘what’ of a project) and without that, it will be difficult to work on ‘how’.
Second, every project has a ‘price’, even if you are doing it yourself without spending a penny. It is called opportunity cost: instead of this project, you may be doing something else with a better return on investment (your time). It could be a different project or an altogether different kind of activity.
Third, due to optimism bias, we tend to think that the project will be easier to do and take less time than it will in reality. Sometimes it is true with very small tasks, but I have yet to see a project that was as straightforward as it seemed at the beginning.
Finally, you will understand whether you need to do it. I have seen dozens of projects that either lost steam before completion or after being completed they were hardly used. In most cases, it turned out that either there was no real need for this project in the first place or the problem was much more difficult and nuanced than expected and the cost/benefit analysis was not in favor of finishing it.
There are many ways to get to the 'why' of a project. For instance, the customer development process takes a deep dive into this topic, although it is a fairly heavyweight approach. If we are talking about a relatively small project (be it something new or a feature in an existing project), I tend to use the questions below to get to the 'why'. They start a conversation and I go deeper where necessary. General observations:
Depending on a project it may take from a couple of hours to days and weeks to answer these questions and the depth will vary a lot. The larger the project, the deeper you need to go, and answering one question will lead to new ones. This exercise is beneficial even for very small projects, which can be completed in several hours or days, since answering these questions will likely yield multiple insights and direct the development.
A rule of thumb: it is ok to spend at least 5% of the project’s time on planning. Depending on a project and the consequences of miscalculations at the beginning, this bar can be much higher. In any case, I would be suspicious if it takes less than that.
It is important to write them down. Something that is not written down has a much higher risk of being misinterpreted, forgotten, or distorted later (by others and even by you).
1. What problem does the project solve?
Give a short description of the problem and its context.
2. Whose problem does it solve? / Who are the target users?
Who will use it? For a pet project, an audience of one (you) can be good enough; otherwise, it is better to hypothesize about the users' main characteristics, their portrait. For a new feature in an existing project, it is often a subset of existing users.
3. How do you know that the problem exists?
The best way to confirm that the problem exists (and that the project indeed has a right to exist) is to see it in data, issues of interpretation aside.
The next best thing is users' feedback: if users repeatedly tell you that something is a problem, it is better to listen. Caveats:
Users and use cases differ. Sometimes a user can represent a very small part of the user base but be very vocal about their problem, inflating its importance. It does not mean that the problem does not deserve attention, but there can be more pressing problems.
Users may try to tell you not only what the problem is, but how to solve it. Both pieces of information are useful, but they are not absolute: while users tend to identify their problem correctly, they are not always able to assess how to solve it or how difficult that is (from the users' perspective it is always a 'fast and easy' fix), plus they do not know the limitations of your system.
Do not ask users if something is a ‘good idea’ or if they find it useful. In general, it is better to ask users about how they solved the problem in the past and not how they will do it in the future.
4. How do users solve this problem now or solved it in the past?
Beware if the answer here is that they do not solve the problem now. It usually means one of two things: either you have not researched enough or the problem does not exist. Even absolutely novel products solve an existing problem.
Note: Questions 5-7 pinpoint requirements for the new project.
5. What is bad in current/previous solutions to the problem?
Describe how the existing solution is not optimal for the users. If it is, do you really need this new project?
6. What is good in previous/existing solutions?
Even bad solutions have something good in them. These are the minimal requirements for the product, what not to lose.
7. What kind of improvement in users' life do we expect from the project?
This is the expected utility of the project for the end-user. Usually, it is based on inefficiencies of existing solutions (question 5 above).
8. How will you know that the project is successful?
This is an important question, which defines the target result of the project. For instance, for a pet project the bar can be as low as “I use this project daily for …, which saves me … hours per …” or “I add this project to my portfolio to make it more interesting to clients or potential employers”. For other projects, the success metric may look like the number of daily users, subscriptions, revenue, etc.
9. Can the problem be solved without developing a new project (manually or with an existing project)?
There are two sides to this question:
Prototyping and validating: can this project idea be tested and researched deeper without building the product (manual or semi-manual solutions, mockups, no-code platforms, etc.).
Competition: can the same expected utility be achieved manually, by modifying use cases in existing solutions or augmenting them? If so, it may be better to try them, they may prove to be a sufficient solution and eliminate the need for a project.
10. What will be the return on investment?
This topic deserves its own post, which I will get to in the future. Here is just a brief overview.
At any given point in time you can do different things. Doing one thing means you are not doing another (opportunity cost mentioned at the beginning). So you need to prioritize, choose between the projects. To do it you need to calculate their return on investment (ROI). There is no one size fits all solution for ROI, sometimes you need to go with your gut, but you still should try quantifying the project as much as reasonably possible.
You need to quantify two parts: costs and returns. The best option is to do it in money, but other options with the same units of measurement can also work. If you are going to compare different projects it is much more precise to use one unit of measurement.
It does not hurt to be more pessimistic. Imagine the project requires twice the planned resources or has two times smaller utility. This will make the ROI two times smaller. Will the project make sense then? Should you still do it?
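As a toy illustration of that pessimistic stress-test (the numbers and units are made up for the example, not a recommendation):
```python
# Toy ROI stress-test: halve the benefit and double the cost, then see
# whether the project still clears a break-even ROI of 1.0.
def roi(benefit: float, cost: float) -> float:
    return benefit / cost

planned_benefit = 40   # e.g. hours saved per year
planned_cost = 10      # e.g. hours to build

print(roi(planned_benefit, planned_cost))          # 4.0 - looks great
print(roi(planned_benefit / 2, planned_cost * 2))  # 1.0 - barely breaks even
```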
If after answering these questions the project still looks good and passes the 'why'/'what for' test, then it is time to plan and prototype it properly.
|
OPCFW_CODE
|
[Samba] SMB Windows ACL functionality
ballison at 45drives.com
Tue Jul 12 22:10:51 UTC 2022
First of all, thank you very much everyone for the replies; the insight is much appreciated.
Rowland, per your:
> I carried out the same tests on another machine (this time using
> 'rid'), but this computer did not map Administrator to root with a
> Everything else was the same.
> Logged into Win10 as Administrator, I couldn't change anything, I
> expected this.
> Logged in as myself, I could alter the permissions on the share that
> didn't have 'acl_xattr:ignore system acls = yes' set, but on the other,
> I got:
> An error occurred while applying security information to
> Failed to enumerate objects in the container. Access is denied.
I can verify I am seeing the same behaviour. Without 'acl_xattr:ignore system acls = yes' set, I can modify permissions from the root of the share. I can't modify at the root of the share as myself (DOMAIN\bailey) with 'acl_xattr:ignore system acls = yes' set, but if I create a file/folder within the share with that option set, I can then modify the permissions on that folder/file as I please. I suspect that is due to the created folder having full control for my user assigned when creating it, whereas at the root of the share my "DOMAIN\Domain Admins" group only has read/write/execute access, not full control, despite permissions being set to 0770.
These permissions then do not present as extended ACLs on the Ubuntu server when checking with getfacl on the directory; however, they do show up when querying with getfattr, which I assume is due to:
> so the module can implement the permission evaluation in userspace based
on the contents of the NT ACL stored in an xattr, without interference of
Which makes sense as we're reading the permissions in userspace from the NT
ACL as you have mentioned.
In addition, per the manpage for acl_xattr:
"acl_xattr:ignore system acls = [yes|no]
When set to yes, a best effort mapping from/to the POSIX ACL layer will not
be done by this module. The default is no, which means that Samba keeps
setting and evaluating both the system ACLs and the NT ACLs. This is better
if you need your system ACLs be set for local or NFS file access, too. If
you only access the data via Samba you might set this to yes to achieve
better NT ACL compatibility."
Which makes sense as this is the behaviour we are seeing.
I guess the curious thing at this point is that both methods do appear to
work just in separate ways? As a quick tl:dr would it be fair to say that:
With 'acl_xattr: ignore system acls = yes' set, the Windows ACLs are applied
in userspace and read from that. Downside is this allows anyone who has
local access to the filesystem full control as everything is written with
With 'acl_xattr: ignore system acls = no' set, the Windows ACLs are applied
directly to the actual filesystem/kernel similar to if we were to manually
set permissions with setfacl, in addition making the samba server aware of
the ACL changes directly as well as Windows clients.
Both options do allow setting and modifying of Windows ACLs but have
different results on the filesystem we're sharing out from the samba server.
So I guess perhaps it's almost up to user preference which method to use?
Obviously as mentioned before there's some caveats to using the 'acl_xattr:
ignore system acls = yes' options.
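As an illustrative aside, the xattr in question can be inspected from userspace; a minimal Python sketch (assuming Linux, a path inside the share, and sufficient privileges; security.NTACL is the attribute where the acl_xattr module stores the NT ACL):
```python
import os

path = "/srv/share/somedir"  # hypothetical path inside the Samba share

# List all extended attributes; with acl_xattr in use you would expect
# to see security.NTACL here (getfattr -m - shows the same thing).
for name in os.listxattr(path):
    print(name)

# The raw NT ACL blob (binary encoded); usually needs root to read.
blob = os.getxattr(path, "security.NTACL")
print(len(blob), "bytes")
```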
Again, really appreciate the insight on this from everyone as I feel I've
gotten a much better understanding of what is going on.
|
OPCFW_CODE
|
kaboom()
// Load custom bitmap font, specifying the width and height of each character in the image
loadFont("unscii", "/fonts/unscii_8x8.png", 8, 8)
// List of built-in fonts ("o" at the end means the outlined version)
const builtinFonts = [
"apl386o",
"apl386",
"sinko",
"sink",
]
// Make a list of fonts that we cycle through
const fonts = [
...builtinFonts,
"unscii"
]
// Keep track which is the current font
let curFont = 0
let curSize = 48
const pad = 24
// Add a game object with text() component + options
const input = add([
pos(pad),
// Render text with the text() component
text("Type! And try arrow keys!", {
// What font to use
font: fonts[curFont],
// It'll wrap to next line if the text width exceeds the width option specified here
width: width() - pad * 2,
// The height of character
size: curSize,
// Transform each character for special effects
transform: (idx, ch) => ({
color: hsl2rgb((time() * 0.2 + idx * 0.1) % 1, 0.7, 0.8),
pos: vec2(0, wave(-4, 4, time() * 4 + idx * 0.5)),
scale: wave(1, 1.2, time() * 3 + idx),
angle: wave(-9, 9, time() * 3 + idx),
}),
}),
])
// Like onKeyPressRepeat() but more suitable for text input.
onCharInput((ch) => {
input.text += ch
})
// Like onKeyPress() but will retrigger when key is being held (which is similar to text input behavior)
// Insert new line when user presses enter
onKeyPressRepeat("enter", () => {
input.text += "\n"
})
// Delete last character
onKeyPressRepeat("backspace", () => {
input.text = input.text.substring(0, input.text.length - 1)
})
// Go to previous font
onKeyPress("left", () => {
if (--curFont < 0) curFont = fonts.length - 1
input.font = fonts[curFont]
})
// Go to next font
onKeyPress("right", () => {
curFont = (curFont + 1) % fonts.length
input.font = fonts[curFont]
})
const SIZE_SPEED = 32
const SIZE_MIN = 12
const SIZE_MAX = 120
// Increase text size
onKeyDown("up", () => {
curSize = Math.min(curSize + dt() * SIZE_SPEED, SIZE_MAX)
input.textSize = curSize
})
// Decrease text size
onKeyDown("down", () => {
curSize = Math.max(curSize - dt() * SIZE_SPEED, SIZE_MIN)
input.textSize = curSize
})
// Check out https://kaboomjs.com#TextComp for everything text() offers
|
STACK_EDU
|
File indexing software WinCatalog 2024 will scan disks (HDDs, DVDs, and others) or just the specific folders you want to index, and create an index of your files.
WinCatalog will automatically index ID3 tags for music files, Exif tags and thumbnails for image files and photos, thumbnails and basic information for video files, contents of archive files, thumbnails for PDF files, ISO files, and much more.
You can set an automatic update of an index, using command line and task scheduler.
You can organize your files perfectly, using tags (categories), virtual folders, and user-defined fields, and find files in seconds using an advanced, powerful search, including search for duplicate files and filtering of search results, even without inserting or connecting disks to a computer!
"Superb software, very intuitive and stable. Makes indexing and then accessing large numbers of media files exceedingly easy and efficient."
"Excellent program for indexing all the files across the PC and external and backup drives. Makes keeping track of everything and finding where things have been stored just so much easier. Found photos and files I didn't realise I still had. Also great for finding unneccessary duplicates and freeing up disk space."
WinCatalog 2024 file indexing software is database driven. It uses the industry standard SQLite database engine in the core. This helps to index file collections of any size. No matter how many disks, files, or folders are stored in your collection – WinCatalog 2024 will handle all of them!
WinCatalog 2024 file indexing software can index picture thumbnails for most popular image types (like jpeg, png, bmp, and others) and video files, and store them inside an index. The thumbnail previews are available even without the link to the original files, so if you search for a photo, image or video, you can preview it to find a specific original disk quickly!
WinCatalog 2024 file indexing software fully supports Unicode. No matter in what language your disks, files, and folders are named. WinCatalog 2024 will correctly index all files.
In addition to the Contact manager and Keyword manager, WinCatalog 2024 has a new Location manager that makes managing physical locations easier. You can add all your locations, say "box 1" or "cd wallet 2", and associate every item in the catalog with a location. It will help you find files faster.
WinCatalog 2024 file indexing software has a tabbed interface that allows you to keep several various search results simultaneously.
* Screenshots were made under Windows 10 (WinCatalog as a desktop app)
"Very pleased with WinCatalog 2017. It's been the answer to properly indexing and accessing years worth of jobsite photos and videos we've amassed. As 've told the company owner, if you can't locate it, you don't have it. The bottom line: given the vast storage capacity available to consumers today, indexing must go hand-in-hand with storage"
"Best thing that has happened for LTFS LTO5 - 7 tape offline indexing! Highly recommended! Combined with automation software and tape library and windows 7 - PERFECT! Best of all - I can make copies of the catalog file and install the software on other computers to browse a copy of the catalog whenever I need it! Thank you very much for that!"
"The program fulfills all functional expectations according to the purpose for which it was created and designed, ie cataloging and indexing data."
|
OPCFW_CODE
|
Saving graph of objects that had CRUD changes, back to database via NHibernate
I have a problem saving a graph of objects into the database via NHibernate. Specifically, I read the data from the database, initialise the objects using the data, make changes to the objects, and try to save the objects back to the database, via SaveOrUpdate on the root report instance, in one transaction. The changes might include add, update, and delete operations on the objects.
I want to find out whether or not NHibernate is able to figure out which types of SQL commands (i.e., update, delete, insert) to generate, and the sequence in which to execute them based on database foreign key constraints, all in one transaction.
Below is my code:
public class Report{
public virtual int Id {get;set;}
public virtual IList<Report> Children {get;set;}
public virtual Report Parent {get;set;}
public virtual IList<Parameter> Parameters {get;set;}
}
public class Parameter{
public virtual int Id {get;set;}
public virtual Report Report {get;set;}
}
A report can contains a collection of parameters, reports, and a parent report.
After being initialised based on the data retrieved from the database, changes are made to the graph of objects, and I try to save it into the database by using NHibernate's Session.SaveOrUpdate(), passing the root report instance. However, it throws an exception saying it cannot insert null into the id column.
Edit
I only want to find out if it is possible for NHibernate to generate sql commands (CRUD). I will open another question if I have problem in my code.
Any idea would be very much appreciated.
-1 this is a terrible question- no code or mapping is shown, nothing is said as to what changes are made, no details of the exception are given... in spite of all that- I think @Ryan Stewart managed to give a helpful answer
@sJhonny If you know the answer, you will know what area needs changes. Please give the answer or go away!
@sJhonny read it carefully, the exception has been provided. Please dont make terrible vote on someone's post.
the possibilty to vote down exists in order to indicate that a question is not informative. no one here works for you, so the attitude of 'give the answer or go away' is not what SO is about.
I meant to indicate that your question is not clear and makes it hard to help you (therefore- voted down). so you can go ahead and argue the vote down, or you can try to actually provide the missing information so that i can give you the answer.
@sJhonny Thank you for your advice. For my question, i think i provided enough information. FYI, I only want to find out if it is possible for NHibernate to generate sql commands (CRUD). I will open another question if I have problem in my code.
You haven't shown much code to go off of, but with Hibernate, generally speaking, all you have to do is:
start transaction
load your objects
make whatever changes to the loaded objects
commit transaction
Note the lack of any calls to Save() or SaveOrUpdate(). After loading an object using a Hibernate Session, that object is still attached to the session, and any changes made to it will propagate back to the database upon commit. New objects added into an existing (persistent) graph will need to be saved, either directly by passing the object to Session.Save() or by having it cascade along a relationship.
Thank you for your advice. Do I need to perform all the operations (load data from db, make changes to the graph of objects) within a session.
You can only do things with a session in Hibernate. If you meant "within a transaction", then yes, many people consider it best practice to put all persistence-related operations, including reads, in a transaction.
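The same unit-of-work behavior exists in other ORMs; since the thread shows no runnable code, here is a sketch of the identical pattern in Python with SQLAlchemy (an analogous ORM, not NHibernate itself; the Report mapped class and module path are hypothetical):
```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from myapp.models import Report  # hypothetical mapped class

engine = create_engine("sqlite:///reports.db")

with Session(engine) as session, session.begin():
    # Load: the object becomes "attached" (tracked by the session).
    report = session.get(Report, 1)

    # Modify: no explicit save call is needed for tracked objects.
    report.parent = None
    child = Report(parent=report)   # brand-new object...
    session.add(child)              # ...must be added (or cascaded)

    # On leaving session.begin(), the unit of work computes the needed
    # INSERT/UPDATE/DELETE statements and orders them to satisfy FKs.
```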
|
STACK_EXCHANGE
|
August 18th, 2007, 12:40 PM
MS Office XP problems
Hi, this is the scenario:
Windows 2000 SP4 Fully patched.
MS Office XP Professional (2002) Fully patched.
Attempting to start any Office application brings up the (Office) "safe mode" prompt. If you accept, the app opens in safe mode; if you decline, it opens in normal mode.
Looking in "event viewer" only shows the Error Events 2000 and 2001 depending on whether you accept or reject the safe mode start prompt. Clearing the event logs has no effect.
Going into "Accessories" and "System Information" lets you display information about the Office 10 suite modules that have been installed. The last item is "Office Event/Application Fault" This is empty.
Going into "Microsoft Office Tools" then "Office Application Recovery" has no effect.
Strangely, when you start "Detect and Repair" for an application from within that application, it installs MS Publisher 2002, no matter what the original application was!
Now, I am no expert on the "safe mode" that was introduced with Office XP (or 2002 or 10.0; they are the same suite), but my understanding is that it exists to work around start-up errors and omits some plug-ins and features. Going into "Help About" and then "Disabled Items" shows that there are no disabled items, which sort of implies that it didn't find anything in an error condition.
Anyone got any ideas?
The weird thing with it constantly wanting to install Publisher (which was already there) got me wondering if it wasn't some sort of access path corruption?
More out of hope than experience or expectation, I opened Windows Explorer, navigated to C:\Program Files\Microsoft Office\Office 10\, and fired up the executables for each module.
That seems to have fixed the problem, as I can open them from there, and by going into <Start> <Programs>. I did not consciously reset any access paths or Registry values, but it seems to have fixed itself?
Whether it can withstand a reboot is the next question.
August 19th, 2007, 08:29 PM
I guess the first place to start would be the path statement to the Office apps, making sure that /safe is not appended at the end.
August 19th, 2007, 09:30 PM
I can see where you are coming from, but it leaves a few unexplained issues:
1. What caused it to happen to all the Office apps in the first place?
2. Why did it always try to reinstall just the Publisher app?
3. Why, if you rejected the safe mode start, did it open normally rather than exiting?
August 20th, 2007, 02:09 PM
1) It's MS - it does strange things by default.
2) See 1
3) That's what the option is for: safe mode, or the alternative, normal mode.
Just back from hols so will check those errors in more detail later.
August 20th, 2007, 08:59 PM
Yeah, you're right; it was the only thing I could think of at the time. Probably the best bet will be to extract HKLM\Software\Microsoft\Office and cross-compare, or just rip it out and replace it with a working reg file.
August 22nd, 2007, 05:44 AM
I was under the impression that when the application prompted for safe mode it was either "yes" or it exited? It would seem to go against the whole concept if it let you open an unstable system in normal mode?
Microsoft touted the idea that it would let you continue working safely until you fixed the problem.
I think I now have an idea of the probable cause. It is my wife's machine, and she was going through one of those CD-based Office courses. Now, the CD is interactive, read-only, and for Office 2000.
OK, Office XP can handle that in its native environment, but it may well have been confused by the CD medium, which it couldn't offer to convert?
|
OPCFW_CODE
|
1 OF 8 DECODER LOGIC DIAGRAM
SN54/74LS138 1-OF-8 DECODER/DEMULTIPLEXER
The LS138 is a high-speed 1-of-8 decoder/demultiplexer fabricated with the low-power Schottky barrier diode process. The decoder accepts three binary weighted inputs (A0, A1, A2) and, when enabled, provides eight mutually exclusive active-LOW outputs (O0–O7). The LS138 features three Enable inputs.
MC74HC138A: 1-of-8 Decoder/Demultiplexer
Designing of 3 to 8 Line Decoder and Demultiplexer Using
3-Line to 8-Line Decoder. This decoder circuit gives 8 logic outputs for 3 inputs and has an enable pin. The circuit is designed with AND and NAND logic gates. It takes 3 binary inputs and activates one of the eight outputs. A 3-to-8 line decoder circuit is also called a binary-to-octal decoder.
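As an illustrative sketch (not from the cited page), the behaviour of a 3-to-8 decoder with an enable pin can be modelled in a few lines of Python:
def decode_3to8(a2, a1, a0, enable=1):
    # Outputs are modelled as active HIGH here for simplicity;
    # the LS138's real outputs are active LOW.
    outputs = [0] * 8
    if enable:
        outputs[(a2 << 2) | (a1 << 1) | a0] = 1
    return outputs

# Input 101 (binary 5) activates output line 5.
print(decode_3to8(1, 0, 1))  # [0, 0, 0, 0, 0, 1, 0, 0]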
Encoder and Decoder in Digital Electronics with Diagram
8-to-3 Line Encoder Truth Table: In the truth table of the encoder, only one input line is activated to logic 1 at any particular time; otherwise, the circuit has no meaning. There are 2^8 = 256 possible combinations, but only 8 input combinations are used.
Digital Circuits - Decoders - Tutorialspoint
The block diagram of a 3-to-8 decoder built from 2-to-4 decoders is shown in the following figure. The parallel inputs A1 & A0 are applied to each 2-to-4 decoder. The complement of input A2 is connected to the Enable, E, of the lower 2-to-4 decoder in order to get the outputs Y3 to Y0.
Decoder | Combinational Logic Functions | Electronics Textbook
In a similar fashion a 3-to-8 line decoder can be made from a 1-to-2 line decoder and a 2-to-4 line decoder, and a 4-to-16 line decoder can be made from two 2-to-4 line decoders. You might also consider making a 2-to-4 decoder ladder from 1-to-2 decoder ladders. If you do it might look something like this: For some logic it may..
Encoder | Combinational Logic Functions | Electronics Textbook
An encoder is a circuit that changes a set of signals into a code. Let’s begin making a 2-to-1 line encoder truth table by reversing the 1-to-2 decoder truth table. This truth table is a little short. A complete truth table would be.
Digital Circuits - Encoders - Tutorialspoint
An octal-to-binary Encoder has eight inputs, Y7 to Y0, and three outputs, A2, A1 & A0. An octal-to-binary encoder is nothing but an 8-to-3 encoder. The block diagram of the octal-to-binary Encoder is shown in the following figure. At any time, only one of these eight inputs can be active.
Logic Diagram For 3 8 Decoder - Wiring diagram, what else?
Logic Diagram For 3-8 Decoder: The name "Decoder" means to translate or decode coded information from one format into another, so a binary decoder transforms "n" binary input signals into an equivalent code using 2^n outputs. Binary decoders are another type of digital logic device.
|
OPCFW_CODE
|
Why does Angular 2 have JiT compilation for templates?
What is the rationale behind having JiT compilation for Angular 2 HTML templates in the browser during run-time?
I know that Ahead-of-Time compilation exists to address this problem, and that it improves the start-up performance drastically.
I'm not asking if I should use JiT or AoT compilation.
The TypeScript compiler is capable of compiling JSX; does that mean that someday we are getting the same support for Angular 2 templates as a replacement for @angular/compiler-cli?
production
This is required if components are created dynamically at runtime, for example when the template markup is loaded from a database.
I think such an approach should be avoided but there are use cases that are difficult to solve otherwise.
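A rough sketch of what that pattern looks like with the JiT compiler (using the pre-Ivy Compiler service from @angular/core; treat this as illustrative, not canonical):
import { Compiler, Component, NgModule, ViewContainerRef } from '@angular/core';

// Compile markup fetched at runtime (e.g. from a database) into a component.
function renderDynamic(html: string, compiler: Compiler, host: ViewContainerRef) {
  @Component({ template: html })
  class DynamicComponent {}

  @NgModule({ declarations: [DynamicComponent] })
  class DynamicModule {}

  return compiler.compileModuleAndAllComponentsAsync(DynamicModule).then(mod => {
    const factory = mod.componentFactories
      .find(f => f.componentType === DynamicComponent)!;
    return host.createComponent(factory);
  });
}
Note that this only works when the JiT compiler is shipped in the bundle; in an AoT build the injected Compiler is a stub that throws at runtime.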
There were also discussions that AoT causes larger code size for some applications, which eats into the shorter initialization time gained with AoT-compiled components.
Which option is best for your use case depends on your application and also on the optimizations the Angular2 team will be able to accomplish (I'm pretty sure there are lots of ideas they are experimenting with to get smaller build output and shorter initialization time).
See also How to realize website with hundreds of pages in Angular2
development
It is also convenient during development because edit-reload cycles are faster, but for production (deployment) you usually want AoT.
Thanks for the answer! This sounds like a very specific use-case for having JiT as a default way to serve templates.
It's convenient during development because edit-reload cycles are faster, but for production (deployment) you usually want AoT.
This way of putting it sounds more reasonable to me; could you please update your answer?
This is all great, but I still do not get why JIT cannot simply be used in the bundling process. Why do we need a completely different stack for AoT? Also, for AoT you have to write your code differently than for JIT. I think this is a bad developer experience. I just cannot get any info on the background of this added (seemingly unnecessary) complexity. So is there an explanation?
@Szobi You are correct. JIT is a term borrowed from the programming language space (not the framework space) that Angular uses by analogy. Unfortunately, the analogy is incorrect because, as you point out, you have to write different code. Long story short is it is a design question with a subjective answer. Since this bifurcation is by no means a common practice, it is all the more incumbent on you to ask yourself if you agree with the tradeoffs of this design decision and how you feel about the implications as to the overall design that it speaks to.
That's because Angular also uses the term "compilation" (for compiling templates). "Different code" is IMHO also misleading: AoT only supports a subset of TS/JS because it needs to evaluate the code's result without actually executing the code, and TS/JS doesn't provide any support for that. Dart, for example, requires the expressions to be const, which means they can be evaluated statically; this makes it clear which subset of the language can be used.
|
STACK_EXCHANGE
|
can we use dokdo without qiime files
Hi @sbslee ,
Dokdo works great with qzv files,
Can we also use Dokdo for relative abundance data from txt files, instead of qiime's qzv files?
Thanks,
@khemlalnirmalkar, that's an interesting suggestion.
Q1. If it's not a visualization file from QIIME 2, may I ask how you are generating your "txt files"?
Q2. I suppose a user can provide a pandas.DataFrame object as input instead of a qiime2.Visualization object. Do you think that will be sufficient for your case?
Ans1: The data is from shotgun sequences, but it's already in relative abundance for taxonomy. It should not matter whether it is 16S or shotgun.
Ans2: I was thinking of trying the same. I will give it a try and see if it works or not. Dokdo is simple and easy to use... I'm thinking of using it for plots from all my shotgun relative abundance data.
Thanks
@khemlalnirmalkar, I get it now. Thanks for the answers. As for Q2, the current dokdo.taxa_abundance_bar_plot method won't accept pandas.DataFrame yet. I will create a development branch and try to implement the function to do just that. Will let you know here when it's done. In the meantime, you're more than welcome to tweak it around yourself as well. You may find better solution :)
@sbslee That will be great, thank you so much.
@khemlalnirmalkar,
Great news! I was able to update the dokdo.taxa_abundance_bar_plot method to accept a pandas.DataFrame as input. This was actually pretty easy because internally the method already extracts a .csv file from the QIIME 2 visualization and then converts it to a pandas.DataFrame. Therefore, all I needed to do was skip this part when the input is already a pandas.DataFrame. One thing to note is that the level option will be ignored, so the user should know which taxonomic level their input file was created from.
This update has been implemented in the 1.11.0-dev branch. In the future, I think it's possible to extend some of the Dokdo methods to support shotgun data in the same manner. Give it a try and let me know what you think.
import pandas as pd
import dokdo
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
dokdo.taxa_abundance_bar_plot('taxa-bar-plots.qzv',
figsize=(10, 7),
level=6,
count=8,
legend_short=True,
artist_kwargs=dict(show_legend=True,
legend_loc='upper left'))
plt.tight_layout()
plt.savefig('Input_Visualization.png')
df = pd.read_csv('level-6.csv', index_col=0)
dokdo.taxa_abundance_bar_plot(df,
figsize=(10, 7),
count=8,
legend_short=True,
artist_kwargs=dict(show_legend=True,
legend_loc='upper left'))
plt.tight_layout()
plt.savefig('Input_DataFrame.png')
test2.csv
Hi @sbslee ,
Thanks for making this change and your support.
I tried this with my data and it didn't go well;
I got an error:
TypeError: no numeric data to plot
This error doesn't make sense; probably something I am missing.
I attached one of my test files here.
I have ~5k taxa, but it didn't work, not even with the test file.
Please can you have a look? My entire dataset has the same format as this test file.
Thanks,
Khem
@khemlalnirmalkar,
That's because in your current file, it's difficult to distinguish between data columns (e.g. 'Prevotella_species') vs. metadata columns (e.g. 'Group'). In the QIIME 2 visualization file, data columns are indicated by the presence of two consecutive underscores (__). For example, your Prevotella_species column would be s__Prevotella_species for species and g__Prevotella_species for genus.
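For example, a quick pandas sketch for adding such a prefix (assuming, hypothetically, that every data column is at the species level and that 'Group' is your only metadata column) could look like:
import pandas as pd

df = pd.read_csv('test2.csv', index_col=0)
metadata_cols = ['Group']  # hypothetical metadata columns in your file
df = df.rename(columns={c: 's__' + c for c in df.columns
                        if c not in metadata_cols})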
That being said, when I changed the column names (e.g. Prevotella_species to s__Prevotella_species), it worked:
import pandas as pd
import dokdo
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
df = pd.read_csv('test2-modified.csv', index_col=0)
dokdo.taxa_abundance_bar_plot(df)
plt.savefig('test.png')
test2-modified.csv
Can you try this and let me know if it works?
Here's another example CSV file which you can use as a template.
level-6.csv
@sbslee Thanks for checking the file. I will make these changes to my original dataset and will let you know soon.
I already tried with the QIIME 2 example file, and it worked. Sorry, I forgot to mention that earlier. If you want, you can close the issue.
Thanks a lot,
Khem
No worries. Please feel free to reopen this issue if there is any problem.
@sbslee It's working, thank you so much.
I hope in the future you can add more cool plots and types of analyses for shotgun data :)
Khem
|
GITHUB_ARCHIVE
|
Needlessly Large Theater:
This site was created for the Riot Games API Challenge 2.0. The project compares item purchase data from the patch before the AP item change (patch 5.11) to data from the newly altered AP items (patch 5.14), for both ranked and normal games. I used PHP and MySQL to gather the data from Riot's servers, along with Angular and JS/CSS/HTML to create the front end (visual side) of the site.
Needlessly Large Theater not only presents data on purchased items before and after the item update, focusing on AP items that had raw stat changes (an AP increase or decrease); I took the most purchased items from 120,000 games across the KR/NA/EUW regions and both patches. The site also provides an area where users can get a more accurate visual representation of how much damage their champion will do after one spell rotation now that Needlessly Large Rod (NLR) has lost 20 AP. Knowing how much damage your champion can output is an important tool in League: not only does it let you play aggressively to try to kill your opponent, it also helps you avoid unnecessary risks that could leave the enemy escaping with a sliver of health while the enemy jungler comes into lane to punish your aggression. I think this is a helpful tool for those who may have taken a break from League and are coming back to find that they are doing a fair amount less damage to their enemies. Unfortunately I was only able to start with a small selection of AP champions, so I chose a few of my favorites as examples. I chose to model Needlessly Large Theater after a movie theater to keep things interesting for myself and the user; I didn't want to copy the standard formula of many other League stat sites, but rather try a new, fun approach.
(There'll be sections throughout that are listed as "FOR THE UNRANKED:"; these are explanations aimed at those who may not be as familiar with coding, but are still interested in the process of how the site was built.)
Tech Stack
Front end: jQuery, Twitter Bootstrap, Animate.css, Chart.js, scrollNav.js, Angular, jquery.dataTables.css
Back end: PHP with MySQL
Basic php template
The PHP files were each matched with a specific JSON file from the regions listed above. After being created and matched, each PHP file would decode the JSON file and plug the match ID into the Riot API URL. Once the match ID and data are found, I stored all the items purchased by each player in the game in an array, which would then check and store every item in one table; if the item code matched a specified AP item, it would record that in another table in MySQL. Each time the PHP file was called it checked and stored the data of 300 games. To increase the number of games, one could simply change the $scv for loop to whatever number is appropriate. (Another table was used to keep track of which game the PHP file last read, so the next run starts from the next game in line.) Getting the ranked stats for specific divisions works very similarly, just checking the rank of the player first before sorting the data into each table. (Very few Master and Challenger players, but that's expected, I guess.)
FOR THE UNRANKED p1:
I basically created automated jobs that search the match histories Riot gave me and tally up each time specific AP items (chosen by myself) are purchased. This isn't as easy as looking for the name of the item; instead, codes correspond to specific items, and by going to the Riot API and checking the static data you can see the corresponding code for each item in the game.
Example: Rabadon's Deathcap = 3089. Example: Luden's Echo = 3285. So my code takes all the item numbers for a player, checks whether each one matches any of the specified AP items, and if it does, essentially adds a tally mark in a table in a separate database tool called MySQL.
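A stripped-down sketch of what such a tallying job might look like (the table name, file name, and match JSON layout here are illustrative assumptions, not the project's actual code):
<?php
// Hypothetical AP item codes to tally (IDs from Riot's static data).
$apItems = array(3089 => "Rabadon's Deathcap", 3285 => "Luden's Echo");

$match = json_decode(file_get_contents('match.json'), true);

$pdo = new PDO('mysql:host=localhost;dbname=nlt', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO item_tallies (item_id, purchases) VALUES (?, 1)
     ON DUPLICATE KEY UPDATE purchases = purchases + 1'
);

foreach ($match['participants'] as $p) {
    // Riot exposes each player's final items as item0..item6.
    for ($i = 0; $i <= 6; $i++) {
        $itemId = isset($p['stats']['item' . $i]) ? $p['stats']['item' . $i] : 0;
        if (isset($apItems[$itemId])) {
            $stmt->execute(array($itemId)); // one tally mark per purchase
        }
    }
}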
I used Angular views for each framed picture loaded in the lobby, and scrollNav.js as a way to let the user navigate through the different regions/pages of the site. The majority of the scripts that run across both pages are functions stored in the "changeController" scope; using the ng-click directive I called the functions from that controller's scope. Animate.css was used to create the animations each time the pictures changed. All the views used can be found within the partials folder. Whenever I wanted to add another navigation poster I would add another HTML file to the partials folder and add JS code to the "updatedAng" file; a simple counter keeps track of which view is currently selected and loops when it reaches the highest number with a corresponding view (for this project, 3). The same concept is used for the graphs, only with the graphSelector and gameType vars tracking which region the data is from and whether it is from normal or ranked games. Chart.js was then used to plug the data in and organize it.
Using scrollNav, I wanted to let users navigate around the page as though they were in an actual movie theater lobby; when they are ready to view footage of champions they can move into the main theater, where they can find footage comparing the old NLR (with a bonus 20 AP) to the new one (with 20 less AP). Originally I was going to do this for every item with altered AP, but I realized that wouldn't be possible in the timeframe, so I decided to do some core mid-lane champions and Needlessly Large Rod as a proof of concept.
FOR THE UNRANKED p2:
For those reading who aren't as familiar with coding: to make the scroll effect that happens each time you click a link on the page, I used a library that lets me dictate the location I want the browser to scroll to, as well as the specifications of that location, like how tall or short I want it. I used the Angular framework (think of it as an extra toolbox) as a way to load completely separate HTML files within the page without leaving/refreshing, using a tool called the Angular view. This is not the only way to achieve the effect of the champion posters changing, but I wanted to practice with it, as I wasn't very familiar with it beforehand. Angular also let me store commands that activate when a specific event occurs; in the case of the first page, I have a list of commands that activate when the button beneath the posters is clicked. The commands boil down to: move the first poster/view one way; once that action is complete, load the next poster/view; then move that poster up from the bottom of the page. I did this to create the effect of actually moving around the theater from poster to poster.
Video Footage
Getting the videos was supposed to be the easier part, but I actually had trouble locking down people to get footage (I understand completely, as it was a pretty boring process). I wanted things to be as consistent as possible, so I had my friend stay on Ashe with ADC runes (flat MR blues) and 21-9 masteries, taking health when he could (fairly standard). I put my masteries and runes in Veigar's movie room section (at the top). There is one mistake: my friend took a flask instead of Doran's in the Fizz video by mistake, and I noticed too late. I used probuilds.com as a reference for which skills to max for each champion. It was interesting to see the differences between some champions' damage (yet as an ADC main, pretty sad). Once the footage was done I cut it all into small clips using Premiere Pro and uploaded them to YouTube. Each champion is an anchor that links to their corresponding YouTube videos.
Overall I learned a lot and got a lot of practice with tools I wasn't familiar with, so that already feels like a win :D. Anyway, I hope everyone enjoys!
|
OPCFW_CODE
|
// Tests: Algorithms for basic mathematical operations
using System;
using FluentAssertions;
using NUnit.Framework;
namespace AlgoLib.Maths
{
[TestFixture]
public class MathsTests
{
#region GCD
[Test]
public void GCD_WhenNumbersAreComposite_ThenGCD()
{
// when
int result = Maths.GCD(161, 46);
// then
result.Should().Be(23);
}
[Test]
public void GCD_WhenNumbersArePrime_ThenOne()
{
// when
long result = Maths.GCD(127L, 41L);
// then
result.Should().Be(1L);
}
[Test]
public void GCD_WhenNumbersAreMutuallyPrime_ThenOne()
{
// when
int result = Maths.GCD(119, 57);
// then
result.Should().Be(1);
}
[Test]
public void GCD_WhenOneOfNumbersIsMultipleOfAnother_ThenLessNumber()
{
// given
int number = 34;
// when
int result = Maths.GCD(number, number * 6);
// then
result.Should().Be(number);
}
[Test]
public void GCD_WhenOneOfNumbersIsZero_ThenAnotherNumber()
{
// given
int number = 96;
// when
int result = Maths.GCD(number, 0);
// then
result.Should().Be(number);
}
#endregion
#region LCM
[Test]
public void LCM_WhenNumbersAreComposite_ThenLCM()
{
// when
int result = Maths.LCM(161, 46);
// then
result.Should().Be(322);
}
[Test]
public void LCM_WhenNumbersArePrime_ThenProduct()
{
// when
long result = Maths.LCM(127L, 41L);
// then
result.Should().Be(5207L);
}
[Test]
public void LCM_WhenNumbersAreMutuallyPrime_ThenProduct()
{
// when
int result = Maths.LCM(119, 57);
// then
result.Should().Be(6783);
}
[Test]
public void LCM_WhenOneOfNumbersIsMultipleOfAnother_ThenGreaterNumber()
{
// given
int number = 34;
// when
int result = Maths.LCM(number, number * 6);
// then
result.Should().Be(number * 6);
}
[Test]
public void LCM_WhenOneOfNumbersIsZero_ThenZero()
{
// when
int result = Maths.LCM(96, 0);
// then
result.Should().Be(0);
}
#endregion
#region Multiply
[Test]
public void Multiply_WhenFirstFactorIsZero_ThenZero()
{
// when
int result = Maths.Multiply(0, 14);
// then
result.Should().Be(0);
}
[Test]
public void Multiply_WhenSecondFactorIsZero_ThenZero()
{
// when
int result = Maths.Multiply(14, 0);
// then
result.Should().Be(0);
}
[Test]
public void Multiply_WhenFactorsAreZero_ThenZero()
{
// when
int result = Maths.Multiply(0, 0);
// then
result.Should().Be(0);
}
[Test]
public void Multiply_WhenFactorsArePositive_ThenResultIsPositive()
{
// when
long result = Maths.Multiply(3, 10);
// then
result.Should().Be(30);
}
[Test]
public void Multiply_WhenFirstFactorIsNegativeAndSecondFactorIsPositive_ThenResultIsNegative()
{
// when
int result = Maths.Multiply(-3, 10);
// then
result.Should().Be(-30);
}
[Test]
public void Multiply_WhenFirstFactorIsPositiveAndSecondFactorIsNegative_ThenResultIsNegative()
{
// when
int result = Maths.Multiply(3, -10);
// then
result.Should().Be(-30);
}
[Test]
public void Multiply_WhenFactorsAreNegative_ThenResultIsPositive()
{
// when
long result = Maths.Multiply(-3L, -10L);
// then
result.Should().Be(30L);
}
[Test]
public void Multiply_WhenModuloAndFactorsArePositive()
{
// when
int result = Maths.Multiply(547, 312, 10000);
// then
result.Should().Be(664);
}
[Test]
public void Multiply_WhenModuloIsPositiveAndFirstFactorIsNegative()
{
// when
int result = Maths.Multiply(-547, 312, 10000);
// then
result.Should().Be(9336);
}
[Test]
public void Multiply_WhenModuloIsPositiveAndSecondFactorIsNegative()
{
// when
int result = Maths.Multiply(547, -312, 10000);
// then
result.Should().Be(9336);
}
[Test]
public void Multiply_WhenModuloIsPositiveAndFactorsAreNegative()
{
// when
long result = Maths.Multiply(-547L, -312L, 10000L);
// then
result.Should().Be(664L);
}
[Test]
public void Multiply_WhenModuloIsNegative_ThenArithmeticException()
{
// when
Action action = () => _ = Maths.Multiply(547, 312, -10000);
// then
action.Should().Throw<ArithmeticException>();
}
#endregion
#region Power
[Test]
public void Power_WhenBaseIsZero_ThenZero()
{
// when
int result = Maths.Power(0, 14);
// then
result.Should().Be(0);
}
[Test]
public void Power_WhenExponentIsZero_ThenOne()
{
// when
int result = Maths.Power(14, 0);
// then
result.Should().Be(1);
}
[Test]
public void Power_WhenBaseAndExponentAreZero_ThenNotFiniteNumberException()
{
// when
Action action = () => _ = Maths.Power(0, 0);
// then
action.Should().Throw<NotFiniteNumberException>();
}
[Test]
public void Power_WhenBaseAndExponentArePositive_ThenResultIsPositive()
{
// when
int result = Maths.Power(3, 10);
// then
result.Should().Be(59049);
}
[Test]
public void Power_WhenBaseIsNegativeAndExponentIsEven_ThenResultIsPositive()
{
// when
int result = Maths.Power(-3, 10);
// then
result.Should().Be(59049);
}
[Test]
public void Power_WhenBaseIsNegativeAndExponentIsOdd_ThenResultIsNegative()
{
// when
long result = Maths.Power(-3L, 9L);
// then
result.Should().Be(-19683L);
}
[Test]
public void Power_WhenExponentIsNegative_ThenArithmeticException()
{
// when
Action action = () => _ = Maths.Power(3, -10);
// then
action.Should().Throw<ArithmeticException>();
}
[Test]
public void Power_WhenModuloAndBaseArePositive()
{
// when
int result = Maths.Power(5, 11, 10000);
// then
result.Should().Be(8125);
}
[Test]
public void Power_WhenModuloIsPositiveAndBaseIsNegativeAndExponentIsOdd()
{
// when
int result = Maths.Power(-5, 11, 10000);
// then
result.Should().Be(1875);
}
[Test]
public void Power_WhenModuloIsPositiveAndBaseIsNegativeAndExponentIsEven()
{
// when
long result = Maths.Power(-5L, 12L, 10000L);
// then
result.Should().Be(625L);
}
[Test]
public void Power_WhenModuloIsNegative_ThenArithmeticException()
{
// when
Action action = () => _ = Maths.Power(5, 11, -10000);
// then
action.Should().Throw<ArithmeticException>();
}
#endregion
}
}
|
STACK_EDU
|
What’s big data? How helpful is training data? What can make it problematic?
Professors Carl T. Bergstrom and Jevin D. West are so concerned about misinformation that they wrote a book about it. In Calling Bullshit, they argue that big data can foster bullshit because it can incorporate poor training data and find illusory connections by chance.
Continue reading to learn about a problem with big data that should have everyone’s attention.
The Bullshit Problem With Big Data
Big data refers to a technological discipline that deals with exceptionally large and complex data sets using advanced analytics. Bergstrom and West explain how big data generates computer programs. They relate that researchers input an enormous amount of labeled training data into an initial learning algorithm.
For instance, if they were using big data to create a program that could accurately guess people’s ages from pictures, they would feed the learning algorithm pictures of people that included their age. Then, by establishing connections between these training data, the learning algorithm generates a new program for predicting people’s ages. If all goes well, this program will be able to correctly assess new test data—in this case, unfamiliar pictures of people whose ages it attempts to predict. But, a major problem with big data can arise when training data is flawed.
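To make the train/test distinction concrete, here is a minimal sketch in Python; the arrays are synthetic stand-ins for labeled photographs, not real data:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
images = rng.normal(size=(500, 64))                    # stand-in pixel features
ages = images[:, 0] * 10 + 40 + rng.normal(size=500)   # synthetic age labels

# Learn connections from labeled training data, then assess unfamiliar test data.
X_train, X_test, y_train, y_test = train_test_split(images, ages, random_state=0)
model = Ridge().fit(X_train, y_train)
print(round(model.score(X_test, y_test), 2))           # performance on unseen examples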
(Shortform note: ChatGPT, a chatbot launched by OpenAI in November 2022, is itself a byproduct of big-data-fueled machine learning, as it processed an immense amount of training text to create coherent sequences of words in response to test data (inquiries from users). The widespread success of ChatGPT—and other related large language models—suggest that although Bergstrom and West may be correct that big data can propagate bullshit, it can also create revolutionary forms of artificial intelligence whose impact is felt worldwide.)
Bergstrom and West argue that flawed training data can lead to bullshit programs. For example, imagine that we used big data to develop a program that allegedly can predict someone's socioeconomic status based on their facial structure, using profile pictures from Facebook as our training data. One reason this training data could be flawed is that people from higher socioeconomic backgrounds typically own better cameras and thus have higher-resolution profile pictures. Thus, our program might not be directly identifying socioeconomic status but rather camera resolution. In turn, when exposed to test data not sourced from Facebook, the big data program would likely fail to identify socioeconomic status.
(Shortform note: These bullshit programs can perpetuate discrimination in the real world, as illustrated by Amazon’s applicant-evaluation tool that consistently discriminated against women in 2018. As training data, Amazon had exposed AI to resumés from overwhelmingly male candidates in the past, leading its AI program to display prejudice towards male applicants in the test data—that is, when reviewing current applicants’ resumés. For instance, it punished resumés that even included the term “women,” as in “women’s health group,” and it learned to discredit applicants from certain all-women’s universities.)
In addition, Bergstrom and West point out that, when given enough training data, these big data programs will often find chance connections that don’t apply to test data. For instance, imagine that we created a big data program that aimed to predict the presidential election based on the frequency of certain keywords in Facebook posts. Given enough Facebook posts, chance connections between certain terms may appear to predict election outcomes. For example, it’s possible that posts including “Tom Brady” have historically predicted Republican victories, just because the Patriots have happened to win on the verge of Republican presidential elections.
(Shortform note: One way to identify chance connections versus genuine causal connections is to seek out confounding variables—a third factor that explains the chance connection between two variables. For example, the number of master’s degrees issued and box office revenues have been tightly correlated since the early 1900s, but this correlation is likely due to a third factor—population growth—that is driving increases in both master’s degrees and box office revenues.)
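A tiny simulation illustrates the chance-connection problem; everything below is synthetic, invented purely for illustration:
import numpy as np

rng = np.random.default_rng(42)
elections = rng.integers(0, 2, size=10)          # 10 past election outcomes
keywords = rng.integers(0, 2, size=(5000, 10))   # 5000 random "keyword" signals

# Count keywords that happened to "predict" at least 9 of 10 outcomes.
matches = (keywords == elections).sum(axis=1)
print((matches >= 9).sum())  # typically dozens of purely spurious "predictors"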
|
OPCFW_CODE
|
//
// Window.swift
// UIKit-Plus
//
// Created by Mihael Isaev on 13.08.2020.
//
#if os(macOS)
import Cocoa
public class Window: AppBuilderContent {
public var appBuilderContent: AppBuilderItem { .windows([self]) }
public var windows: [Window] { [self] }
public let window: NSWindow
lazy var _backgroundColorState: State<UColor> = .init(wrappedValue: UColor.init(window.backgroundColor))
// public init (_ viewController: () -> ViewController?) {
// if let viewController = viewController() {
// window = .init(contentViewController: viewController)
// } else {
// window = .init()
// }
// }
public init (_ viewController: (() -> NSViewController)? = nil) {
if let viewController = viewController?() {
window = .init(contentViewController: viewController)
} else {
window = .init()
}
}
@discardableResult
open func body(@BodyBuilder block: BodyBuilder.SingleView) -> Self {
window.contentView?.body { block() }
return self
}
// MARK: Frame
public func size(_ value: NSRect, display: Bool = true) -> Self {
window.setFrame(value, display: display)
return self
}
// MARK: Size
public func size(_ value: NSSize, display: Bool = true) -> Self {
size(value.width, value.height, display: display)
}
public func size(_ value: CGFloat, display: Bool = true) -> Self {
size(value, value, display: display)
}
public func size(_ width: CGFloat, _ height: CGFloat, display: Bool = true) -> Self {
window.setFrame(.init(origin: window.frame.origin, size: .init(width: width, height: height)), display: display)
return self
}
// MARK: Point
public func origin(_ value: NSPoint) -> Self {
window.setFrameOrigin(value)
return self
}
// MARK: Title
public func title(_ value: String) -> Self {
window.title = value
return self
}
// MARK: Title Visibility
public func titleVisibility(_ value: NSWindow.TitleVisibility = .visible) -> Self {
window.titleVisibility = value
return self
}
// MARK: Title Transparency
public func titlebarAppearsTransparent(_ value: Bool = true) -> Self {
window.titlebarAppearsTransparent = value
return self
}
// MARK: Represented URL
public func representedURL(_ value: URL?) -> Self {
window.representedURL = value
return self
}
// MARK: Represented Filename
public func representedFilename(_ value: String, asTitle: Bool = false) -> Self {
window.representedFilename = value
if asTitle {
window.setTitleWithRepresentedFilename(window.representedFilename)
}
return self
}
// MARK: Excluded From Windows Menu
public func excludedFromWindowsMenu(_ value: Bool = true) -> Self {
window.isExcludedFromWindowsMenu = value
return self
}
// MARK: Movable
public func movable(_ value: Bool = true) -> Self {
window.isMovable = value
return self
}
// MARK: Movable by Window Background
public func movableByWindowBackground(_ value: Bool = true) -> Self {
window.isMovableByWindowBackground = value
return self
}
// MARK: Hides On Deactivate
public func hidesOnDeactivate(_ value: Bool = true) -> Self {
window.hidesOnDeactivate = value
return self
}
// MARK: Can Hide
public func canHide(_ value: Bool = true) -> Self {
window.canHide = value
return self
}
// MARK: Center
public func center() -> Self {
window.center()
return self
}
// MARK: Make Key And Order Front
public func makeKeyAndOrderFront() -> Self {
window.makeKeyAndOrderFront(nil)
return self
}
// MARK: Order Front
public func orderFront() -> Self {
window.orderFront(nil)
return self
}
// MARK: Order Back
public func orderBack() -> Self {
window.orderBack(nil)
return self
}
// MARK: Order Out
public func orderOut() -> Self {
window.orderOut(nil)
return self
}
// MARK: Order
public func order(_ place: NSWindow.OrderingMode, relativeTo otherWin: Int) -> Self {
window.order(place, relativeTo: otherWin)
return self
}
// MARK: Order Front Regardless
public func orderFrontRegardless() -> Self {
window.orderFrontRegardless()
return self
}
// MARK: Miniwindow Image
public func miniwindowImage(_ value: NSImage?) -> Self {
window.miniwindowImage = value
return self
}
// MARK: Miniwindow Title
public func miniwindowTitle(_ value: String) -> Self {
window.miniwindowTitle = value
return self
}
// MARK: Document Edited
public func documentEdited(_ value: Bool = true) -> Self {
window.isDocumentEdited = value
return self
}
// MARK: Make Key
public func makeKey() -> Self {
window.makeKey()
return self
}
// MARK: Make Main
public func makeMain() -> Self {
window.makeMain()
return self
}
// MARK: Become Key
public func becomeKey() -> Self {
window.becomeKey()
return self
}
// MARK: Resign Key
public func resignKey() -> Self {
window.resignKey()
return self
}
// MARK: Become Main
public func becomeMain() -> Self {
window.becomeMain()
return self
}
// MARK: Resign Main
public func resignMain() -> Self {
window.resignMain()
return self
}
// MARK: Prevents Application Termination When Modal
public func preventsApplicationTerminationWhenModal(_ value: Bool = true) -> Self {
window.preventsApplicationTerminationWhenModal = value
return self
}
// MARK: Allows Tool Tips When Application Is Inactive
public func allowsToolTipsWhenApplicationIsInactive(_ value: Bool = true) -> Self {
window.allowsToolTipsWhenApplicationIsInactive = value
return self
}
// MARK: Level
public func level(_ value: NSWindow.Level) -> Self {
window.level = value
return self
}
// MARK: Depth Limit
public func depthLimit(_ value: NSWindow.Depth) -> Self {
window.depthLimit = value
return self
}
// MARK: Dynamic Depth Limit
public func dynamicDepthLimit(_ value: Bool = true) -> Self {
window.setDynamicDepthLimit(value)
return self
}
// MARK: Has Shadow
public func hasShadow(_ value: Bool = true) -> Self {
window.hasShadow = value
return self
}
// MARK: Alpha
public func alpha(_ value: CGFloat) -> Self {
window.alphaValue = value
return self
}
// MARK: Opaque
public func opaque(_ value: Bool = true) -> Self {
window.isOpaque = value
return self
}
// MARK: Sharing Type
public func sharingType(_ value: NSWindow.SharingType) -> Self {
window.sharingType = value
return self
}
// MARK: Allows Concurrent View Drawing
public func allowsConcurrentViewDrawing(_ value: Bool = true) -> Self {
window.allowsConcurrentViewDrawing = value
return self
}
// MARK: Displays When Screen Profile Changes
public func displaysWhenScreenProfileChanges(_ value: Bool = true) -> Self {
window.displaysWhenScreenProfileChanges = value
return self
}
// MARK: Disable Screen Updates Until Flush
public func disableScreenUpdatesUntilFlush() -> Self {
window.disableScreenUpdatesUntilFlush()
return self
}
// MARK: Can Become Visible Without Login
public func canBecomeVisibleWithoutLogin(_ value: Bool = true) -> Self {
window.canBecomeVisibleWithoutLogin = value
return self
}
// MARK: Min Size
public func minSize(_ value: NSSize) -> Self {
window.minSize = value
return self
}
// MARK: Max Size
public func maxSize(_ value: NSSize) -> Self {
window.maxSize = value
return self
}
// MARK: Content Min Size
public func contentMinSize(_ value: NSSize) -> Self {
window.contentMinSize = value
return self
}
// MARK: Content Max Size
public func contentMaxSize(_ value: NSSize) -> Self {
window.contentMaxSize = value
return self
}
// MARK: Min Full Screen Content Size
public func minFullScreenContentSize(_ value: NSSize) -> Self {
window.minFullScreenContentSize = value
return self
}
// MARK: Max Full Screen Content Size
public func maxFullScreenContentSize(_ value: NSSize) -> Self {
window.maxFullScreenContentSize = value
return self
}
// MARK: Color Space
public func colorSpace(_ value: NSColorSpace?) -> Self {
window.colorSpace = value
return self
}
// MARK: Toolbar
// open var toolbar: NSToolbar? // TODO: with block builder
public func toolbar(_ value: NSToolbar?) -> Self {
window.toolbar = value
return self
}
// MARK: Shows Toolbar Button
public func showsToolbarButton(_ value: Bool = true) -> Self {
window.showsToolbarButton = value
return self
}
// MARK: Tabbing Mode
public func tabbingMode(_ value: NSWindow.TabbingMode) -> Self {
window.tabbingMode = value
return self
}
// MARK: Tabbing Identifier
public func tabbingIdentifier(_ value: NSWindow.TabbingIdentifier) -> Self {
window.tabbingIdentifier = value
return self
}
// MARK: Style Mask
public func styleMask(_ value: NSWindow.StyleMask...) -> Self {
window.styleMask = .init(value)
return self
}
// MARK: Backing Type
public func backingType(_ value: NSWindow.BackingStoreType) -> Self {
window.backingType = value
return self
}
// MARK: Hide Standard Button
public func hideStandardButtons(_ type: NSWindow.ButtonType..., hide: Bool = true) -> Self {
type.forEach {
window.standardWindowButton($0)?.isHidden = hide
}
return self
}
}
extension Window: _BackgroundColorable {
func _setBackgroundColor(_ v: NSColor?) {
window.backgroundColor = v
}
}
#endif
|
STACK_EDU
|
EclipseLink Documentation Requirements
This page captures the requirements for future versions of the EclipseLink User Guide (ELUG). This is a work in progress. See Release 1.0 Doc Plan for background on the original ELUG plan.
These requirements are for the consumers of the EclipseLink documentation which includes both the community using EclipseLink from eclipse.org as well as companies who redistribute EclipseLink and want to include or link to the ELUG.
U1. Version Specific
Each release and patch-set of EclipseLink should be able to offer ELUG content that is version specific.
- Provide links to ELUG for a specific version that navigates only to pages with content for this version
- Each release notes page (wiki) will link to the main
U2. Technology Specific Documents/Books
The documentation must be organized into technology (persistence service) specific document sets/books.
- JPA: Content focused on using EclipseLink JPA along with its advanced features
- Native API usage shown only in the
- MOXy: Content focused on EclipseLink MOXy usage including JAXB annotations, native eclipselink-oxm.xml and native API configuration and usage
- SDO: Content focused on EclipseLink SDO
- May make sense to combine with MOXy
- DAS (SDO-JPA bridge using MOXy) should be covered and can link to JPA content as necessary
Common Requirements for all sets/books:
- The content of each set/book can share content but should be presented in the context of the technology being covered and avoid generic content containing links to its use in a variety of technologies - we should discuss this one
It would be nice to enable the version specific ELUG content to be consumable into other documentation sets. Products and other projects that include EclipseLink should be able to consume the specific version of the ELUG matching the version they include.
Example: Oracle TopLink, Oracle WebLogic, GlassFish, and SAP NetWeaver do or will ship EclipseLink. Ideally these projects/products want to embed the version specific ELUG into their overall documentation set so that their end users have a consistent experience using their documentation.
- Support producing an HTML and/or PDF documentation output for a specific EclipseLink version that consumers can use
- License the content under a license that enables consumers to include it; the Eclipse Foundation has endorsed a Creative Commons license for non-code artifacts, and we need to use an approved license.
Nice to Have:
- Enable consumers to skin the documentation for their own Look & Feel
- Enable consumers to include and cross-link into the content from their own content that wraps the ELUG
- Enable packaging as Eclipse IDE on-line help for inclusion with distributions that include EclipseLink components.
- Support producing an HTML and/or PDF documentation for a specific EclipseLink functional area.
This section captures the requirements of the content producers. The goal is to streamline content development addressing the complex nature of the software spanning multiple technologies with shared infrastructure and thus common functionality.
The content must be developed in an open fashion so that any interested party can participate in the development and ongoing maintenance of the ELUG.
- Content must be developed in a repository that is generally accessible over the internet
- Content stored in an "open" format (i.e., XML)
- Content format conforms to an established standard (DITA or Docbook?)
- Approved content developers from any organization must be able to have write access to the repository. Generally this involves contributors providing documentation fixes and enhancements through bugs and the document authors being project committers having write access to the content repository
- Alternatively, allow project committers to provide doc updates directly to the repository, similar to how we currently allow any committer to update the end-user wiki content.
The content authors need to be able to manage the content in an efficient manner.
Nice To Have:
- Single repository where all content is managed and authors simply add and enhance without duplication of effort
- Content is tagged with the versions it is relevant to. This enables new content to be added for older functionality and have it picked up in all applicable versions of ELUG when next generated.
- Also allows end users to dynamically query the repository and build "custom" deliverables on-the-fly
- Content is stored by topic (i.e., information-centric) rather than by page or chapter (i.e., book-centric)
- Content authors (contributors) should be able to use any tool that can produce valid XML for storage in the repository.
- Need a tool that can query the repository and "build" end-user deliverables
Open Source tools
- EclipseLink User Guide (ELUG) documentation will be stored in SVN as XML files.
- Files should conform to DocBook DTD.
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">
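For orientation, a minimal chapter file conforming to the DocBook 4.1.2 DTD might look like the following (the id values and titles are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">
<chapter id="elug-sample-chapter">
  <title>Sample Chapter</title>
  <sect1 id="elug-sample-section">
    <title>Sample Section</title>
    <para>Chapter body text goes here.</para>
  </sect1>
</chapter>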
Contributors can use any application to edit or update the documentation files, as long as the application's XSLT is compatible with the DTD. The recommended tool is OpenOffice.
- Install the necessary import and export XSLT stylesheets (available from OpenOffice.org).
- Obtain the OpenOffice.org Template required for DocBook Article and Chapter documents from OpenOffice.org.
- Create a new DocBook filter:
- Go to Tools -> XML Filter Settings...
- Set Filter Name and Name of File Type to DocBook (Chapter)
- Go to the Transformation tab
- Set DocType to <chapter>
- For XSLT for Export browse to the chapter export stylesheet (sofftodocbookheadings_chapter.xsl).
- For XSLT for Import browse to the chapter import stylesheet (docbooktosoffheadings.xsl).
- For Template for Import browse to the style template (DocBookTemplate.stw).
- Click OK and close the XSLT Filter Setting dialog
- Obtain the need XML files from SVN (i.e., the chapters).
- Create a new OpenOffice Master Document (.ODM).
- Add the chapters to the Master Document.
- Edit the individual chapters, including the styles, as necessary. When saving the chapter files, OpenOffice may display the following message:
Be sure to select Keep Current Format.
- Alternatively, you can select Save As... and select DocBookXML from the Save As dialog.
- Check-in the XML files to SVN.
Do not check-in your Master Document file (.ODM)
|
OPCFW_CODE
|
The complexity of critical systems our lives depend on (such as water supplies, power grids, blockchain systems, etc.) is constantly increasing. Although many different techniques can be used for proving correctness of these systems, errors still exist, because these techniques are either not complete or can only be applied to some parts of these systems. This is why fault and intrusion tolerance (FIT) techniques, such as those along the well-known Byzantine Fault Tolerance (BFT) paradigm, should be used.
BFT is a general FIT technique of the active replication class, which enables the seamless correct functioning of a system even when some parts of that system are not working correctly or are compromised by successful attacks. Although powerful, since it systematically masks any errors, standard (i.e., "homogeneous") BFT protocols are expensive in terms of the messages exchanged and the required number of replicas, with the additional burden of ensuring the replicas are diverse enough to enforce failure independence. For example, standard BFT protocols usually require 3f+1 replicas to tolerate up to f faults.
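The 3f+1 bound can be recovered from a standard quorum-intersection argument (sketched here for context; this derivation is not quoted from the thesis). With quorums of size $q = n - f$, so that progress remains possible while $f$ replicas stay silent, any two quorums intersect in at least $2q - n$ replicas, and that intersection must contain at least one correct replica despite $f$ Byzantine ones:
$$2(n - f) - n \ge f + 1 \quad\Longrightarrow\quad n \ge 3f + 1.$$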
In contrast to these standard protocols based on homogeneous system models, the so-called hybrid BFT protocols are based on architectural hybridization: well-defined and self-contained subsystems of the architecture (hybrids) follow system model and fault assumptions differentiated from the rest of the architecture (the normal part). This way, they can host one or more components trusted to provide, in a trustworthy way, stronger properties than would be possible in the normal part. For example, it is typical that whilst the normal part is asynchronous and suffers arbitrary faults, the hybrids are synchronous and fail-silent. Under these favorable conditions, they can reliably provide simple but effective services such as perfect failure detection, counters, ordering, signatures, voting, global timestamping, random numbers, etc. Thanks to the systematic assistance of these trusted-trustworthy components in protocol execution, hybrid BFT protocols dramatically reduce the cost of BFT. For example, hybrid BFT protocols require 2f+1 replicas instead of 3f+1 to tolerate up to f faults.
Although hybrid BFT protocols significantly decrease message/time/space complexity vs. homogeneous ones, they also increase structural complexity, and as such the probability of finding errors in these protocols increases. One other fundamental correctness issue, not formally addressed previously, is ensuring that safety and liveness properties of trusted-trustworthy component services, besides being valid inside the hybrid subsystems, are made available, or lifted, to user components at the normal asynchronous and arbitrary-on-failure distributed system level.
This thesis presents a theorem-prover based, general, reusable and extensible framework for implementing and proving correctness of synchronous and asynchronous homogeneous FIT protocols, as well as hybrid ones. Our framework comes with: (1) a logic to reason about homogeneous/hybrid fault-models; (2) a language to implement systems as collections of interacting homogeneous/hybrid components; and (3) a knowledge theory to reason about crash/Byzantine homogeneous and hybrid systems at a high-level of abstraction, thereby allowing reusing proofs, and capturing the high-level logic of distributed systems. In addition, our framework supports the lifting of properties of trusted-trustworthy components, first to the level of the local subsystem the trusted component belongs to, and then to the level of the distributed system. As case studies and proofs-of-concept of our findings, we verified seminal protocols from each of the relevant categories: the asynchronous PBFT protocol, two variants of the synchronous SM protocol, as well as two versions of hybrid MinBFT protocol.
|
OPCFW_CODE
|
var ResultView = function(){
//this.resultData = {};
this.render = function() {
var user = service.currentUser;
var lang = new Lang(user.language);
var isLoggedIn = user.name !== "";
this.$el.html(this.template({lang:lang, user: user, isLoggedIn: isLoggedIn, header:{main:lang.resultHeader}}));
$('main', this.$el).html(this.innerTpl({user:user, lang:lang, header:{main:lang.resultHeader}}));
return this;
};
this.renderSideNav = function(){
var user = service.currentUser;
var lang = new Lang(user.language);
var isLoggedIn = user.name !== "";
return this.sideNavTpl({lang:lang, user: user, isLoggedIn: isLoggedIn});
};
this.initialize = function () {
// Define a div wrapper for the view (used to attach events)
this.$el = $('<div class="content-holder"/>');
//this.$el.on('keyup', '.search-key', this.findByName);
//this.render();
};
this.continueRendering = function(){
var user = service.currentUser;
var lang = new Lang(user.language);
var results = this.resultData || JSON.parse(window.localStorage.getItem("elefindResult"));
var that = this;
console.log("render results");
//result: [{src, title, author, score, vis, date, ect}]
$("tbody#result-list").html("");
for(var x in results){
var entry = results[x];
if(entry.vis == "public"){
entry.src = "../server/storage/"+"public_photos/thumbnail/"+entry.filename;
entry.orig = "../server/storage/"+"public_photos/"+entry.filename;
}else{
entry.src = "../server/storage/"+"users/"+user.email+"/photos/thumbnail/"+entry.filename;
entry.orig = "../server/storage/"+"users/"+user.email+"/photos/"+entry.filename;
}
var score = parseFloat(results[x].score);
results[x].score = score.toFixed(3);
$("tbody#result-list").append(ResultView.prototype.resultEntryTpl(results[x]));
}
$("tbody#result-list tr").click(function(){
var img = $(this.children[0].children[0]);
//var img = $(this).children('img');
//console.log(img);
var tmpl = that.imageViewTpl({
src: img.attr("data-orig"),
author: img.attr("data-author"),
title: img.attr("data-caption"),
date: img.attr("data-date"),
lang:lang //could be a shorthand here... I didn't even know I was using ES6
});
$("#image-view-wrapper").html(tmpl);
renderImageView();
});
$( window ).resize(function() {
resizeGallery(that);
});
};
this.initialize();
};
//ResultView.
var renderResults = function(results){
console.log("render results");
//result: [{src, title, author, score, vis, date, ect}]
$("tbody#result-list").html("");
//var tpl = Handlebars.compile($("#result-tr").html()); can't compile? why?
for(var x in results){
var entry = results[x];
if(entry.vis == "public"){
entry.src = "../server/storage/"+"public_photos/"+entry.filename;
}
$("tbody#result-list").append(ResultView.prototype.resultEntryTpl(results[x])); //it seems that I can;t use jquery selectors here... Hmmm, why?
}
};
|
STACK_EDU
|
This is a reloaded version of my previous blog - or a continuation of the fork.
The first thing to start with is to explain why I did something as counter-productive as relocating the blog and losing all the audience - however small it may have been.
It would seem that I like restarts and things reloaded. I picked a new country to become a new home for me and my family (if emigration is not a restart, what else is?). Back in 2004 we started Thinknostic to be "Montage Reloaded" - a new incarnation of the company I worked for before and liked a lot. The same story repeats with blogging.
When I started blogging back in 2006, it was Thinknostic's second year and we had started to really grow: we got our own space beyond the small downtown sales office we had before, built some serious hardware infrastructure, employee numbers started to go into two-digit territory, and we landed our first $1M+ project. My blog at http://thinkwrap.wordpress.com/ was our unofficial presence in the social space. In 2006 I picked the login and account name "thinkwrap" because it was - at that time - a word that kind of expressed the approach we were using: something between a methodology, best practices, and a toolset.
Three years later, the same word, ThinkWrap, was selected as the new brand when Thinknostic in Ottawa and Pentura Solutions in Toronto (both "second life" companies of Montage origin) merged. Now suddenly, with "thinkwrap" in the URL, my blog became a whole lot more company-bound than I wanted. As everybody can see from the dropping rate of contributions, I found it pretty hard to post. I was never quite sure whether I really wanted to present my personal opinion under the ThinkWrap brand as something the company would be saying. Even worse, we were now several times the size we were before, with a headcount of about 50 - who was I to speak for all these people, when there were so many smarter, more talented, and more experienced than myself?
As a result, the whole blog thing came to a big halt - no posts for over 6 months. The only solution I could come up with was a restart. Thus, a new, real corporate blog was created that is way more than a single guy's opinion. I am one of the contributors, meaning that you will find posts by myself but also by Nael, Milos, Mike, and a few more great and talented guys we are happy to have as part of ThinkWrap. And more will certainly come. See for yourself at http://blog.thinkwrap.com/.
For my personal stuff, I have exported and reimported the content into a different WordPress blog and hooked it under a domain that clearly indicates it is my personal blog and personal opinion. Occasionally, I may decide to crosspost to the corporate blog as well, but most of my opinions, suggestions, rants, jokes, book recommendations, and crazy ideas will not go beyond this one :-).
So, I now have a new place that completes my virtual existence. You can also find me on Twitter and Facebook (albeit I try hard to keep my Facebook friends a subset of the people I have met in real life).
Have a good rest of the year and - as my German-speaking friends would say - "Einen guten Rutsch"!
Author Miro Adamy
License (c) 2006-2020 Miro Adamy
|
OPCFW_CODE
|
Javascript inside XSLT possible?
Is it possible to insert Javascript code inside XSLT file?
XSLT:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<head>
</head>
<body>
<span id="Intro" style="font-family:Calibri;" ></span>
--OTHER XSLT CONTENT HERE--
<script>
document.getElementById('Intro').innerHTML= new Date();
</script>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
The resulting HTML file doesn't have the date displayed inside the span.
UPDATE: this is not to be rendered by a client. Rather, it's being sent as the body of an email. Does that change things?
<script> is not self-closing, and adding a /> doesn't change that...
Thanks dandavis. I also tried the case with
It's not quite clear if you expect the result of your transformation to contain the date output or if the script does not get executed when the html document is rendered in a browser.
Well, if the XSLT posted were executed anywhere then <xsl:template match=""> surely would give an error. As for the script, it is script in the HTML result of the XSLT transformation, it is currently not clear whether you use the result in a browser where script is executed.
Updated, Martin. In the program it's actually specified, but I took that piece out to make the question more general
"The resulting HTML file doesn't have the date displayed inside the span." The resulting HTML file is not supposed to have the date displayed inside the span.That will be inserted by the browser dynamically when rendering the file to screen. If you want the resulting file to contain an actual date, it will by necessity be a static date - the date when the XSL transformation took place.
Re: "The resulting HTML file doesn't have the date displayed inside the span." XSLT doesn't know anything about JavaScript, so no one expects an XSLT processor to perform a second pass on the final result and invoke JavaScript. Neither would an XSLT processor invoke a browser passing to it the result of the transformation -- this, if necessary at all, is the responsibility of the invoker of the transformation.
No self-respecting mail client is going to run javascript on a received email, otherwise spammers wouldn't need to trick you into clicking links.
Yes, it is possible to have JavaScript inside of your XSLT.
However, there can be complications when the JavaScript contains characters such as <. You can jump through hoops and play games with CDATA to overcome this, but when possible, it is best to externalize the JavaScript and reference it with <script src="url/to/your.js"></script>.
Do that for static JS and have small sections of dynamic JavaScript content with variables and arrays of objects that will be used to invoke the static externally loaded JS.
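For illustration, a minimal sketch of that split inside the template - the file name static.js, the initPage function, and the select path are assumptions, not from the original question:

<script src="js/static.js"></script>
<script>
// small dynamic section: the transform injects server-side data
// into variables that the external, static JS then consumes
var pageData = { title: "<xsl:value-of select='/page/@title'/>" };
initPage(pageData); // initPage is assumed to live in static.js
</script>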
|
STACK_EXCHANGE
|
A quick update on WYSIFTW, my "augmented wikitext" editor. (Please see
Wikitext support is nearing completion. I added bold/italics a few
days ago, and yesterday it got some buttons to apply/remove such
markup from a selection. Just a few minutes ago, I finished wikitable
support - you can now edit text in table cells, in the same table
layout and style you see in the real article (though you cannot alter
the table or cell markup itself, add/remove rows, etc., which can
later be achieved through buttons in the sidebar or similar).
As of this moment, lists, indentations, <nowiki>, <pre>, and "---"
(<hr>) are not supported. These shouldn't be too difficult, compared
to the things already done.
I have taken great care to avoid unnecessary changes in the wikitext
being introduced through the parsing/unparsing process. I am not 100%
successful, but after test-loading dozens of random pages, as well as
a few of my standard tests (including [[Paris]] and [[Berlin]]), these
events seem to be rare, and do not appear to break valid wiki syntax.
If you find a page (it will warn you in the sidebar after parsing),
please add it to
The editing components have improved as well, but are far from the
usual Word-like capabilities. No cut/copy/paste, and no undo. The
former should be easy to do, at least for plain text; the latter will
require "recording" of all editing actions, which sounds like work to
Speed has become an issue. I work with Chrome 10 on a not-too-old
iMac, so even behemoths like [[Paris]] are parsed in <20sec. However,
I have heard reports about times of >200sec, which is clearly too much
(20 sec is as well, IMO). A large chunk of the time seems to come from
bold/italics parsing, which can include up to four separate parsing
steps in my implementation. There is clearly room for improvement, but
I hesitate to optimize until all major features (e.g. lists) are
implemented, and I have some standard test pages available. I am also
thinking about using Selenium once WYSIFTW is feature-complete (as far
as wikitext goes).
There is the question of what browsers/versions to test for. Should I
invest large amounts of time optimising performance in Firefox 3, when
FF4 will probably be released before WYSIFTW, and everyone and their
cousin upgrades? As a one-man-show, I have to think about these things.
Finally, there are, undoubtedly, a large number of bugs hidden in the
code. I assume they will be weeded out, given enough eyeballs (testers).
That wasn't as quick as I said in the first line of this mail. OTOH,
it's past midnight here (again!), and I'm getting too old for this...
|
OPCFW_CODE
|
What is the real position on the pilegesh expressed in the Shulchan Arukh?
Here is what Rabbi Yoel Lieberman says in the article present at the link
https://www.yeshiva.co/ask/8800
"We must immediately say, that the issue of Pilegesh is far from an issue which just has fallen out of custom, but rather one which is brought down in Shulchan Aruch who sees the option of Pilegesh as an absolute prohibition. The Rema who on the one hand quotes a more lenient position, also quotes a more severe opinion based on the Rambam, Tur, and Rosh, that this is an absolute Torah prohibition "There must not be any prostitutes among Israelite girls" (Devarim 23:18). "
However, it does not seem to me that this is actually the case.
We read, in fact, in Shulchan Arukh (the English translation is from the Sefaria site; I do not know whether it is faithful to the original Hebrew text):
Even HaEzer 26:1
"A woman is not considered to be married except by way of betrothal in which the kosher betrothal was done appropriately. However, if he were to lie with her by way of harlotry, without the name of betrothal, it is nothing (towards her status as being a married woman). Even if he lies with her with the intent of marriage, mutually agreed between him and her, she is not considered as his wife and even if she dedicated herself only for him, rather the opposite is true and he must be forced (by Beis Din) to send her away from his home."
Rema:" For certainly she would be considered an embarrassment for immersion in a mikveh and he will lie with her in ritual impurity (niddah); however, if she dedicates herself exclusively for him as his wife and she immerses for him, there are those who would say that this is allowed and she would be a Pilegesh as described in the Torah and there are those who say that this is forbidden and they should both get whiplashes from the Torah as they have transgressed the precept "don't be a kedesha"
The Rema clearly explains that this passage is related to the risk of transgressing the laws of niddah, while the case of the pilegesh, albeit controversial, does not concern this question, since it is assumed that the pilegesh goes to the mikveh, hers being a public relationship.
The following passage seems to me to confirm that the passage in Even HaEzer 26:1 has the meaning the Rema attributes to it:
Even HaEzer 15:30
"One who had a pilegesh (commonly translated as concubine, a woman with whom one lives but does not necessarily have a full contractual, formal marriage lacking kiddushin, acquisition and engagement, or ketuba, contract of marital obligations, which of the two precisely is a dispute in the Talmud and Rishonim. The permissibility of a pilegesh is also a dispute. In this case, the pilegesh is a partner, a woman one is living with and the binding legality of the relationship between the two is in question) and it was not testified that he betrothed her (via formal kiddushin) - she is permitted to his relatives. However, if there were witnesses that the (pilegesh) woman (herself) said: "he betrothed me before two witnesses," she is forbidden to his relatives. However, if she said: "he betrothed me," simply, and she did not say: "before two witnesses," there is nothing (of relevance to court) in her words."
Here Maran points out that the halachic lawfulness of this relationship is controversial, but he treats it as a concrete possibility that a Jewish man has a pilegesh ("One who had a pilegesh"), without stating that this is halachically prohibited.
Therefore, in my opinion, the Shulchan Aruch cannot be said to prohibit the pilegesh, since Maran leaves this question open.
However, I ask myself: is the English translation of Even HaEzer 15:30 on the Sefaria site correct? I am thinking primarily of the passage from "commonly translated as concubine" to "between the two is in question".
The parenthetical explanation of a pilegesh in the Sefaria text is not part of the original text; it was added by the translator to explain what a pilegesh is.
The law in 15:30 is discussing the case of someone who did have a pilegesh, which is obviously something people do, allowed or not. You cannot conclude from this passage that a pilegesh is allowed.
That said, I do agree with you that the Rema in 26:1 is explaining the reasoning of the Shulchan Aruch, and that it is incorrect to say that the Shulchan Aruch rules that a pilegesh is categorically prohibited.
Thanks a lot for your important clarification. I consider it very serious that Sefaria has inserted this comment without making it clear that it is not part of the text of Shulchan Arukh. I wrote to them to ask for rectification.
I obtained from Sefaria the elimination of the part not present in the text.
|
STACK_EXCHANGE
|
In this post I am going to explain how to concatenate two or more videos. This operation, sometimes referred to as "joining" or "merging" videos, can be a bit complex depending on which videos we want to merge - on their formats (codecs) and their resolutions.
Simple Join of two videos
We start with this "simple join". We call it that because it is an easy operation. We can join two (or more) input videos in avi or mpg format with this command:
ffmpeg -i "concat:video1.avi|video2.avi" output_video.avi
With other formats (like the popular mp4, webm or mkv) it is not possible to merge videos like this, because of how those containers work. That's why we prefer another method: the ffmpeg concat filter.
Using the concat filter
We can call this the complex way of merging files, as the command we use is a little longer, but with it you can merge video files of any type (it doesn't even matter that they have different formats or codecs), as ffmpeg will encode the result. Use the command:
ffmpeg -i video1.avi -i video2.avi -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_video.avi
We are using the parameters:
-filter_complex with the filter specification in double quotes (they are needed):
[0:v:0] [0:a:0] [1:v:0] [1:a:0] this part tells the filter to use the video stream from the first file “[0:v:0]”, the audio stream from the first file “[0:a:0]”, the video stream of the second file “[1:v:0]” and the audio stream from the second file “[1:a:0]”.
Note that if any of the files does not have an audio or video stream, we should not include it here.
And also note that if we are merging more than two files, we should include the streams from the third, fourth, or nth files.
Also note that the first file is 0 (in [0:v:0], the first 0 indicates the first file, v the video stream, and the second 0 the first video stream of that file).
concat=n=2:v=1:a=1 [v] [a] this part tells ffmpeg to use the concat filter. With n=2 we are telling the concat filter that we have 2 input files (so we have to change it if we have more), each with one video stream (v=1) and one audio stream (a=1). Finally, we include the [v] and [a] so ffmpeg can use the resulting video ([v]) and audio ([a]) streams in later operations.
The -map "[v]" -map "[a]" parameters tell ffmpeg to use the resulting [v] and [a] streams from the concat filter for the output, rather than the input files.
output_video.avi is the name of the resulting video. Note that we can include any format or encoding parameter before it. Since the concat filter makes ffmpeg encode the result, we can transcode the files to whatever format and codec we want.
With the ffmpeg concat filter we can merge two or more files without worrying about their formats or codecs, so it is our favourite method to join videos.
So what command should I use if I want to merge 3 files?
ffmpeg -i video1.avi -i video2.mp4 -i video3.webm -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_video.mp4
Note that we have included "[2:v:0] [2:a:0]" and changed n=2 to n=3. In this case we have included 3 files, each of a different format.
And if I want to merge files without audio?
ffmpeg -i video1.avi -i video2.gif -filter_complex "[0:v:0] [1:v:0] concat=n=2:v=1 [v]" -map "[v]" output_video.mp4
Note that in this case we are not using the audio streams (the [0:a:0] and [1:a:0] parts are missing), we have not included a=1 after the concat part, and we haven't included -map "[a]" either.
This is the command to use when we want to merge two files without audio streams, like joining two animated gif files.
Merging videos with different resolution
Now, what happens if we have input video files with different resolutions, and/or different aspect ratios? Well, the ffmpeg concat filter will show an error like this in this situation:
[Parsed_concat_0 @ 0x3c000a0] Input link in1:v0 parameters (size 1280x720, SAR 1:1) do not match the corresponding output link in0:v0 parameters (460x460, SAR 1:1)
[Parsed_concat_0 @ 0x3c000a0] Failed to configure output pad on Parsed_concat_0
Error configuring complex filters.
Invalid argument
So before trying to join several video files we must make sure they all have the same size. If they do not, we can always resize them before joining. Read how to resize a video file with ffmpeg.
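Alternatively, a hedged sketch of doing the resize and the join in one pass - the 1280x720 target is just an example, and setsar=1 normalizes the sample aspect ratio so the concat filter accepts both inputs:

ffmpeg -i video1.avi -i video2.mp4 -filter_complex "[0:v:0]scale=1280:720,setsar=1[v0]; [1:v:0]scale=1280:720,setsar=1[v1]; [v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output_video.mp4

Note that a plain scale may distort the picture if the aspect ratios differ; padding or cropping the inputs first avoids that.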
Did you find this article helpful?
|
OPCFW_CODE
|
Can this be considered as a case of fraud?
I recently got into an argument about whether submitting the following letter to an institution could be regarded as fraud:
To Whomsoever It May Concern
This is to certify that X is pursuing a Minor specialization in our department.
The courses offered by our department that have been taken by X are as follows:
A
B
C
Signed, Head of the Department
Now, B and C are courses that X has taken as part of the Minor specialization, and A is a course that X has taken additionally but that is not part of the Minor. One point of view is that the statement about the list of courses merely provides a sufficient reason for the first statement - in which case there is nothing fraudulent. Another interpretation is that the statement about the list of courses implies that these courses were taken as part of the Minor - in which case the document is obviously a case of fraud.
I would like to know which interpretation is correct - whether one should assume that the second statement provides a sufficient reason for the first, or a necessary one.
Wouldn't it be simpler just to mention the first statement? If they need to know specific courses taken, use a transcript format, not a listing in a letter.
@Brandin Agreed. But as mentioned, since it is a Minor specialization, it would take a little effort for someone to find all the courses of the relevant department from the general transcript. The transcript doesn't mention them separately. So, I was thinking of attaching such a letter along with the official transcript.
The inference that "X has taken A,B,C" means "A,B,C are required for the program" is not reasonable. Even if the recipient cared whether A is part of the program, and even if the writer were mistaken about whether one of the courses had been taken, there is no intent to deceive and thus no fraud.
@user6726, agreed! You could reword it deceptively but would have to try pretty hard!
The second statement is completely independent of the first.
This is to certify that X is pursuing a Minor specialization in our department**.**[PERIOD]
The courses offered by our department that have been taken by X are as follows:[...]
As long as both of these statements are true, it is not deceptive, therefore not fraudulent.
A statement is only fraudulent if the misstatement is material to the person who receives it.
I can imagine a situation in which someone actually has completed a minor and has actually taken the classes indicated, where the fact that one of them was taken for a purpose other than qualification for the minor would be material, although it is pretty esoteric.
This would be in the case of a transfer student in which the accepting institution only recognizes minor specializations with at least 18 credit hours of classes taken to qualify, but the sending institution recognizes minor specializations with 15 credit hours of classes to recognize a minor specialization, and the extra course is included to mislead the accepting institution to recognize a minor specialization that does not actually meet its rules for recognition. In that case, the misleading inclusion of the extra course would be material and it would therefore be fraudulent (although the injury suffered from the fraud would also sometimes be quite speculative, one might have to show that the transfer student would otherwise have to pay more tuition to graduate with this credential from the accepting institution, for example).
In almost any other case, the inclusion of the extra course would not be material and therefore, it would not be fraudulent.
Also, it should be noted that there is an intent requirement. Badly worded does not necessarily mean fraud. But if the intent was to deceive the Registrar's Office with respect to the student's graduation eligibility, there might be fraud in that action.
|
STACK_EXCHANGE
|
I’d like to lay out a couple of thoughts here to discuss further to get more insight from people with different mindsets / opinions / facts / views. Reasons are mostly selfish: I know what I don’t know and I’d like to understand it better.
Folds and Braids
As Clojure developers we are in this “Cult of Simple”. Simple “was defined to us” 🙂 as one fold/braid/twist, which does not mean “one thing”, and is really about the interleaving, not the cardinality. We also know that simple is “objective”, we can look at it and see the “number of folds”.
I don’t think it’s such a bad cult to be in, but.. It seems that the above “simple definition of simple” is now taken to an extreme where, as definition, it is interleaved / complected with real problems and is losing its intended power.
The definition of simple is now used as a shield, rather than a tool.
It might have to do with the very subjective definition of "one fold". It is easier to understand what "one fold" is when thinking about "primitives": i.e. Rich's example of "Sets vs. Lists", where Sets are simpl(er) since Lists introduce order. But it is not as clear what "one fold" is in more "complected" problems, whether these are business problems or tool libraries.
Keep Your Functions Close, but “Just In Case” Closer
Clojure protocols are super powerful, and I would say “simple”.
I don’t think Clojure records are powerful, but good (in my experience) for type driven polymorphism, and they hold fields a bit more efficiently than maps. In reality “type driven polymorphism” would be the only reason I would use them for.
If “type driven polymorphism” is all that’s needed I would first reach out to deftype instead of records, since records complect data with types: two folds 🙂
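To make that concrete, a minimal sketch of type-driven dispatch with deftype - the protocol and type names here are illustrative, not from any real codebase:

;; one protocol, two plain types: dispatch on type, no record machinery needed
(defprotocol Render
  (render [this]))

(deftype Circle [r]
  Render
  (render [_] (str "circle of radius " r)))

(deftype Square [side]
  Render
  (render [_] (str "square of side " side)))

(render (->Circle 2)) ;; => "circle of radius 2"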
However, since intuitively, building solutions with protocols feels a lot more extensible, robust and flexible, I think they get applied where a simple set of functions over multiple namespaces should have been used instead. This problem, I think, is caused by the invisible seduction of “easy”, which is defined to us as “lie near”:
“since my solution should be robust and extensible, I’ll use records and protocols, since I know they will make it so“.
In other words, for initial application design: “records and protocols lie near”.
That is not to say that protocols or types should be avoided, quite the contrary, dynamic type dispatch, libraries with (internal) abstractions, host interop: all great cases to use and love them.
But I don’t think it is wise, in the Clojure Universe, to make people create records / types and use protocols when they need to use your library to develop products. This is not the absolute truth, but for most cases, when developing a business application, I would rather use a Clojure function: one fold 🙂
Humor Driven Development
I spent a couple of years working in Scala. An implicit type in Scala was one of the most common causes of confusion. It is used everywhere internally in the language itself, and is advocated for use in everyday Scala programs. This and many other examples teach us that "implicit" is "complex". The flip side of that is where the problem lies. In Clojure "formal circles" the "inferential" belief is that "explicit" is "simple".
I believe that neither implicit nor explicit can be applied to simple without a context. And no, a “well implied implicit” does not mean complex. And no, explicit does not mean simple.
For example, I need to create a local scope and bind some values: if I am given a choice to write an identity monad or to use a Clojure let binding, I would choose the let binding, because it is a great syntactic implicit. An identity monad would also work, but being explicit here does not buy me any simplicity.
An interesting quality of a good implicit, by the way, is “automatic” understanding of what’s implied: that’s how we laugh 🙂
Respect and Doubts
This brings me to the overuse of "explicit formalism" in Clojure. On one hand it cannot be subjectively complex, since we know that "simple" is objective. On the other hand it can, because a "single fold" can be defined very differently in an explicitly formal solution by me and by the people who created it.
I say: let’s listen to each other, rather than teach and preach.
|
OPCFW_CODE
|
See Conversion Probability data
To open the Conversion Probability report:
- Sign in to Google Analytics.
- Navigate to your view.
- Open Reports.
- Select Audience > Behavior > Conversion Probability.
Conversion Probability data is delayed by 24 hours: this report depends on complete processing of the daily-aggregate tables.
If a reporting view does not meet the prerequisites for data, then the report is not visible in Analytics.
About Conversion Probability
Using the same data modeling techniques that determine Smart Lists and Smart Goals, Analytics calculates the % Conversion Probability dimension and the Average Conversion Probability metric to determine a user’s likelihood to convert during the next 30 days. Transactions for each user are evaluated, and the resulting probability of conversion is expressed as an average score of 1-100 for all users during the date range, with 1 being the least probable and 100 being the most probable. A value of 0 indicates that conversion probability is not calculated for the selected time range.
% Conversion Probability is calculated for individual users.
Average Conversion Probability is calculated for all users related to a dimension for the date range you’re using, for example:
- The score for all users where Channel = Organic Search during January 1 - January 31
- The score for all users where Source = Google during January 1 - January 31
The % Conversion Probability dimension, with ranges as dimension values, is available in the Conversion Probability report, and in Analytics segments, remarketing audiences, and custom reports.
The Avg. Conversion Probability metric is available in custom reports.
In order to calculate the dimension and metric, Analytics needs the following:
- A minimum of 1000 Ecommerce transactions per month in the reporting view. (You must have implemented Ecommerce Tracking).
- Once you reach the initial threshold of 1000 ecommerce transactions, Analytics then needs 30 days of data to model.
- If after modeling the data Analytics is not confident in the accuracy of the results, then Conversion Probability data will not be available for that reporting view.
If the number of transactions in the reporting view falls below 1000 per month, then Analytics uses the last good model to generate data for the report.
The Conversion Probability report
The Conversion Probability report lets you see:
- The distribution of sessions, and sessions with and without transactions, across conversion-probability buckets (histogram) (for example, the number of sessions where user values for % Conversion Probability ranged from 21-50)
- Acquisition, behavior, and conversion metrics for users across the dimensions of Default Channel Grouping, Source, and Medium (table)
Using conversion-probability data
With segments, you can look at any of your data in the context of conversion-probability thresholds. For example, you can create a segment for % Conversion Probability > 25, and then examine things like:
- How your users who demonstrate a strong likelihood to convert compare with your overall user base. Do they represent a small fraction of your users, or do your advertising and site combine to engage a large percentage of your users?
- Which channels, keywords, and campaigns deliver highly engaged users.
- Which conversion paths are most effective, and where along the path can you deliver the most effective advertising.
Conversely, you can use a low threshold to examine the opposite end of the user spectrum:
- What percentage of your users are less likely to convert?
- Are the keywords and campaigns that draw users who are unlikely to convert different from the ones that draw more valuable users? If they are, does it make sense to devote less budget to them?
- Which conversion paths do the lower-scoring users follow? Are there opportunities along those paths to deliver more effective marketing?
Users who are on the threshold of converting are more easily convinced to complete those conversions. For example, users who have studied product details or added items to their carts have given strong signals that they’re already taking ownership of those products. A persuasive follow-up from you via a well-crafted remarketing campaign can provide that last nudge they need to complete the process.
Creating remarketing audiences based on your users who are more likely to convert and publishing those audiences to your various marketing platforms like Google Ads and Display & Video 360 lets you re-engage them everywhere you have an online presence.
You can also publish these audiences to Optimize so that you can understand exactly which refinements to your site content deliver the highest likelihood of conversion.
|
OPCFW_CODE
|
M: Ask HN: PebbleTime epaper refresh rate improved? Or illusion due to animation? - shengyeong
So, I have an OCD itch of a question I could not get out of my head. Does the transition animation for Pebble Time looks smooth to you? Has the refresh rate of the colour epaper improved (I could not find any recently published specs on the colour epaper), or is it just an illusion from the cutesy transition animation? Either way, my curiosity is piqued to the max.
R: daenney
E-ink/e-paper displays can actually handle much higher refresh rates than what
you traditionally see on things like e-readers. They're usually just
programmed not to because it's not necessary to their function and it saves
battery even more.
There have been quite a few advances in the field which would probably allow for
higher refresh rates with less impact on the battery, and would make things
like the platforms they show in the videos work just fine. I doubt you could
watch a movie or play an FPS on it, though.
R: shengyeong
Owh, this is a revelation! :) Is there any white paper on the e-paper FPS?
|
HACKER_NEWS
|
How can I re-position status bar icons I'm not running in my configs
I am currently learning how to customize my Ubuntu (20.04) laptop, and I currently use i3, i3status, and Polybar. But I am getting status icons in my status bar that I don't have configured (sometimes in i3status, but mostly in Polybar).
The network icon on the far right is what I'm getting while I have nothing related to that (or other) icons in my config files.
I3Status config
general {
output_format = "dzen2"
colors = true
interval = 5
}
order += "tztime local"
tztime local {
format = "%Y-%m-%d %H:%M:%S"
timezone = "Europe/Amsterdam"
}
Polybar Config
[bar/mybar]
modules-center = date
modules-right = battery
background = ${colors.base}
font-0 = JetBrainsMono Nerd Font:style=Regular:size=12
[global/wm]
include-file = ~/.config/polybar/macchiato.ini
[module/date]
type = internal/date
interval = 5.0
date = " %Y-%m-%d %H:%M "
[module/battery]
label-full = " %percentage%% "
label-charging = " %percentage%% "
type = internal/battery
full-at = 99
low-at = 10
format-full-background = ${colors.pink}
format-full-foreground = ${colors.base}
format-charging-background = ${colors.pink}
format-charging-foreground = ${colors.base}
[settings]
screenchange-reload = true
Macchiato.ini is filled with colors only.
;-------------------------
; Catppuccin Macchiato Palette
; Maintainer: justTOBBI
;--------------------------
[colors]
base = #24273a
mantle = #1e2030
crust = #181926
text = #cad3f5
subtext0 = #a5adcb
subtext1 = #b8c0e0
surface0 = #363a4f
surface1 = #494d64
surface2 = #5b6078
overlay0 = #6e738d
overlay1 = #8087a2
overlay2 = #939ab7
blue = #8aadf4
lavender = #b7bdf8
sapphire = #7dc4e4
sky = #91d7e3
teal = #8bd5ca
green = #a6da95
yellow = #eed49f
peach = #f5a97f
maroon = #ee99a0
red = #ed8796
mauve = #c6a0f6
pink = #f5bde6
flamingo = #f0c6c6
rosewater = #f4dbd6
transparent = #FF00000
That seems to be an icon showing the signal strength of a wireless network. Are you not using wireless networking?
I am on a wireless network, but I don't know why the icon is showing while I don't have anything related to networking in my configs
Can you post a link to your config files?
I've added them in the question @Stephan
I don't see anything related to networking either. Can you interact with that icon using the right mouse button, and does it bring up a menu? Can you also post what is in ~/.config/polybar/macchiato.ini?
I've updated the question to also include macchiato.ini. And yes, I can interact with the statusbar icons. Today, the icons are in my i3status bar instead of polybar.
Since there is nothing in the configs of your bar programs and you can interact with the icon, it is possible it comes from an external program that has its own system tray icon. One such program is nm-applet, which is commonly included in the default i3 config file to auto-start. Look in ~/.config/i3/config for any exec lines. Also, open the icon's menu and look for any "About" entry that may tell you the program name.
A default Ubuntu install will have the network-manager-gnome package installed, which includes the nm-applet program. There is an XDG autostart file at /etc/xdg/autostart/nm-applet.desktop. Remove this file and the exec line in the i3 config, or remove the package altogether.
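For reference, the auto-start line in ~/.config/i3/config commonly looks like the following (taken from typical default i3 configs; your file may differ):

exec --no-startup-id nm-applet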
|
STACK_EXCHANGE
|
drivers: dma: Introduce driver for NXP's eDMA IP
Please check commit c63a3e7107c1977b6c665000f9e420013cec37d5's description for reason why this driver is used instead of dma_mcux_edma.
I’m ok with having another eDMA, what I did with the designware dma is try to create a common set of functions and reuse them where possible as there are many users of the designware gpdma. It would be nice if eDMA did something similar if possible. Like if there’s common transfer block definitions and configuration registers for example some helpers could be made to reuse between the two.
ACK. If everyone's ok with this, I'd like to have this in first as-is before making such clean-ups. It would make it a bit easier for me and I'm thinking it would also ease the review process a bit.
Sorry for interfering, but I am opposed to naming drivers and bindings after NXP family names, and I am trying to stop NXP from doing this with new drivers we upstream. NXP re-uses the same hardware IP across different families, and we already have confusion where a binding named for one family is used on another. I am also opposed to calling this new driver imx_edma, because the i.MX RT family of MCUs is branded under the i.MX name and also uses an eDMA that is already upstreamed, but apparently a different eDMA flavor than this one.
I agree the name isn't the best so this is pretty reasonable.
But I think if we work together internally, we can come up with a better name than imx_edma for this. Do you already know our name for this IP, and how NXP tells it apart from the existing upstream eDMA?
Mm, not sure actually. Took a look at i.MX RT1050's TRM and there seem to be a couple of differences:
RT has DMAMUX
RT has a somewhat different register layout (no MP region, just TCD; plus, of course, some registers differ)
Also took a look at how this is handled in Linux: seems like RT uses fsl-edma.c, while 93 for example uses fsl-edma-v3.c. As such, I propose 2 naming schemes:
If there's ever a scenario in which MPUs and MCUs use the same IP name but the IP works somewhat differently we go for {subsys}_mcux_{ip_name} for MCUs (e.g: dma_mcux_edma) and {subsys}_{ip_name} for MPUs (e.g: dma_edma).
We do the same thing as Linux: add a version to the name. In this case, we'd have dma_edma_v2.c. We'd have to specify in the binding how the IP is to be used (e.g: we use dma_edma_v2 for the EDMA IPs found on boards such as i.MX93, i.MX8QM etc...)
BTW, even boards such as i.MX93EVK may include different eDMA versions (e.g: see eDMA3 and eDMA4 which have slightly different register layouts and channel numbers) so I'm not sure how we can tell the difference.
Hi @LaurentiuM1234, as you mention it does seem that iMX93 has two EDMA controller IP revisions: EDMAv3 and EDMAv4. As far as EDMAv4, I am not sure we have support for this yet in Zephyr, and I'm not familiar enough with the IP to know if we need another driver for it.
However, EDMAv3 (at least with DMAMUX) was already supported by #61311. Could we consider decoupling the edma driver and the DMAMUX code somehow? Or compiling it out based on the driver compatible? I would like to reuse code as much as possible here when the underlying IP is similar.
This driver supports both of 93's versions.
What about the HAL revision? This driver uses a different NXP HAL EDMA revision. The point of this new revision is to provide a common API to the upper layers that can be used for any EDMA revision (or at least this is the plan, for now it works OK with EDMA3 and EDMA4). I've already had a lot of trouble dealing with the fact that EDMA4 from i.MX93 uses a different DMA_Type structure so I'm not willing to go back to the old revision and "somehow" make it work for all EDMA versions.
V2 updates
Fixed some typos
Added comments to edma_chan_cyclic_produce() and edma_chan_cyclic_consume().
Added comments to struct edma_channel's fields.
CONFIGURED->CONFIGURED transition is no longer allowed.
EDMA_CHAN_PRODUCE_CONSUME_B() call inside edma_reload() is now locked. This is because both the ISR and edma_reload() update the free and pending_length which may lead to a race condition.
V3 updates
Some naming-related changes to align with the new version of the NXP HAL EDMA driver. No functional changes.
V4 updates
Changed naming to dma_nxp_edma. No functional change.
@DerekSnell could you please review this again? Thanks!
Hi @LaurentiuM1234,
Thank you for updating this PR. There are still some references to i.MX I want removed, since this driver could be used on other SOCs. But once you remove those, I will remove my block. Thank you
Should be fixed now
dependency merged, should be good to go as well once CI is green
@teburd @dleach02 good to merge. dependency merged and SHA updated.
|
GITHUB_ARCHIVE
|
Bengaluru, Karnataka, India
January 2020 - Present
Building microservices in Golang to create and maintain infrastructure for Gojek's own cloud,
using Kubernetes, Gloo, and Consul on Google Cloud Platform.
Worked on creating an Application Registry for teams and the cluster mapping of those teams.
March 2021 - May 2021
Helped different teams envision, design, and implement their ideas into working software. Assisted teams across a wide range of software stacks, helping design database schemas and the high-level and low-level design of their applications.
August 2017 - December 2020
An Ecommerce giant, one of the top ecommerce companies with more than 50 million active customers.
Tasks & Projects
Created and maintained highly scalable, maintainable server-side applications, with attention to performance, scalability, simplicity, modifiability, visibility, portability, and reliability.
Bynder Image Asset Management Integration Service: image asset management service built on "jtier", a Dropwizard-based in-house framework. Technologies used: Java, MessageBus (an in-house wrapper for ActiveMQ) for messaging, cron jobs, the Quartz scheduler, and a MySQL database.
Aggregator Service: backend aggregator platform for the front end, creating APIs to fetch relevant details for orders and handling authentication and authorization. Something of an API gateway, built with CompletableFutures, RxJava, an async model, javax, Java, and the in-house Dropwizard-based framework.
Content Flagging Service (Natural Language Processing): content flagging service written in Java, using NLP to detect abusive content and understand customer sentiment, alerting the teams concerned. Implemented GDPR guidelines in the service.
Next Generation Website Redesign: designed the completely new experience for the Groupon website, written in Node.js and Preact, with 90% coverage from integration and unit tests. The experience rating went up by 30% after the redesign, and purchase rates increased.
Worked on cloud migration of services from on-premise infrastructure to AWS infrastructure on Kubernetes, making the services easily scalable and more resilient.
June 2016 - August 2017
Cloud-based security company, a giant in the market, providing internet security and data analysis to corporations and educational institutes, operating at a scale of a trillion requests a day.
CONNECTIVITY TOOL: connection management for system testing over UDP, TCP, SSL, and DNS connections, sending payloads and receiving limited data. Fetches information from one cloud node to another using internal transport mechanisms and connection management for the data sent. Includes session and cookie management.
MESSAGE TRANSPORT SERVICE: message transport service for the cloud nodes to talk among themselves, one acting as a master in the CARP protocol and the others as slaves, talking over UDP and TCP.
BLACK WIDOW CLIENT (HTTP TRAFFIC GENERATOR) & HTTP SERVER (Black Widow Server): tool for sending huge amounts of HTTP traffic from client to proxy to server, supporting authentication mechanisms and managing memory and TCP buffer issues. Per-user cookie management and data handling, SSL support, persistent connections, and management of server responses.
SSL certificate population speed-up tool: dumps the active SSL certificates used by the active connections to files so they can be repopulated later, giving the test team data to test that huge number of certificates with simple commands.
Built an ICAP server (basically a server that offloads virus scanning and content filtering from the web server, typically used in a security company to offload the main proxy server). Implemented according to RFC 3507: https://tools.ietf.org/html/rfc3507
2012 - 2016
|
OPCFW_CODE
|
@ORACLE and user community
Has #ORACLE recognized the MyAgilePLM crowd support community as a go-to for customers seeking information? Can #ORACLE publish an article naming MyAgilePLM as a top support community for ALL users? Support used to be so easy back in the AgileSoft days. I even recall a rep I would talk to quite frequently, and I had the opportunity to have lunch with him back in the Santa Theresa location days. With tech support becoming so difficult to navigate, this community has provided a great go-to for quick questions that many of us have faced in the past. Please spread the word.
Everyone please add your two cents.
my two cents 🙂
Sometimes I prefer to work with community blogs/forums/wikis rather than the commercial ones. This is because they are more populated than the commercial ones, and the topics/discussions can relate specifically to the problem someone is facing in a particular period.
Regarding MyAgilePLM, I see a couple of issues to be fixed for the application in general to be perfect, but it is the first place I look in case of an issue. Then I will look at OTN. This is my approach, and I think it is the most common one, because the site is indexed in Google, so it is very easy to find a solution/answer related to a particular issue/question.
For me it is OK for it to be adopted by Oracle as well, but it has to remain "free" and reachable via Google. This is also an opportunity for the community to grow and make this a very detailed and fully documented place to find answers in the Oracle Agile world 🙂
2 cents?? Having worked at Agile for 7 years, and then at Oracle for 5 more, can I give a dime’s worth??
Although I doubt Oracle would ever “acknowledge” MyAgilePLM, I am also certain that they know it is here. No idea on how one would approach them to discuss having Oracle publish an article about what this site is and how it can help people. Or if they would even be open to doing something like that. Believe me, things changed after Oracle bought Agile Software. Agile was a one-stop shop for everything, whereas Oracle doesn’t necessarily want to do implementations or consulting, just sell the software. I would prefer this website stay out of the clutches of Oracle.
That said, I suspect that this site will gain recognition more by word of mouth from folks who use it than by anything that Oracle can/would do. I have recommended to a client that they come here, and I know others who have also recommended such. Given that neither Agile nor Oracle has ever published anything about the database schema, and that the installation documentation assumes that things will always go correctly, a site like this is invaluable. But just as important is making it accessible and easy to use, and I think that it has done so quite well (although it does occasionally scramble attached files).
Note that I am also on the WRAU (Western Regional Agile Users) email list, which has been around since forever ( I first heard about it in 2003?). But I prefer this website, both for ease of use and much more accessible content.
|
OPCFW_CODE
|
New package: PRASInterface v0.1.0
Registering package: PRASInterface
Repository: https://github.com/NREL-Sienna/PRASInterface.jl
Created by: @jd-lara
Version: v0.1.0
Commit: dcb927a69268e3a776172e4c1c5d8bc83bfadaf2
Reviewed by: @jd-lara
Reference: https://github.com/NREL-Sienna/PRASInterface.jl/commit/dcb927a69268e3a776172e4c1c5d8bc83bfadaf2#commitcomment-149447558
Description: Interface to PRAS.jl maintained by Sienna\Ops
Thank you for submitting your package! However, please make sure to add some documentation before registering. At the very least, that would be a description of the package's purpose and a small usage example in the README.
It's not clear to me whether the name of the package is appropriate. I was not able to find the PRAS.jl referenced in the documentation. If it exists, PRASInterface seems like an unusual name (why would a Julia package need another Julia "Interface"?).
Update: I was now able to guess at the URL for the documentation https://nrel-sienna.github.io/PRASInterface.jl/dev/
Please note that the docs badge in the README is currently broken. That will probably resolve itself once this registration goes through and there is a tagged version, but please confirm that it works at that point.
You'll also want to add the "About" info from the documentation to the main README.
Now that I've seen the documentation, I would recommend the package name PowerSystemsPRAS.jl. The current name PRASInterface only makes sense from the perspective of a PowerSystems user. It would be appropriate in an org-specific registry, or as a submodule name, but not for the wider audience of the General registry. There might be other orgs or packages that might want to interface with PRAS, after all. Moreover, since you already have PowerSystems, it is common practice and encouraged to use PowerSystemsSomething for "add-ons", where there is little scrutiny on the package name. You basically "own" that namespace.
Thanks for the comments @goerz. On the name, we would like to keep the existing one since it is an interface to this package https://github.com/NREL/PRAS, which isn't owned directly by us. This package is an interface between our data library PowerSystems.jl and PRAS; that's where the name comes from. The difference is that the authors of PRAS decided not to register it in the general Julia registry.
I will update the README and the registration request.
Thanks for updating the README!
on the name we would like to keep the existing one
Like I said, I don't think the name PRASInterface is appropriate in the context of General. Is there something that you find objectionable about PowerSystemsPRAS?
Of course, I don't have any special authority, so you could also ask for other people's opinion on Slack. If there's a broad consensus in the community that the name is okay, I'll be happy to unblock.
The objection is that PowerSystemsPRAS doesn't reflect the true usage of this library, if we develop our own Probabilistic Resource Adequacy capability then there will be name confusion. This is not the correct package convention in our organization.
The name we want to use is reflective of what the package actually does: an interface to PRAS.jl. Similarly, we registered PowerModelsInterface.jl in the past. It's an interface between our data model and another Julia modeling library, period. The difference is that in the case of PRAS.jl, a common library for users in the Resource Adequacy space, our team doesn't have a mechanism to make the authors register it in General.
I don't understand the need to request community feedback for the registration of this package that follows the conventions of our organization that already have plenty of registered packages in the power systems modeling space.
I think it is important to note that the name PRAS.jl would also likely be rejected from General. Recall rule 1 on the naming guidelines:
Avoid jargon. In particular, avoid acronyms unless there is minimal possibility of confusion.
so the fact that PRASInterface.jl is related to PRAS.jl is kind of irrelevant because PRAS.jl only exists in a private registry.
On another note, I'm wondering if this should be a package extension rather than a standalone package? It seems like the functionality here only exists in the context of PowerSystems.jl and PRAS.jl together, and it has no function as an independent package.
The name we want to use is reflective of what the package actually does: an interface to PRAS.jl
Maybe I'm confused. An interface between PRAS and what else? Whatever that "what else" is should be part of the package name. I thought this package was an Interface between PowerSystems and PRAS, hence the suggestions.
similarly we registered in the past PowerModelsInterface.jl
At the time that was registered, there weren't as many volunteers reviewing package submissions. Thus, things slipped through the cracks. The name PowerModelsInterface has the same issue as PRASInterface: an interface is between two things, and the name does not indicate what those two things are. That package would probably not have been merged by today's standards.
I don't understand the need to request community feedback
The General registry is a community resource, so all registrations will be reviewed.
for the registration of this package that follows the conventions of our organization
You are moving a package from the context of a specific organization to the General registry that addresses the entire community. For people internal to NREL-Sienna the name PRASInterface makes perfect sense. Someone in the general public does not have that context, so it is not clear what is interfacing with PRAS here. This wouldn't be a problem if packages in Pkg included the org name. That's something I'd love to have in Julia 2.0, but it's not what we have now. If this package is an interface between PRAS and general code inside your organization (rather than between a particular package like PowerSystems and PRAS), then probably SiennaPRASInterface would be the appropriate name.
PRAS.jl would also likely be rejected from General
Yes, or at least it would receive some pushback. I'm actually not sure how exactly PRASInterface manages to avoid an explicit dependency on PRAS. There appears to be some vendored code…
It does seem like a problem to have a registered package interfacing with PRAS when PRAS is not registered. There's a reason that dependencies of registered packages also have to be registered. If PRAS ever were to be registered, I can't guarantee that it would manage to get an exception from the naming guidelines, even though I'm open to it. If PRAS had to change names, PRASInterface would have a problem.
It would be good for @GordStephen to clarify what kind of plans there might be for a registration of PRAS, and/or whether NREL can guarantee that PRAS will exist as a permanent name.
Side note: if PRAS ever were registered, even if we make an exception for the name (which we very well might), the repository URL has to be changed to end with .jl. I'd recommend doing that anyway, as early as possible.
I can get behind using SiennaPRASInterface if it moves the conversation forward. Please just confirm so I can change the name.
@rayegun Point taken. That phrasing was indeed out of line. I apologize.
We talked this out on Slack. @jd-lara will check to see if renaming to SiennaPRASInterface is easy to do. If the internal procedures within NREL make a rename too cumbersome (we're talking about a government agency, after all), we'll just stick to PRASInterface and I'll unblock.
This PR can be closed given https://github.com/JuliaRegistries/General/pull/120172
|
GITHUB_ARCHIVE
|
The Senior Bioinformatician/Computational Biologist will work with the Cancer Data Science (CDS) Shared Resource, at University of Michigan whose mission is to enhance the quality of Rogel Cancer Center research through the use of effective, appropriate and state-of-the-art bioinformatics and statistical methods. Your role will be to develop analysis workflows for multiplatform high-throughput genomic datasets. You will report to the Director of Bioinformatics and CDS Shared Resource.
As a University of Michigan School of Public Health employee, you have a unique opportunity to change lives locally, in our state, and around the world. We are seeking an experienced and dynamic staff member with a commitment to contributing to a diverse, equitable and inclusive environment for all members of our community.
Develop genomic analysis workflows as well as predictive models to relate phenotypes and genomics data with clinical outcomes.
Perform bioinformatics analysis (routine and exploratory) for diverse biomedical research projects involving multidisciplinary researchers. The projects include laboratory, clinical, and translational research, and require collaboration with cancer center investigators, formulation of scientific goals, data cleaning and management, bioinformatics and statistical analysis to address aims, and presentation of results.
Provide consultations to investigators or lab personnel about project & analysis needs, analysis results and next steps.
Assist in grant writing endeavors for collaborative grants: bioinformatics methodology components for consulting PIs and researchers.
Provide leadership and co-lead teams to plan and execute bioinformatics projects.
Participate in research group meetings to formulate and refine research objectives.
Attend lab meetings and continuing education programs and events.
Participate in the writing of publications and other reports.
Multiple collaborative opportunities are available at the University, through the Rogel Cancer Center.
Senior level: A PhD or Master's degree, with leadership experience and a background in computational biology, bioinformatics, statistics, mathematics, or a related field.
Intermediate level: A Master's degree with experience in computational biology, bioinformatics, statistics, mathematics, or related field. In exceptional circumstances, a Bachelor's candidate with relevant experience can be considered.
The ideal candidate will have familiarity with modeling of genomics (bulk and single-cell - RNAseq, spatial single-cell, epigenomic, and proteomics) data.
Proficiency in R, Python, and other standard bioinformatics software.
Experience collaborating with biomedical researchers and leading projects.
Excellent communication skills.
Ability to multi-task.
This position may be underfilled at a lower classification depending on the qualifications of the selected candidate.
Salary for the senior level is: $81,159-$100,255
Salary for the intermediate level is: $65,967-$81,489
Michigan Public Health is seeking a dynamic staff member with a commitment to contributing to a diverse, equitable and inclusive environment for all members of our community.
The University of Michigan conducts background checks on all job candidates upon acceptance of a contingent offer and may use a third party administrator to conduct background checks. Background checks are performed in compliance with the Fair Credit Reporting Act.
U-M EEO/AA Statement
The University of Michigan is an equal opportunity/affirmative action employer.
|
OPCFW_CODE
|
What are the Delphi XE2 VCL Runtime BPLs?
The old Delphi 7 trick of unchecking runtime packages and doing a Build All doesn't seem to work anymore, so I can't restore whatever would be a good set of runtime BPLs for my Delphi project.
I've got a problem, which I will probably ask another question about and link here, which I think might be solved by including a particular BPL that contains VCL.CheckLst.pas.
The reason I don't just know the answer to this is that when I start a new VCL Forms project, there are no VCL BPLs in the runtime packages by default; there's RTL and some FireMonkey stuff and good old MadExcept and Indy, but no VCL. What's the deal with that? Is my Delphi misconfigured?
How is this a question you can't just answer for yourself by doing a filesystem search?
File -> New package -> View source: 'rtl' is there. File -> New unit, insert 'uses vcl.checklst;', IDE forces you to add 'vcl' and 'vclx'.
@rob can't tell if they're runtime or design time that way - no sense including (or distributing) designtime BPLs
There are no "VCL." runtime packages (note the dot after VCL). There are the standard RTL packages you've been used to before; the VCL namespaces are contained in them.
To find out exactly what runtime packages you need to distribute, you might find this useful.
Go to the Project Options/Packages/Runtime Packages dialog (image below to help explain).
Expand the Link with runtime packages node, check True, and clear the three Value node checkmarks. You can also open the nodes below Runtime packages and clear the lists for the three entries there. Save the changes and close the dialog. (The IDE will repopulate the list and store it in the .dproj file; you'll see it if you reopen the Project Options dialog after building.)
Use Project|Build <yourproject>. Once it builds, use Project|Information for <yourproject>; the right side panel will show you the BPLs you'll be required to distribute. (VCL.CheckLst is in vclx160.bpl, BTW, according to Sertac's comment below.) Make sure you build and don't just compile; you need to make sure all the dcus are rebuilt so the package list can be determined.
Just to be sure I checked with the exports from both rtl160 and vclx160. It's in vclx160.bpl. Being an api control it wouldn't make sense to put it into rtl either.
Oops. I think you're right (it made sense earlier). It must be compiling the VCL stuff in instead of using the packages, because I only specified rtl160. The exe is still 5MB, though. The form or component didn't pull in vcl160; I can't figure out why, but it must be my error (as usual, not yours). Rolling back my last edit and deleting my last comment. Thanks, Sertac.
Dunno, it's confusing… Will check with only 'rtl'. (I wish I were usually right!)
@Sertac: I'm still confused. Like I said, a new VCL application built in XE2 and copied to a Win7 Virtual XP Mode VM that only has D7 installed, copying only the testapp.exe and rtl160.bpl into a new folder, works. The app runs without any errors, and the form displays with the CheckListBox displaying properly. (I even confirmed that the path statement only referred to folders in that VM, and searched for any *160.bpl files - it found only the one I copied into the new folder.)
You're right, I'm quite surprised. So it's possible to use rtl in a package and have the other stuff linked in. It doesn't work the other way around, though: once you refer to the vcl and vclx runtime packages, you have to deploy rtl160. It must have something to do with the dependency hierarchy.
@Sertac: I knew it worked that way the other way around as well, but I didn't know you could use only rtl. I guess I learned something today. Thanks! :)
Thanks to you! It would never have occurred to me :). Some meaningless stats: no packages -> 6.84MB, only rtl package -> 5.7MB, rtl, vcl, vclx packages -> 2.2MB.
Thanks, I did the project information thing for all my projects and tabulated the perfect runtime package configuration. When I added them all to all the DLLs, I stopped getting my horrible, annoying, very bad TCheckListBox crash that didn't make any sense and was officially ignored by Embarcadero.
Glad I could help. :) Just for the record, though: Embarcadero didn't ignore the problem. More info was requested on 11/15/2011, it was promoted to the internal DB that same day, and it was marked as "fixed" 3 days ago (12/19/2011). How is that "officially ignored"? :)
@Ken, I guess I misunderstood that, I just figured they marked something 'fixed' that they couldn't replicate like I always do. I'm actually glad this bug was in there because it gave me the impetus to figure out exactly which runtime packages I needed.
|
STACK_EXCHANGE
|
// the default response code logging levels
// override with the 'resCodeOverrides' option
const responseCodeLevels = {
// 200 level responses are usually okay
'2xx': 'info',
// 300 level responses tend to just clutter things up
'3xx': 'verbose',
// 400 level responses aren't necessarily errors
// but are more useful than 200 level responses
'4xx': 'notice',
// 500 level errors are always an alert. If you get these,
// something is wrong and someone needs to look at it
'5xx': 'alert'
};
/**
* Determine the logging level of a given response code
* @param {int} statusCode the status code to evaluate
* @param {Object} overrides an object containing overrides
* @returns {string} the Winston syslog logging level string
*/
const statusCodeLevel = (statusCode, overrides) => {
// normalize to a string for exact status code lookups
const code = String(statusCode);
// build a '[2-5]xx' range key, e.g. 404 -> '4xx'
const range = `${Math.floor(statusCode / 100)}xx`;
// prefer an exact status code override, then a range override,
// then fall back to the default range levels
return overrides[code] || overrides[range] || responseCodeLevels[range];
};
const defaultOptions = {
// an array of paths to ignore logging on
// note: if you ignore '/foo', '/foo/bar' will also be ignored
ignorePaths: [],
// whitelist paths
// if you ignore '/foo', whitelisting '/foo/bar' will log all
// requests to /foo/bar and sub paths
whitelistPaths: [],
// allow overrides of specific status codes - can also be used to override
// a whole nxx range
resCodeOverrides: {
'201': 'info',
'401': 'warning'
}
};
module.exports = (logger, options={}) => {
const logOptions = Object.assign({}, defaultOptions, options);
return (req, res, next) => {
// log from a 'finish' handler so the work happens after the
// response has been sent and never blocks the client
res.on('finish', () => {
// check if we are ignoring any paths
if (logOptions.ignorePaths.length) {
// whitelisted paths (and their sub paths) are always logged
const whitelisted = logOptions.whitelistPaths.some((p) => req.baseUrl.startsWith(p));
// ignored paths also cover their sub paths (see the note above)
const ignored = logOptions.ignorePaths.some((p) => req.baseUrl.startsWith(p));
if (!whitelisted && ignored) return;
}
// build the log object
const logObject = {
// make sure express-request-id is being used
id: req.id,
// the method of the request
method: req.method,
// the hostname used in the request
host: req.hostname,
// the source ip
srcIp: req.ip,
// this is the baseUrl - this may need to be updated
path: req.baseUrl,
// the size of the request
reqSize: req.headers['content-length'] || 'n/a',
// the size of the response
resSize: res.getHeader('Content-Length') || 'n/a',
// there is a requirement here on @local-lib/response-time
responseTime: res.locals.responseTime
};
// determine the log level based on status code
const logLevel = statusCodeLevel(res.statusCode, logOptions.resCodeOverrides);
// default to info
const logFun = logger[logLevel] || logger.info;
logFun(res.statusMessage, { http: logObject });
});
// don't forget to allow processing to continue once we setup
// the on 'finish' response handler
return next();
};
};
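For context, here is a minimal sketch of how this module might be wired up. Everything specific here is an assumption: the './request-logger' path, the '/healthz' ignore path, and the use of express-request-id; the module also expects a separate middleware to populate res.locals.responseTime, as noted in the comments above.
const express = require('express');
const winston = require('winston');
const addRequestId = require('express-request-id')();
const requestLogger = require('./request-logger');
// a Winston logger with syslog levels, so logger.notice and logger.alert exist
const logger = winston.createLogger({
levels: winston.config.syslog.levels,
transports: [new winston.transports.Console()]
});
const app = express();
app.use(addRequestId); // populates req.id for the log object
app.use(requestLogger(logger, {
ignorePaths: ['/healthz'],
// exact code and whole-range overrides both work
resCodeOverrides: { '404': 'info', '3xx': 'info' }
}));
app.listen(3000);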
|
STACK_EDU
|
About this website.
2 pages under #andretorgal-com
This website: Meta
These are the raw notes collected while building this website. Todos, issues, ideas, dependencies, strategies, decisions, and so on.
This is my home on the web. A place where I blog some thoughts and run a few experiments.
Here, you can learn more about me, my work as software engineer and other stuff I have been up to.
8 posts under #andretorgal-com
Improving (MDX) authoring experience in Astro / Solutions
Astro built-ins and plugins are not sufficient to deliver all the features or details that I want t…
MDX authoring experience in Astro / Shortcomings
Since I am migrating this site to Astro - which I am convinced is the best SSG out there right now …
Advanced MDX authoring experience / Requirements
Once I got my new website's information architecture up and running, and migrated most of the old c…
(re)Building my website with Astro + MDX + Preact
Relaunching my website, again based on another Typescript SSG framework. 🚀 Not React-Static anymore, not even React "at all". Introducing the magnificent Astro. 🌔
Saying goodbye to my React Static website
I am relaunching my website with Astro. Writing down a word or two about the previous version, React Static, and the original "MDX" experience.
Staging My Website Rebuilt In React Static
Today the new version of my website is uploaded to a staging endpoint. What is happening, what is n…
Hello World (Again)
Here I am again, rebuilding the whole thing, enjoying React and static site generation and making p…
Hello World :wave:
My first website was hosted on tripod, back in 1995 and for the next 13 years I had a constant, tireless, and experimental presence on the web. But In 2008 I took my last site down. For a whole decade, andretorgal.com was nothing but a placeholder or a 404. It's time to change this sad state of affairs.
16 meta docs under #andretorgal-com
It's been ages since I've had a feed here.
Now I do RSS.
My website: Information architecture
My website: Responsive images
Image generation is handled by a custom Astro integration, similar to @astrojs/image, and all images are wrapped in a custom Figure component for richer content and better semantics. The image optimisations are carefully synchronised with the layout. Aspect ratio awareness is mandatory to eliminate CLS.
Previous incarnations of this website.
When, what, how?
See the docs section for details on the current version.
English spelling, because
I honestly prefer English spelling. But support for en-gb in our tools is sometimes lacking and it can be frustrating.
Some sparse documentation about how this website was built.
My website: Backlog
Stuff I want to do on this website, or otherwise just investigate and experiment with. The done stuff is on the changelog page.
My website: Changelog
Paper trail of changes in this website, since there is some continuity around 2017. Tasks from my website's backlog, eventually done (or archived).
These options were accepted ✅ and have made it into my website.
These options are under consideration 🤔 for further development of my website, or simply captured for future reference.
Options considered while building this website, maybe even implemented at some point, but eventually ❌ discarded or reverted.
My website: Colour
Colour palettes. They've been so fine tuned, they're vintage cheese 🐭 by now.
My website: Layout
This website uses break points defined by the content itself to reorganise content in 4 different layouts.
My website: Typography
About the typefaces used in this website and the responsive typographical scale in use.
My website: Conventions
My website: Styling
|
OPCFW_CODE
|
Every time we use a piece of computer software, on whatever device it may be, we are relying on an essential tool – a compiler. Computer programs written by us are in high-level languages like C, C++, Java, FORTRAN, etc. But the hardware of the computer does not understand these high-level instructions, because the only language the electronic hardware understands is the presence or absence of electric current. For our convenience, we denote this with the binary digits 1 and 0; this is called machine code. So the task of converting a high-level language program to machine code must be done by another piece of software, and that is the compiler. The compiler performs multiple tasks in its life cycle, and the best explanation of the role of a compiler will be found in the book shown below.
But what is the ACM Turing Award? The Turing Award was named for Alan M. Turing, the British mathematician responsible for the mathematical foundations and limits of computing, and who was a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II. This award was started in 1966, and since then the Turing Award has honored the computer scientists and engineers who created the systems and underlying theoretical foundations that have propelled the information technology industry. It is like the Nobel Prize for Computer Science.
On March 31, 2021, the ACM (Association for Computing Machinery) gave the 2020 Turing Award to Alfred V. Aho and Jeffrey D. Ullman. Prof. Aho is the Lawrence Gussman Professor Emeritus of Computer Science at Columbia University. Prof. Ullman is the Stanford W. Ascherman Professor Emeritus of Computer Science at Stanford University. They are pioneers of compiler design, the theoretical foundations of algorithms, and formal languages. The award of $1 million will be shared equally between the two researchers.
They made fundamental contributions to the field of compilers for programming languages and spread this information through their influential textbooks. Their work in algorithm design and analysis techniques contributed crucial approaches to the theoretical core of computer science.
Two books co-authored by Aho and Ullman are mandatory reading for anyone doing undergraduate or graduate level study in computer science. "The Design and Analysis of Computer Algorithms" is a classic in this field (John Hopcroft is also a co-author of this book). This book introduced the random access machine (RAM) as the basic model for analyzing the time and space complexity of computer algorithms using recurrence relations. The general algorithm design techniques and the RAM model introduced in this book now form an integral part of the standard computer science curriculum.
The other book, "Principles of Compiler Design," is the bible of compiler construction, formal language theory, and syntax-directed translation techniques. It is also called the "Dragon Book" because of its cover design, and it lucidly lays out the phases of translating a high-level programming language to machine code; the entire process of compiler construction is presented as a set of modules, each doing a specific task, much like a function or a method in a programming language. The current edition of this book, "Compilers: Principles, Techniques, and Tools" (co-authored with Ravi Sethi and Monica Lam), remains the standard textbook on compiler design.
|
OPCFW_CODE
|
Software Engineer 2 - ( 0098909 )
Product Engineering Group
A bit about Epsilon
We are the global leader in creating meaningful connections between people and brands. We work with 15 of the top 20 global brands and 8 of the top 10 Fortune 500 companies. How did we get this far? It is because of our team of thinkers and doers who, together, create the perfect blend of data, technology and creativity. They are fearless go-getters and creative innovators who have passion, determination and our support to make their ideas come to life every day.
To know more about us, please visit https://india.epsilon.com and follow us on Facebook, Twitter, LinkedIn, and Instagram.
A bit about who we are looking for
At Epsilon, we run on our people's ideas. It's how we solve problems and exceed expectations. Our team is now growing and we are on the lookout for talented individuals who always raise the bar by constantly challenging themselves and are experts in building customized solutions in the digital marketing space.
So, are you someone who wants to work with cutting-edge technology and enable marketers to create data-driven, omnichannel consumer experiences through data platforms? Then you could be exactly who we are looking for.
Apply today and be part of a creative, innovative and talented team that's not afraid to push boundaries or take risks.
What you'll do
Responsible for working in a product team to build a product for infrastructure provisioning on AWS using .Net and Ansible.
Roles and Responsibilities
Create and maintain cloud infrastructure as code and provision AWS environments for product teams.
Create configuration management scripts
Implement Configuration Management and Infrastructure as Code using CloudFormation (CFN), Terraform, Ansible etc.
Experience with Zero downtime deployments on AWS with canary, blue-green and rolling deployments
Understanding of the Cloud Security tooling landscape and has experience integrating various Cloud Security tools together to provide end to end application lifecycle management
Work with different teams to conduct proof of concept (POC) and implement the design in production environment in AWS
Troubleshoot and fix configuration issues as and when required and Document/communicate the resolution notes to other team members
This position will also coordinate with operations teams, architects, and QA Teams to validate configuration with industry standard best practices before they are placed into production.
Supports users by developing documentation and assistance tools.
Updates job knowledge by researching new internet/intranet technologies and software products; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.
Enhances organization reputation by accepting ownership for accomplishing new and different requests; exploring opportunities to add value to job accomplishments.
Good programming skills on .Net (MVC)
Good knowledge of AWS services
Hands on experience of AWS cloud and configuration management tools.
Experience with automation tools (Ansible), scripting (Python, Boto3, Bash, etc.)
Working knowledge with a database system such as MsSQL/Oracle and NoSQL/Mongo.
Experience with configuring databases to support various activities related to the system
Migrating experience with existing on-premises application to AWS.
Working knowledge of web application architectures
Communicate with internal stakeholders to clarify requirements and overcome obstacles to meet the organization goals.
Provide troubleshooting and root cause analysis for production issues.
Primary Location : IND - India-4009 - Karnataka-54353 - Bangalore-N1-Bangalore, India
Work Locations : N1-Bangalore, India
Job : Technology
Organization : Epsilon
Schedule : Regular
Job Type : Full-time
Division : Epsilon India
RI : AD
Posted on: 15 Sep, 2019
|
OPCFW_CODE
|
Forcing users to use a new password every time they reset
It is becoming common for web applications to force users, when resetting a password, not to reuse a password they have used before. Just as forcing users to follow a particular password pattern is a usability bug, restricting users from reusing any of their previous passwords seems equally problematic. Such an approach compels the user to memorize more and more passwords, but as a result the user will tend to use simple passwords they can remember and recall or, worse, write them down somewhere nearby for reference.
My Question is: Do you consider this a usability bug that system doesn't let you use your previously used password?
My position is that applications should not only let you reuse your passwords but also let you keep your existing password in case you forgot one. But to be able to do that, user must
click a password reset link sent to his/her email (thus email is verified)
provide the answer to a secret question (authenticity of the user verified)
Scenario:
The company db was hacked and user information was stolen. The company is asking everyone to reset their passwords asap to limit any further theft using personnel ids.
In such a scenario, it is utmost important that the users DO NOT use the same password.
The same situation is also faced when someone has 'hacked' into your account. If you reset the password to the same thing, it is quite dumb on the system's and user's part to allow the culprit easy entry by not changing the password.
Many systems go a step further and keep a record of your passwords for the past 6 months or some other period, or the past 'n' passwords, and do not allow you to reuse them. The reason being, like you mentioned, people do not want to memorize more passwords. So what they do is create a new password that fulfills the system's requirement that it not match the previous one, and then immediately reset it back to the old one again, and are thus exposed to the theft again.
There are three scenarios here. (1) Catastrophic (happens once in years): justifies that the password must be different from the old one - agreed. (2) Account hacked (may happen a couple of times a year): justifies that the password must be different - agreed. (3) I forgot my password: now this is where the user should be dealt with politely and shouldn't be forced to create a new password.
@Salman If the user forgot their password, then, in reality there will be no scenario of 'not being allowed to use old password'.
This is exactly where I am asking that the user should be allowed to use one of his old passwords, and from among the old passwords, he may try the same password he had used last time (and forgot). By doing that, we do not compromise on the security aspects.
This is simply a tradeoff between usability and security. Think of it almost in the same way that you think of insurance. You pay a little bit regularly to cover yourself against a large loss that will typically happen infrequently (if at all).
Requiring regular password changes in theory makes a system more secure. Not allowing old passwords to be reused is an obvious requirement of this, as if you reused old passwords, the result would be the same as not changing the password at all.
Now the UX question is whether this increased (theoretical) security is worth the usability hit that you will take by having it. And for that there is no absolute answer. Is it worth having this for your Reddit account? Not very likely. Is it worth having this for accounts that access hospital medical records? Very likely, yes.
So this should be evaluated for each service and it should be decided from there.
That said, I have constantly referred to this as a theoretical security improvement, as it depends on the situation. If you're primarily trying to protect yourself from web based attacks then it is usually an improvement. But if you're trying to protect yourself from someone in the same building from getting access, it is usually worse as people tend to write their passwords down - even people that should know better. I've worked on a number of central servers in insurance companies where the admin password was written in the inside of the server cabinet.
So unless you educate users and are sure that they will not write their new passwords down, you actually will decrease on site security.
What you should be doing is requiring passphrases instead of passwords. They are easier to remember (less need to write down) and harder to break (if you avoid purely dictionary terms). Obligatory XKCD comic to explain this:
|
STACK_EXCHANGE
|
Some iMac users experience an issue where, after being in sleep for a period of time, their systems no longer respond to keypresses or other input, and even though pressing the power button should wake the system, doing so has no effect. With this problem, affected systems require either that the power button be pressed and held for ~10 seconds, or that the power cord be pulled to shut the system down and reset it.
Recently MacIssues reader Ken H. wrote in about this problem happening on a Late 2013 model iMac, running the latest versions of Yosemite and all other installed software. While the system would wake properly if immediately put into sleep and then woken, if left in sleep mode for a prolonged time, then it would simply refuse to wake up.
In general, this type of issue suggests a power management problem or other hardware-based configuration fault that can usually be tackled by the generic troubleshooting steps of resetting the Mac's Parameter RAM (PRAM) or its system management controller (SMC). However, despite Ken resetting these features, the problem continued, suggesting the default values for these controllers are allowing the issue to persist.
When your Mac sleeps, it will initially enter a standard low power sleep mode that keeps memory active so you can quickly resume your work. However, if power is lost in this mode then your system will lose its memory contents and have to start back up from scratch. To prevent this, hibernation mode (aka standby mode) will write the contents of memory to your Mac’s hard drive and then shut off, allowing you to resume work in the face of lost power. Since this mode uses no power, it is required to be on by default for some systems in some countries.
Unfortunately it appears that some systems may have trouble with hibernation mode, so if you are finding your Mac unable to wake from sleep, there are three potential fixes you can try:
Turn off system hibernation
Since hibernation mode's only real benefit is to allow you to resume from sleep in the face of a power loss, a simple solution may be to turn hibernation mode off by running the following two commands in the OS X Terminal (enter your password when prompted; it will not show):
sudo pmset standby 0
sudo pmset autopoweroff 0
These commands will turn off the hardware settings that will put your Mac into hibernation mode. The first is Apple’s main standby mode option, and the second is an implementation required for European energy regulations. To reverse these commands, you can again either reset your Mac’s system management controller, or re-run the commands but use “1” as the value instead of “0.”
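For example, to turn hibernation back on without an SMC reset, the same two settings can simply be flipped back to 1:
sudo pmset standby 1
sudo pmset autopoweroff 1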
Disable and re-enable FileVault
The writing and restoring of memory contents from disk may conflict with FileVault or other full disk encryption routines. Technically when waking from hibernate mode the system should allow you to authenticate and then load the contents of the hibernation file, but if a bug prevents this from happening, then your Mac may not be able to load the hibernation file, and could hang.
To hopefully overcome this, first try disabling your full-disk encryption routines. After your disk is fully decrypted (this may take a while) test hibernate mode to see if it works properly, and then re-enable disk encryption and test again.
Remove the system's hibernation file
A final approach you can try is to remove the system’s hibernation file, which is the hidden file that is written to whenever your Mac goes into hibernation mode. While the system will recreate this file if it is missing, if present then it will just write to the existing file. As a result, any damage to this file may prevent the system from reading it when waking from sleep. To fix this, you can force OS X to recreate it by deleting it, which can be done by running the following command in the Terminal:
sudo rm /var/vm/sleepimage
Again supply your password when prompted, and then see if hibernate mode works properly.
Special thanks to Ken for writing in about this issue.
|
OPCFW_CODE
|
Two internet connections to one server
I have one server running on Ubuntu 14.04. It is used to host a web application.
I also have two routers from two different internet providers with two different static IP addresses. I want to allow traffic through both internet providers to access the same web application.
Each internet provider has limited upload bandwidth. When several users log in to the system, clients complain of slowness. So I want to increase the upload bandwidth. Say one ISP is SLT and the other is DT. I thought of giving my SLT-provided static IP to those who use SLT connections and the DT static IP to those who use DT connections.
Is that even possible?
(I currently have one ethernet port and one wifi port in the server, but I can install an additional ethernet network card if necessary.)
What precisely are you trying to achieve? Load balancing or redundant links?
Each internet provider has limited upload bandwidth. When several users log in to the system, clients complain of slowness. So I want to increase the upload bandwidth.
You want to add that to your question
added details to answer.
I understand the aim; however, the most important thing is that to make this transparent for the clients you have to implement load balancing. That is, if you want all of them to use a single domain name, the domain will be resolved to the load balancer address and it will direct traffic to one of the IP addresses that the server is reachable at. If you do not have to use a single domain, create two of them, assign each IP address to a different domain, and give them to your clients.
There is no need to give one IP address. I have two static IPs; the issue is knowing how to direct both IP addresses to one server.
If you have 2 IP addresses, 2 routers and one server behind those routers, make a simple port forwarding on them. I don't get what the problem is, tbh... The server needs to have 2 local IP addresses, so 2 NICs are required. You can have virtual IPs/NICs on Linux but that does not resolve the problem of 2 wired/wireless connections. So: 2 IPs + 2 NICs + 2 port forwardings.
You want to add a second Ethernet card if you can. Configure both.
Then in the DNS for your website, create two A records for the domain name.
mydomain.com A <IP_ADDRESS>
mydomain.com A <IP_ADDRESS>
When you do a dns lookup it should list both IP addresses.
nslookup mydomain.com
<IP_ADDRESS>
<IP_ADDRESS>
This should provide rudimentary load balancing. The server should send outgoing packets out the same interface they were received on.
However, if you choose to use only one card and put two IPs on that card, then Linux will send outgoing packets on the primary interface only, which is not what you want.
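In practice, getting replies to leave via the interface they arrived on often requires source-based policy routing rather than relying on the default single-gateway behaviour. A minimal sketch, with entirely hypothetical addresses (eth0 = 192.0.2.10 behind gateway 192.0.2.1, eth1 = 198.51.100.10 behind gateway 198.51.100.1):
# per-interface routing tables, selected by source address
ip route add default via 192.0.2.1 dev eth0 table 100
ip route add default via 198.51.100.1 dev eth1 table 200
ip rule add from 192.0.2.10 table 100
ip rule add from 198.51.100.10 table 200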
The DNS part and load balancing are not the issue. When I configure the wifi card and the Ethernet card on Ubuntu, it is not possible to access the server from both connections at once.
Hi, then maybe you could explain your situation better.
We have now moved to cloud servers and this issue no longer exists. Thank you for your help.
|
STACK_EXCHANGE
|
Proxy status reports proxy version incorrectly
As you can see the proxy is on 1.0.3
❯ kn logs ingress-nginx-external-controller-5878b8df7c-rc7bp --follow istio-proxy
2018-10-29T10:42:28.130078Z info Version<EMAIL_ADDRESS>
proxy-status says otherwise:
❯ istioctl proxy-status
PROXY CDS LDS EDS RDS PILOT VERSION
ingress-nginx-external-controller-5878b8df7c-rc7bp.ingress-nginx SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-55c5b77c8b-tmnks 1.0.2
Hi all
This is still a problem in 1.0.4, and kind of annoying when you're trying to do a cluster upgrade and check all the sidecars are OK.
And still a problem in 1.0.5!
This is still a problem in 1.1.1, apparently.
inspecting pod:
image: docker.io/istio/proxyv2:1.1.1
istioctl proxy-status:
NAME CDS LDS EDS RDS PILOT VERSION
web-7b6d4ffd59-5x5lm.foo-bar SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-5996947b5f-f86dk 1.1.0
istioctl proxy-config bootstrap:
{
"bootstrap": {
"node": {
"metadata": {
"ISTIO_PROXY_VERSION": "1.1.0",
"ISTIO_VERSION": "1.1.1",
I think this is tied to the fact that the ISTIO_META_ISTIO_PROXY_VERSION is not equal to the actual proxy version, and only gets bumped when the capabilities need to change. This is essentially hardcoded into the proxy image (unless you override it, which might break things).
The value was changed for 1.1.0 and previously for 1.0.2, hence the examples @Stono posted.
https://github.com/istio/istio/blob/d76cc6f9e3e635050d1f19c8e1acac86c21ee502/pilot/docker/Dockerfile.proxyv2#L13-L14
Unless I'm missing something there are 2 options to solve this:
Bump this value in the images for each build (This might break things? Perhaps @costinm can comment)
Rework the proxy-status flow to pull the version from something else.
@dwradcliffe Simple solution would be to just set the ENV correctly in the deployment manifests.
I've been running a set of proxies with the ENV set to 1.1.1 and it seems to be working fine. I also took a quick look at the code and I think the only thing using the value is the check to see if it's > "1.1". That is definitely the simplest solution.
We have upgraded from Istio 1.1.3 to 1.1.4
But even after restarting pods, istioctl proxy-status shows version 1.1.3.
When I check the pods with k get pods <pod_name> -o yaml we can see that istio-proxy image tag is 1.1.4.
ref: https://github.com/istio/istio/pull/8252
|
GITHUB_ARCHIVE
|
How to implement Custom Log Levels in CocoaLumberJack from Swift?
I am using CocoaLumberjack for a Swift project. I would like to implement custom log levels/flags, as I would like to use 6 rather than the default 5, and would prefer different names.
The documentation for doing this is not helpful. It is only a solution for Objective-C.
The fact that DDLogFlag is defined as NS_OPTIONS means I actually could simply ignore the pre-defined values here, create my own constants, and just write some wrapping code to convert from one to the other.
However, DDLogLevel is defined as NS_ENUM, which means Swift won't be very happy with me trying to instantiate something to say 0b111111, which isn't an existing value in the enum. If it were an NS_OPTIONS, like DDLogFlag, I could just ignore the pre-existing definitions from the library and use whatever valid UInt values I wanted to.
As far as I can tell, I just have to write some Objective-C code to define my own replacements for DDLogLevel, DDLogFlag, and write a custom function to pass this in to and access these properties on DDLogMessage. But this feels bad.
How can I use my own custom logging levels in Swift with CocoaLumberjack?
https://github.com/CocoaLumberjack/CocoaLumberjack/pull/1249
This is indeed only possible in Objective-C right now, and there only for the #define log macros. Even then, I could imagine that the "modern" ObjC compiler will warn about the types that are passed to DDLogMessage.
The docs are indeed a bit outdated here and stem from a time when Objective-C was closer to C than it is to Swift nowadays... :-)
Nevertheless, in the end DDLogLevel and DDLogFlag are both stored as NSUInteger. Which means it can theoretically take any NSUInteger value (aka UInt in Swift).
To define your own levels, you would simply create an enum MyLogLevel: UInt { /*...*/ } and then write your own logging functions.
Those functions can actually forward to the existing functions:
extension DDLogFlag {
public static let fatal = DDLogFlag(rawValue: 0x0001)
public static let failure = DDLogFlag(rawValue: 0x0010)
}
public enum MyLogLevel: UInt {
case fatal = 0x0001
case failure = 0x0011
}
extension MyLogLevel {
public static var defaultLevel: MyLogLevel = .fatal
}
@inlinable
public func LogFatal(_ message: @autoclosure () -> Any,
level: MyLogLevel = .defaultLevel,
context: Int = 0,
file: StaticString = #file,
function: StaticString = #function,
line: UInt = #line,
tag: Any? = nil,
asynchronous async: Bool = asyncLoggingEnabled,
ddlog: DDLog = .sharedInstance) {
_DDLogMessage(message(), level: unsafeBitCast(level, to: DDLogLevel.self), flag: .fatal, context: context, file: file, function: function, line: line, tag: tag, asynchronous: async, ddlog: ddlog)
}
The unsafeBitCast here works, because in the end it's just an UInt and _DDLogMessage does not switch over the level, but instead does a bit mask check against the flag.
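For illustration, a hypothetical call site, assuming the extensions and the LogFatal function above are compiled into the project:
// logged with flag .fatal; passes because the default level includes the fatal bit
LogFatal("Database connection lost")
// same flag, but evaluated against the broader .failure threshold
LogFatal("Disk full", level: .failure)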
Disclaimer: I'm a CocoaLumberjack maintainer myself.
We don't recommend using a custom log level in Swift. There's not much benefit from it and logging frameworks like swift-log also use predefined log levels.
However, I personally could also imagine declaring DDLogLevel with NS_OPTIONS instead of NS_ENUM. The OSLog Swift overlay also uses an extensible OSLogType.
If this is something you'd like to see, please open a PR so we can discuss it with the team. We need to be a bit careful with API compatibility, but like I said it's totally doable.
On a side-note: May I ask what you need custom levels for?
I will open a PR for NS_ENUM -> NS_OPTIONS for DDLogLevel and use that PR as a jumping-off point for discussing my needs. Mechanically, this is the smallest possible change needed to let me have my own custom log levels more easily, but I do not know whether this is the best overall change for CocoaLumberjack to accommodate my needs. GitHub PRs are a better location for back-and-forth discussion.
|
STACK_EXCHANGE
|
We as developers create a lot of pull requests from building new features to fixing bugs and typos.
When we create pull requests, it is important to optimize for easy review by other developers, and even by yourself at a later point in time. The goal of a PR is to keep it simple and clutter-free, and to allow others to review it without any cognitive overhead.
1. Provide clear context
- It is better not to make assumptions in the pull request, since the person reviewing it might not be aware of all the discussion that happened elsewhere, outside the pull request
- If there are links to slack discussions / other documentations do mention them too
- Explaining why the code change is needed and what it does helps
- Don't use big words or jargon if it is not needed
- If you do something new in the codebase explain it
- Follow some pull request template so that structure of the PR description is always the same and it is easy to find things
2. Write code spec document if possible
- When you are about to work on a big feature that would affect everything, you could write a simple spec document of the change you propose and get it reviewed by other devs in your team, to avoid surprises
- This allows you to avoid later disagreements and a complete rewrite after the code is written
- This might not always be needed but will help for very big features
3. Keep PR size small
- It is better to have as few changes in a PR as possible
- If the feature is big, rather than shipping it in a single PR, it might be possible to break it into smaller features and ship it incrementally (ship it in parts, at least to staging). That way the feature is easy to review and bugs are easier to spot, and if you find issues with the feature you don't have to rework it completely when everything is done
- If you find some other issue while writing the code create a different PR to fix the issue
4. Add Video / screenshots
- If you have visual changes add video or screenshots of the changes in the PR description
- If it is a bug fix provide screenshots of before and after the change. So that they don't have to jump to issues to see the bug
5. Host the changes
- Host the PR and provide a link for the reviewer to check out
- You could use automated deployments to create preview URL for PR in frontend codebase using Vercel or Netlify
6. Highlight critical changes
- If there is any critical change you could highlight them by adding a Github comment explaining why you did it and providing more context
- You could also add a small video of walkthrough through the code changes if needed
- If you think there is something to look out for, write that down too
7. Don't squash the commits once the review has begun
- Squashing commits during the review will make it harder to track changes for the reviewers
8. Automate repeated checks
- Using tools like prettier, Eslint to automate basic checks will help
- If there is an issue specific to the repo that comes up multiple times in different PRs, automate it if possible, either using a custom Eslint rule or some other tool, so that the reviewer can be sure the issue is handled and doesn't have to think about it
9. Don't get angry over a PR
- If the reviewer suggests some change and you disagree with it, you can disagree in a respectful way, explaining your case. It is not worth getting angry over a PR; it is just a code change the reviewer disagrees with.
- If you learned something new from the code review give a shoutout to the reviewer :)
|
OPCFW_CODE
|
Xubuntu: those who want a plug-and-play operating system for an old laptop that can run on minimal resources should try Xubuntu. I found it to be much faster than the Windows 7 the machine was running before. It is designed with the aim of reviving old computers. CrunchBang++ seems less active than CrunchBang (or "#!"). This is a show stopper for me. OpenOffice is my current weapon of choice.
Because Puppy Linux is built to be fast, it does not come with bundles of applications. TinyCore saves on size by requiring a wired network connection during initial setup. Linux Lite already comes with lots of default applications, including the Thunar file manager, Ristretto image viewer, GIMP image editor, Mozilla Firefox web browser, Thunderbird email and news client, and the LibreOffice office suite. Previously the graphics chip had been supported by the fglrx driver, but no more. Based on a rolling-release model, Arch strives to stay bleeding edge, and typically offers the latest stable versions of most software. With its use of Xfce and the inclusion of a full complement of software, Linux Lite makes for an outstanding distribution for new users working with old hardware.
Are you going to use it for everyday browsing? Do not ignore this window; it can be very helpful. Do you think that Lite might still work with it? Why not revive your old computer with Linux? Knoppix is another live, lite Linux distro, based on Debian. While our focus is on older computers, you can also use most of these lightweight Linux distributions on relatively new hardware. Now why would you want to use lightweight Linux distributions on newer hardware? An operating system is a software program that enables the computer hardware to communicate and operate with the computer software.
These Linux operating systems are less hungry for hardware resources, so you can bring an old laptop back to life with full compatibility and functionality. The Linux Lite pack comes with LibreOffice for your office document needs, Firefox to browse the internet, and the Thunderbird email client built in. The latest development release is Linux Lite 2. The project offers three releases: Standard, Legacy, and AppPack. You can also access the app store, which lets you choose from thousands of apps and utilities. It bundles a nearly full range of multimedia content creation applications for workflows involving audio, graphics, video, photography, and publishing.
Linux Lite is an operating system specifically designed to introduce Windows users to Linux. It is based on Ubuntu and has two editions to choose from. The idea was to dispel the myth that Linux is hard to use. Coming back to the hybrid of cloud infrastructure, it comes with custom-made Ice applications for many tasks. The Linux Lite distro comes plug-and-play, ready to run out of the box. It comes loaded with all the popular and useful applications.
You can of course install your favourite applications if you need to. A little bit of history, availability and boot options: the project provides a flexible and multi-functional Linux-based operating system with exclusive software. The laptop also could not properly boot with ACPI enabled on any distro I tried, so disk performance was severely handicapped as well. The most recent version at the time of writing is 16. Have you kept your old computer somewhere in a rack? Everything we do, including Grub, is set up to work as a dual boot with Windows. Installation of open source apps can be done on any server and on the PC itself; all you have to do is choose the apps or games you need from the repositories.
One of the greatest aspects of Linux is its flexibility: it can be whatever you need it to be. I find it a very well thought out distro that works well for me. I started Linux with Slackware 0. Once you replace Windows or Mac on an old slow laptop with a lightweight Linux distro, you can revive these laptops to a new life, and they can fly. If you have an ancient computer then try this out and see the magic. In all these desktop environments Manjaro works like a charm. It is another of the best Linux distributions of 2018, based on Ubuntu but with a focus on being lightweight and unbloated, and it provides all the necessary features you want in your daily life.
Instead of a distribution based only on Ubuntu, Puppy offers releases based on Ubuntu and Slackware. There are also graphical tools to manage Samba shares and set up a firewall, for example. Debian has a wide range of software to choose from, which is, of course, free of cost. When you decide to switch to Linux, remember that there are plenty of resources available online and a helpful Linux community to ease your transition. On top of that are all the basic tools. Doing so will be more of a learning curve compared to some desktop environments. One of the best things about Linux is the range of choices available when it comes to desktop distributions.
|
OPCFW_CODE
|
I am by training a computer scientist and software engineer, which means I’m somewhere in the vague nether world among mathematicians and logicians, computer programmers and writers. (A lot of computer science students are shocked and appalled at how much of their real work is writing for people to read rather than just for a computer to execute.)
Now, I think the things I learned in grad school are both very useful to a working software engineer and are beautiful and rewarding in themselves. But I spend a lot of my time now tutoring computer science students, and I’ve come to realize there is a whole body of knowledge that isn’t necessarily mathematical (even though it is built on mathematics) and that isn’t learned through multiple choice exams. It’s not what programming is based on, it’s what programmers do. It’s a skill. It’s a craft.
Like plumbing and welding, it’s a trade.
I don’t think a lot of people recognize this. Computer Science departments dare not: why, then they’d stop being an academic topic, they’d be a trade school. Shame too great to be borne. (And, in fact, one of my Duke professors, a brilliant man, honestly, didn’t program computers at all, and in fact, he needed a grad student to help him with his email.)
Of course, the (relatively) new phenomenon is programming “boot camps.” Like Thinkful and RefactorU (RIP) where I’ve taught in the past, a bootcamp is a concentrated program that takes people from other backgrounds and teaches them to code well enough to take an entry-level job in just a few weeks.
Now, that doesn’t make them computer scientists or even finished software engineers, but it makes them good coders. Good enough that I’m seeing increasing numbers of graduates of computer science and electrical engineering departments who then go to a bootcamp so they can actually learn to code.
Which raises a question in my mind. It’s an admission against interest, I suppose, since I make a substantial part of my income by helping college students who can pass the multiple choice tests but haven’t learned to write programs, but — well, what are college students in college for? Snazzy dorms and hookup culture, sure, and I would have loved that when I was an undergrad. But my total tuition and fees and room and board in 1973 at the University of Colorado was $1000. Total. (That’s about $5700 in 2019 dollars.)
In-state tuition and fees alone are over $11,000 this year; out of state, $34,125.
That’s $130,000 for a four-year degree.
So that’s the question I keep asking myself. Now, I really liked college — I must have, I did a total of something like 13 years in college. But I was able to pay for it myself, either working as a programmer or later with graduate assistantships.
When you are paying $40,000 to $130,000 for a bachelor’s degree and you still need a bootcamp to learn to be a working programmer, is a computer science degree worth the money?
Is any degree?
|
OPCFW_CODE
|
Java and C# are similarly easy to decompile. Many games are written in Java (e.g. Android) and C# (e.g. Unity), and there are a lot of modders/hackers using decompilers to obtain usable source code for games written in these languages.
Is it illegal to decompile a game?
Decompiling is absolutely LEGAL, regardless of what the shills say. At most, you can be sued for unauthorized activity relating to the software, unless you're redistributing it.
Is it possible to decompile Unity Games?
They are already compiled and, to the best of my knowledge, there isn’t a tool to decompile them into Cg / HLSL. Despite this, they can still be imported into another project and they’ll work just fine.
Is it possible to reverse engineer a game?
Is reverse engineering legal? Yes, in fact there are many cases where the courts have sided with the reverse-engineer when it comes to anti-competitive practises. If you are interested there are a few court battles that are relevant: SEGA vs Accolade.
Can you decompile an EXE?
Yes, it is possible to decompile .exes using tools like the dcc decompiler. This will produce good results if the original program was written in C. If it was written in another language then you may have to try another tool suitable for that language.
How do I find the source code of a game?
Look on the game website or wikipedia to see if it is open source (meaning the code is available.) If not, try contacting the game creator by looking on Wikipedia for his/her/their email or phone, and ask them if you can use it. If they say no, then you probably can’t get the code.
Can I decompile a third party code?
I see this question is still driving traffic to my blog, so I’ll add an answer: yes, debugging 3rd-party assemblies is now possible with the JetBrains dotPeek decompiler (completely free), by using it as a Symbol server.
Is getting source code illegal?
No, reverse engineering is an allowed practice in the U.S. AFAIK it does not violate copyright laws on the source code. However, distribution of the actual source code for the app, if you do not have the rights to it, is a huge violation still.
Can we decompile third party code in Infosys?
It is your right to decompile any software you purchase or freeware you download, as long as you do not redistribute it or sell it to third parties. It is also legal to talk about your discoveries.
Can you open games in unity?
You don’t open them. They aren’t source files. DLLs are compiled. If you want to view the source to a game, you will need to contact the developer of the game and request it.
How do I rip a Unity game?
Go to the Unity Game’s Folder GAMENAME_Data. File > Load File/Folder select the file/folder with the assets you wish to open. Once Selected go to Model > Export Selected 3D Objects. They export in FBX format, so use Noesis to convert to dae if your preferred program can’t open it.
Is it possible to reverse engineer source code?
You can reverse, but it’s not the same. Source code is often formatted with whitespace and comments, which don’t matter to the computer, but makes it readable to humans.
Is reverse engineering legal?
Reverse engineering is generally legal. In trade secret law, similar to independent developing, reverse engineering is considered an allowed method to discover a trade secret. However, in patent law, because the patent owner has exclusive rights to use, own or develop the patent, reverse engineering is not a defense.
Can Java code be reverse engineered?
If you are developing a Java application, it is important to understand that the Java class files can be easily reverse-engineered using Java decompilers. … Java source code is compiled to a class file that contains byte code. The Java Virtual Machine needs only the class file for execution.
Can you decompile C++?
Yes, but none of them will manage to produce readable enough code to worth the effort. You will spend more time trying to read the decompiled source with assembler blocks inside, than rewriting your old app from scratch.
|
OPCFW_CODE
|
3.6: Derivatives of Logarithmic Functions
This section is great because it contains two of my favorite things: a useful tool and a cute proof. Let's dig in.
First, we start off with a proof of the general case of derivatives for any log function. The proof is one of those great proofs where you mutter “well played Professor Stewart” after reading it. It’s short, simple, clear, and builds upon a number of things you learned in earlier chapters.
You’ll notice that the derivative of is proprtionate to itself multiplied by ln(a). You may be saying to yourself “where the hell did ln come from? That seems like magic.”
Recall from earlier that the derivative of an exponential function is always equal to itself times some constant. Recall in addition that we (we being several centuries of mathematicians, plus you and me) agreed that if we found the right base, the proportionality constant would equal 1, and then we called that base e. This amazing and amazingly convenient number was used to determine the character of the constant; because of that, it ends up being log-base-e of a. It could have been defined by some other, more convoluted method, but the cool thing about e is that it simplifies things. So the take-home here is this: as usual, the e is not there by magic. It's by convention. Mind you, it's not an arbitrary convention, any more than having 2 headlights on a car is arbitrary. It's a human choice, but it's an informed, smart choice.
Make sense? Sweet.
So, the cool thing is that, since it’s defined in this ln(a) form, as long as the base is e (that is, a=e) shit gets super simple. The derivative of ln(x) is just 1/x.
The examples that follow are, I think, pretty simple. But, there are two cool lessons worth noting.
1) (ln(something))’ = (something)’/(something)
This is fairly obvious, but worth remembering since it helps mental math.
2) (ln|x|)’ = 1/x
This one’s pretty amazing. I say that because absolute value can often be a real bitch. This gives you a nice way to deal with it.
The book makes logarithmic differentiation seem a little more complex than it is, in my opinion. That said, it's a really cool concept.
Here’s the basic idea: say you have an equation of the form
y = (some big ugly combination of exponential expressions)
Using the magic of natural log, you can convert that to
ln(y) = (some big ugly combination of polynomials inside natural logs)
This may make things easier to deal with, since you can just do an implicit differentiation, which converts the left side to y’/y. In other words, ln(y) is about as easy to deal with as y alone. So, if the ln will simplify the right side, it might be a good move. In example 7, you could do the product rule a couple times to expand it out and keep things simple, but that’d be really ugly. By using the log technique, things are… well… not pretty, but a lot less hideous.
Example 8 is even better. It takes a problem that seems pretty intractable and converts it to about 3 lines of simple math. Okay, so the result is surprisingly ugly, but you got what you needed.
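If you want to try the move yourself on something tiny, here’s the classic toy case (my example, not the book’s): y = x^x, which neither the power rule nor the exponential rule handles directly.

\ln(y) = x\ln(x)
\frac{y'}{y} = \ln(x) + x\cdot\frac{1}{x} = \ln(x) + 1
y' = x^{x}\left(\ln(x) + 1\right)

Three lines of simple math, same trick as the examples above.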
Next stop: e as a limit.
|
OPCFW_CODE
|
Robert Banick05/19/2023, 3:50 PM
Now setting up Prefect I’m having trouble designing a similar system. I’m using Prefect 2 on AWS Elastic Container Service (Fargate) tasks to implement ETL runs. We install our ETL repo as a library onto a docker image that the block Task Definition uses. I’ve tried replicating our previous system by running
pip install --upgrade git@<repo>@<branch>
to upgrade the package in question at the very beginning of my flow. This works and I’m even able to see that the function I’m modifying is indeed updated. Nevertheless, when my flow run reaches the crucial step I’m testing, it very clearly uses (and fails on) the “old” function currently in the ETL repo.
Forcing reloading the repository in question via … does not appear to resolve the problem. My questions therefore are: 1. Is it possible to change a library mid-flow like this or is it a hard limitation of Python / Prefect? 2. Is Python code used by flows somehow installed somewhere different from … on the container? Such that pip would install in the wrong place… 3. If it’s not possible to change a library mid-flow, is it possible to have the Prefect Agent/Worker run the pip installs prior to spinning up the flow? Could the Agent even read the desired branch names from the flow parameters? 4. Any other ideas? The nuclear option here is manually changing the branches on docker images but that’s very clunky and will make iterative testing extremely time consuming. So we’d really like to avoid that path. All help and suggestions most appreciated, Robert
Zanie05/19/2023, 3:59 PM
option and it’ll get installed before your flow is loaded
Robert Banick05/19/2023, 4:03 PM
Austin Weisgrau05/19/2023, 4:08 PM
, but probably easier to delay the import until after the correct source code is in place
Robert Banick05/19/2023, 4:12 PM
… variable within the … of a deployment? Or better to modify … of the ECSTask Block?
Austin Weisgrau05/19/2023, 4:13 PM
from prefect import flow, task

@task
def my_task():
    import mypackage
    mypackage.foobar()

@flow
def myflow():
    reinstall_package()
    my_task()
Zanie05/19/2023, 4:21 PM
subprocess.run("pip" ...) import ...
yourself in the command at that point.
Robert Banick05/19/2023, 4:25 PM
… to deployment does not work. The … method won’t work — Python seems to line up a snapshot of all the libraries it’s going to import at runtime and changes afterwards don’t really register. The … route is more promising but we can’t get it working with … — possibly we are mis-specifying so could be user error. @Zanie could you explain in a bit more detail what you meant w/ regards to entrypoints not being respected? Where would I modify the container command — on the agent? Sorry if this question is naive, I’m quite new to both Prefect and AWS land,
Zanie05/19/2023, 4:40 PM
lets you configure the
to enter our engine
python -m prefect.engine
pip install … && python -m prefect.engine
/opt/prefect/entrypoint.sh python -m prefect.engine
bash -c "…"
Robert Banick05/19/2023, 4:44 PM
in the ECS Task code I’m now getting the below error
Submission failed. RuntimeError: Timed out after 120.72852396965027s while watching task for status RUNNING
command=["pip","install","git+<https://github.com/Arbol-Project/gridded-etl-tools@popen_IO>","&&","python","-m","prefect.engine"], : Flow run infrastructure exited with non-zero status code 2. command=["/opt/prefect/entrypoint.sh","python","-m","prefect.engine"] : `Submission failed. prefect_aws.ecs.TaskFailedToStart: CannotStartContainerError: ResourceInitializationError: failed to create new container runtime task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/opt/prefect/entrypoint.sh": stat /opt/prefect/entrypoint.sh: no such file or directory: unknown` command=["bash","-c","\"","pip","install","git+<https://github.com/Arbol-Project/gridded-etl-tools@popen_IO>","&&","python","-m","prefect.engine","\""] : Flow run infrastructure exited with non-zero status code 2.
Zanie05/19/2023, 5:28 PM
["bash", "-c", "pip install git+<https://github.com/Arbol-Project/gridded-etl-tools@popen_IO> && python -m prefect.engine"]
Robert Banick05/19/2023, 5:29 PM
commands, but here that’s not the case
block Task Definition uses.”
command=["bash", "-c", "pip install git+<https://github.com/Arbol-Project/gridded-etl-tools@popen_IO> && python -m prefect.engine"]
works, thank you very much! Hard coding the repo and package is not the optimum workflow here so I’d love to get the … command working. Since we’re using our own docker images, would you suggest replicating the script as part of our docker image setup so we can run the
pip install $EXTRA_PIP_PACKAGES
component of it?
Zanie05/19/2023, 6:07 PM
you can create custom template variables for those and they’ll show up in the UI — a little advanced but might be what you need.
Robert Banick05/19/2023, 6:10 PM
Zanie05/19/2023, 6:27 PM
Robert Banick05/19/2023, 7:49 PM
into the command of our ECSTask, like so:
command=["bash", "-c", 'if [ ! -z "$EXTRA_PIP_PACKAGES" ]; then pip install $EXTRA_PIP_PACKAGES; fi && python -m prefect.engine'],
This allows us to use $EXTRA_PIP_PACKAGES overrides from within the UI as if the script were being run. Thank you so much for your support @Zanie, I would have lost days on this before I figured it out otherwise!
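For anyone landing here later, a rough sketch of the same thing done when registering the infrastructure block in Python with prefect-aws (the image, repo, and block names are placeholders, not values from this thread):

from prefect_aws.ecs import ECSTask

ecs_block = ECSTask(
    image="my-org/etl-image:latest",  # placeholder image
    command=[
        "bash", "-c",
        'if [ ! -z "$EXTRA_PIP_PACKAGES" ]; then pip install $EXTRA_PIP_PACKAGES; fi'
        " && python -m prefect.engine",
    ],
    # Placeholder default; override per flow run from the UI as discussed above.
    env={"EXTRA_PIP_PACKAGES": "git+https://github.com/org/repo@branch"},
)
ecs_block.save("etl-ecs", overwrite=True)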
|
OPCFW_CODE
|
This year I’ve had the pleasure of participating in a team at MIT identifying characteristics that make patients likely to be intensive care unit “Frequent Fliers” - patients with multiple ICU admissions in a short time span. This series will explore the implementation of text-based patient phenotyping we use as a first step towards this goal.
Patients who experience frequent Intensive Care Unit (ICU) re-admission (“Frequent Fliers”) are at high risk of negative outcomes, even relative to other ICU patients (already at ~10-20% mortality rates), with observed mortality rates of 40% or more [1]. In addition to the impact on the patients themselves, these patients account for an estimated 50% of ICU costs, despite only making up approximately 5% of the ICU population [2].
While the problem of Frequent Fliers is widely recognized, the solution is elusive, as the causes are varied and complex. In some cases the traditional health system could play a larger role, by highlighting comorbidities or complications that may call for more intensive discharge disposition planning or other interventions. Other cases, however, stem from socio-economic causes such as food or housing insecurity, psychological problems, or other issues that our current health system is poorly suited to address. One representative account describes a patient with several comorbidities, primarily congestive heart failure (CHF) and chronic kidney disease (CKD), complicated by psychological issues [3]. Pilot programs have begun to implement more holistic responses [4], but these are far from common.
Last year I was able to attend Dr. Leo Celi’s Secondary Analysis of Health Records course [5], and had the good fortune of pairing up with a great team of students and physicians to investigate these problems further. We’ve recently published our first paper on arXiv [6] in which we describe a deep learning method for extracting frequent-flier-related patient phenotypes from free text notes. This is an important first step in investigating the problem of frequent fliers, as many of the concepts that contribute to this problem (e.g. medication non-compliance, substance abuse) are poorly represented in structured data elements.
The general approach taken was as follows:
The team clinicians identified 10 patient phenotypes that are recognized for being contributing factors for ICU readmission, while also being difficult to assess from structured data. Examples include chronic pain, alcohol abuse, depression, and medication non-compliance.
Discharge summaries and nursing notes were extracted from MIMIC 2, and a random sample of ~1000 notes were inspected by the team clinicians and annotated with the presence or absence of the determined clinical concepts.
As several concepts have a low prevalence in the patient population, an imbalanced class problem arose. To address this we sought to increase the number of positive examples in our annotated dataset. Classifiers were created using our already-annotated notes, using ICD9 codes as inputs to identify patients with an increased probability of having notes with our concepts. ICD9 codes were used as they were not used elsewhere in this analysis, and so may reduce the potential for an Ouroboros issue of the analysis output contributing to the input. This classification task is described in the ICD9-based encounter classification series. Notes classified by those algorithms as likely positives were extracted, annotated, and added to our dataset.
Word embeddings were trained, using the gensim implementation of word2vec (a minimal sketch follows this list). As word2vec is a completely unsupervised method, we were able to train embeddings using notes from all ~50,000 patients, not just the ~1,000 we’d annotated. This greatly improved the quality of the calculated embeddings.
Rules-based concept discovery (based on the cTakes tool) was used as a baseline for further algorithm comparison.
A deep learning model was defined and trained on our dataset, and compared to the rules-based method.
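To make the embedding step concrete, here is a minimal sketch of the gensim call referenced above (the toy sentences and parameter values are illustrative, not the paper's settings):

from gensim.models import Word2Vec

# One token list per clinical note; real preprocessing would tokenize
# the MIMIC discharge summaries and nursing notes.
sentences = [
    ["pt", "admitted", "with", "chf", "exacerbation"],
    ["chronic", "pain", "poorly", "controlled"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimensionality (gensim 4.x; older versions call this `size`)
    window=5,         # context window around each token
    min_count=1,      # keep rare tokens for this toy corpus
    workers=4,
)
vec = model.wv["chf"]  # dense vector for a single token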
The series focuses on the development of the deep learning model, and is broken into several sections:
- ICD9-based phenotype classification, covered previously
- Word2Vec embedding training
- Deep-learning phenotyping implementation in Keras
|
OPCFW_CODE
|
[WIP] feat(formats) : add possibility to convert to multiple formats
Add support to convert a source image into multiple formats. Also introducing webp as a valid format (only when using sharp).
This closes #42.
This is a work in progress, I just wrote the code and didn't even run it once, I'll finish it next week end.
Meanwhile we could discuss the new inputs/outputs; here is how I did it for the moment:
Inputs
format: webp is now a valid format if sharp is used.
new formats option taking an array of formats. format should probably be deprecated.
Alternative: make format more flexible, accepting both a single format and an array of formats.
By convention the fallback and placeholder type are of the first format passed in.
Outputs
srcsets: array of srcset and mime types [{srcset: string, mime: string}]
imagesByMime: array of images and mime type [{images: [string], mime: string}]. I've named it that way temporarily and to avoid a breaking change with the current images.
srcset and images contain the srcset and the image array of the first format.
Hey @ghetolay, that looks great! Thanks for the contribution!
Does the option to convert to multiple formats in one pass add that much value compared to importing the same image multiple times? I'm a little concerned about imagesByMime or changing the structure of images.
The main reason for my concern is that depending on if formats is used, the output of the loader would change from a flat list of images to one grouped by mime type. The alternative would be to always group by mime type but I don't think that's very user-friendly in the single-format case.
Is picking the first format in the list as the "main" top-level image a good idea (shouldn't it be the last one?)?
These are just preliminary thoughts, I'm totally open to discussing this further. Maybe it would help if you could outline some examples of how multiple formats will be used in an app.
Could you maybe separate adding support for webp as an option for format and multiple formats into different PRs? I think the first one we can merge right away (once there are tests) and the multi-format option needs a bit more discussion.
Importing the same image multiple times will make the loader decode the image each time; depending on the size and number of the images, this could have a real impact.
We could get rid of imagesByMime and just append everything to images if you want. I'm not sure what the use of that images property is; that's why I didn't know how to handle it.
Which brings me to a new PR I may submit before this one. What do you think about adding the possibility for the user to generate the output?
I'm a bit concerned about the output because it's what's gonna stick in prod files, so I would like to tailor it to my needs and remove unused properties (especially big ones like images).
We could add a new generateOutput option expecting a function easily.
Is picking the first format in the list as the "main" top-level image a good idea (shouldn't it be the last one?)?
Well, I went with the first because you're already using the first image to populate the width/height properties. I don't know which one makes more sense, because in the end it depends on the order in which the user defines formats and sizes.
Maybe it would help if you could outline some examples of how multiple formats will be used in an app.
My use case is to generate a <picture> dynamically, creating a <source> per mime type and an <img> fallback. So the properties I'm interested in are srcsets and src.
Could you maybe separate adding support for webp as an option for format and multiple formats into different PRs?
Yes totally, will do.
Please integrate the webp support :)
What work is outstanding, how can I help?
Please merge! I need webp support.
@ghetolay can you please fix the conflict so this can get merged.
Or @herrstucki can you please take a look at this and probably re-create the PR as it may not be fixed by @ghetolay and be lost. This is a really useful PR and as you see there is a demand from other people to use it. Thanks.
For those that are waiting for this change, I made similar changes to enable webp support on https://github.com/dscafati/responsive-loader . You have to run "yarn add dscafati/responsive-loader" to enable it (you can also use npm).
Your package.json will look like this:
{ ... "dependencies": { ... "responsive-loader": "dscafati/responsive-loader", "sharp": "^0.24.1" ... } ... }
Notice that you also need an up-to-date version of the sharp dependency.
Then you can use require() or import and include format=webp in the query string of the path you are importing, e.g. require('./image.jpg?format=webp')
I only added the webp mime type to the list of supported mime types, and registered the webp extension for these images. I didn't add any validation, e.g. if you are using jimp you won't see any warning.
|
GITHUB_ARCHIVE
|
Collaboration, Conflict & Consent workshop: Welcome!
Cecil Rhodes Statue: Proposals
In pair/group or on your own, propose what to do about the Cecil Rhodes statue on the front of Oriel College. Should it be removed, kept as it is, or some other creative solution? If you can, include a quick sketch of your idea. Publish your proposal somewhere online and post a link to it here:
Deciding between the proposals
Links & references
- Rhodes statue
- 'Gay cake' scandal
- For analysis, see:
- ‘No Platforming’ controversy.
- UK National Union of Students policy: https://www.nusconnect.org.uk/resources/nus-no-platform-policy-f22f
- In defence of no-platforming (6min): https://www.youtube.com/watch?v=bmCpKDgj7Mg
- A critique of no-platforming:
1.3 THE TROUBLE WITH GENIUS
- Theories of authorship
- For references see: Greenhalgh, E. (2010) ‘Open Wide’ dissertation http://pzwiki.wdka.nl/mw-mediadesign/images/3/3b/OpenWideDissertation.pdf
- And: Daniel Defoe, quoted in: Rose, M. (1994). Authors and Owners: The Invention of Copyright. Cambridge, Mass: Harvard
- Image: memorial painting of William Lee (d.1637) and his 197 descendants: one of the earliest known ‘family tree’ paintings. In St Helen’s church Abingdon-on-Thames. Photo: Rex Harris, Flickr
1.4 OPEN WIDE
- Video – Human Microphone https://www.youtube.com/watch?v=tvJqLo_o7AM
- Video: Carl Rogers on non-directive listening https://www.youtube.com/watch?v=m30jsZx_Ngs
- Activity: active listening
- Rogers, C. (1990) Client Centred Therapy
- Rogers, C. (2004) On Becoming a Person
1.5 CREATIVE ASSIGNMENT
2.1 DECISION-MAKING SYSTEMS
- Some examples of decision making systems
- Activity: testing out different decision making systems
2.2 THE GROAN ZONE
- Theory of consent-based decision making + critiques of 'final outcome'
- For project examples, see http://eleanorg.org/art
- For theory, look up Seeds for Change web resources.
- For writing on consent, see Yes Means Yes (2018) incl Millar's essay 'Towards a Performance Model of Sex' https://ducttapedance.wordpress.com/2011/05/01/toward-a-performance-model-of-sex-by-thomas-macaulay-millar/
- Assignment: further prototype a solution OR decision-making system to decide Rhodes statue fate.
3.1 PSYCHOLOGY OF POWER & CONFLICT
- Theory: the need to consider operations of power & roles in groups.
- Reference: Freeman, J. (1970) The Tyranny of Structurelessness.
- Theory: Transactional Analysis
- Video & discussion: analysis of role-playing
- History today https://www.youtube.com/watch?v=aEQcsuXnnnc
- John Bercow ‘Order’ compilation: https://www.youtube.com/watch?v=EY7EIZl4raY
- Young ones Cornflakes https://www.youtube.com/watch?v=TLwc9lbJlIQ
- Rebecca’ (film version) Clip (starts 3:24): https://www.youtube.com/watch?v=EH2vljLgNvU
- ‘50 Shades of Grey’ (film version). https://www.youtube.com/watch?v=XFK5SV1-Pzg
- References re: power dynamics and online meetings
- 2011 corporate video on how not to do video conferences https://www.youtube.com/watch?v=AjqKiLQ771M
- 2020 NYT article on gender differences in digital meetings https://www.nytimes.com/2020/04/14/us/zoom-meetings-gender.html
- And a research paper it cites, debunking the idea that digital communication (in text form) is a leveller between genders: https://ella.sice.indiana.edu/~herring/herring.stoerger.pdf
- Blog post manifesto for more equitable online meetings, with discussion of power/privilege https://aspirationtech.org/files/AspirationPowerDynamicsAndInclusionInVirtualMeetings.pdf
- Academic article on power dynamics in distributed virtual teams https://www.researchgate.net/publication/234798464_Who_shouts_louder_exerting_power_across_distance_and_culture
- On adult/parent/child modes: Harris, A. ‘I’m OK- You’re OK’.
- Article: intro to parent/adult/child roles
- On Drama Triangle: Intro video (4.5mins):
- Theory: Attachment & infant needs
Some useful references on attachment & infant needs, met & unmet:
- Ainsworth, M.: ‘The Strange Situation’ experiment
- Horney, K. ‘Our Inner Conflicts’
- Horney, K. ‘New Ways in Psychoanalysis’
- Bowlby, J. ‘Attachment’
- Easy intro to attachment styles: https://www.youtube.com/watch?v=QP-nPJbTgTs
- Theory: Splitting & Projection
- On transference & projection: Freud, S. ‘Introductory Lectures on Psychoanalysis’
- Video: Introduction to Splitting & Projection (3min): https://www.youtube.com/watch?v=F3hzrDDBx-Y
- For an overview of theory on splitting, see: https://en.wikipedia.org/wiki/Splitting_(psychology)
- Video: Transference in Daily Life & Relationships (22min): https://www.youtube.com/watch?v=QDd7iJxn370
- Theory: Drama Triangle
- cf ‘St Michael & the Devil’ sculpture by Jacob Epstein, Coventry Cathedral. (Persecutor becomes victim; rescuer becomes persecutor.)
|
OPCFW_CODE
|
Hi @mjbcruz ,
I made a test: I created a PowerApps custom form with a flow, and it worked for other members. The only difference is that mine requires the permission of the SharePoint connection, but yours does not. I believe this is the cause of the issue.
I suggest you try to share the Flow to other members to check if this issue is fixed.
1. Go to Power Automate, select the Flow you used
2. Click Share, add a User or Group
If this still doesn't work, please re-create a flow and try again to check if your issue is fixed.
I am trying to call a flow from my PowerApp. This flow will add an attachment to the list item, but calling this flow from the PowerApp button is generating an error.
Can anyone please help me find out the solution for this issue?
I'm also running into the same issue, popped up yesterday out of nowhere and I can't seem to find a fix that works. So far I've tried:
So far none of these have worked, but according to most of the solutions out there removing / re-adding should have done the trick. I have no problem running the flows from the PowerApp Studio, the issue only pops up when I use the PowerApp from a mobile device.
Same here. This error started to pop up yesterday afternoon and other shared users of the app are unable to execute the flow. The issue is replicable in brand new apps and flows that I recreated.
I've got this issue on 1 of 2 buttons inside the same app.
One button triggers a flow to get a SharePoint PDF and show it back in the app (failing)
The other button gets a docx from SharePoint, creates a new file in OneDrive, and saves a PDF version back to SharePoint, so both are connected to SharePoint and only one fails, which makes no sense.
Strangely, Button 1 that is failing works when I'm in the browser in design mode but fails to work on mobile.
What I've done:
I've rebuilt the entire flow from scratch
I've deleted the SharePoint connection from the flow and re-added
Deleted the flow button connection with the PowerApp (multiple times)
Saved the app, re-published
Wiped cache and logged out and back in on mobile
Forgot to mention, I'm using the "Get file content using Path" SharePoint connection in Button 1.
Also, Android mobile.
I resolved my issue by doing the following:
1. In Flow, created a new connection within the SharePoint block of my flow. Select the 3 dots, new connection.
2. Then in PowerApps, removed the Button action from connections (make sure you copy your code on the button action as when you reconnect the button to the flow, it wipes all the code out!).
3. Re-added the button action to connect back to my flow, pasted in my copied code from above step, saved and published.
4. Happy days!
Same problem here - invoking a PowerAutomate flow from a Powerapp button. Works fine when testing through powerapps studio but all users get the same error when doing it from the app (only tested on android so far).
The flow creates a new attachment for a list item in sharepoint.
SharePoint connector is the same one being used across the app.
Actions I've taken:
Nothing fixed it.
Any feedback from MS on this one? We have the same issue (it started this morning without any modifications to the app). Our app triggers a flow with a connection to a SQL db. When using the app and triggering the flow via a browser all is working fine, but via either the mobile Android or iOS apps it throws an error.
|
OPCFW_CODE
|
do not put minty gum on your nipple!!!! i repeat do noT PUT MINTY GUM ON YOUR NIPPLe
why not? i want to try it
DO NOT PUT MINTY GUM ON YOUR NIPPLE UNLESS YOU WANT TO EXPERIENCE SATAN LICKING YOUR NIPPLE THEN A DRAGON BREATHING FIRE ON IT
i wanna put minty gum on my nipple
After a long-fought battle in Australia, a python bested a crocodile and swallowed the reptile whole over a span of several hours in Queensland, Australia.
The snake reportedly fought the croc for five hours in Lake Moondarra. Winning the fight, the python constricted its prey to death. The estimated 10-foot snake then dragged the 3-foot croc ashore and proceeded to swallow it whole in front of a group of onlookers.
National Geographic identified the snake as an olive python and the croc as a Johnson’s crocodile, both of which are native to Australia. After its hefty meal, the python should be full for at least a month.
(Source: The Huffington Post, via lumos-light-nox-night)
dean i see ur leg slip
dean do u think ur angelina jolie
(Source: casterlyrox, via thetomboywithheadphones)
Don’t tell me. We’re about to go over a huge waterfall
sharp rocks at the bottom?
bring it on
(Source: teaandbenedictcumberbatch, via thetomboywithheadphones)
A CROW TRIED TO GO IN OUR CLASSROOM AND HE HAD A PEN
yes hello i am here to learn geometries
That crow is more prepared than some of my students.
You’ve all just like, completely skipped over the possibility that this crow has seen people using pens in this room, found one, and is trying to return it. There’s been videos of crows picking up sweet wrappers and stuff and placing them in bins after seeing humans put their litter in bins. I really do believe that this crow is trying to return the pen and that is ADORABLE AS HELL.
THEY ARE SO SMART I LOVE THEM
Crows are thought to be self aware by some scientists. It’s perfectly possible the crow wants to return the pen to humans, knowing it belongs to humans.
Corvids. Who KNOWS. :)
Another cool crow deal: Once, when trying to assess if crows could reason and use tools, scientists had two crows who didn’t know each other each take a wire from a table (one was hooked, one was straight) and try to grab meat from a bottle with it. The crows could see each other, though they had separate bottles. Only the straight wire worked for this, so they hypothesized that if crows could reason, the second trial would have the two crows fighting over the straight wire. The second trial started and, to the surprise of the scientists, the two crows both went for the bent wire, one held it down and the other unbent it. They both got meat out of their bottles. They came to a peaceful solution without verbal communication. Crows are probably smarter than we are.
Crows are definitely smarter than humans
(Source: sickpage, via lumos-light-nox-night)
|
OPCFW_CODE
|
When Larry Cashdollar, a security researcher for Akamai’s Security Intelligence Response Team, found that it was possible to upload malicious files to the server using Blueimp’s File Upload jQuery plugin, (CVE-2018-9206), it started him “down a rabbit hole” investigating other projects.
While the code in this particular plugin has been fixed, there are many software projects relying on Apache's
.htaccess to protect them, not realizing that security control is no longer available by default, Cashdollar warned.
While investigating Cashdollar's report, the plugin’s developer, Sebastian Tschan, discovered the problem was related to the changes Apache made in its Apache HTTPD server nearly eight years ago. The plugin relied on a custom
.htaccess file to restrict the permissions on the server’s upload folder, a common-enough practice. The problem was that starting with version 2.3.9 (November 2010), Apache HTTPD had switched on a new setting that allowed server administrators to ignore custom (user-owned)
.htaccess files. With this new setting, it didn’t matter what permissions the plugin defined in the
.htaccess file because the web server ignored them, with the unfortunate result that the upload directory was left unprotected. In this case, malicious individuals could potentially upload backdoors and web shells through the PHP application and compromise the server.
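For context, the switch in question is Apache's AllowOverride directive. A rough sketch of the situation (the directive and default are real; the path is illustrative):

# Since Apache HTTPD 2.3.9 the server default is:
#   AllowOverride None
# so per-directory .htaccess files are silently ignored.
# An administrator must opt back in explicitly, for example:
<Directory "/var/www/app/uploads">
    AllowOverride All
</Directory>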
“The internet relies on many security controls every day in order to keep our systems, data, and transactions safe and secure,” Cashdollar wrote. “If one of these controls suddenly doesn't exist it may put security at risk unknowingly to the users and software developers relying on them.”
[Developers] don’t realize that the [security] control is gone, and that their application has been vulnerable for years.
It makes sense for Apache to put in a setting to prevent
.htaccess files from overriding server configurations, especially since these files impact server performance. The
.htaccess file can also pose a potential security issue in multi-tenant environments. In a situation like a university web server shared by thousands of students, a single student can override the administrator’s decision to block PHP code execution on the server and run PHP code.
“There may be a limited impact on the server, or it can allow the server to be compromised,” Cashdollar said.
Apache documented the fact that
.htaccess is disabled by default, but that doesn’t mean the fact was well-known. A prolific developer, Tschan was unaware of the change. Cashdollar checked with a number of security professionals and hackers who have used Apache’s web server software for twenty-odd years, and found none of them knew that Apache had tightened up the web server configuration in this way. Some of the most plugged-in people in web development and application security missed this change, so it is no surprise that many project developers would not know they had to change how they should use .htaccess.
There was “no parade or giant announcement” from Apache informing developers that a commonly used practice had been changed to require some upfront coordination with the server administrator, Cashdollar said. There was no public guidance informing developers of what they needed to do if they wanted to keep using .htaccess.
Down the Rabbit Hole
The plugin is widely used. Cashdollar has found 7,800 forks (a new project was created using this project as a base) of the plugin on GitHub or somehow integrated into other projects. In one case, the fork is provided in a Docker image. So even though Tschan has fixed the issue, the problematic code can still be present in these other projects. Cashdollar has already found a thousand forked projects that have the vulnerability, and about 10 or 15 projects where developers had removed the code, thus removing the vulnerability. There is no easy way to tell if the developer understood the implications of the code they had removed or why they didn’t communicate the changes upstream to the original project.
"As it turns out, the problem is much larger than a single jQuery project," Cashdollar wrote in a follow-up post.
Many of the projects have not been maintained for years and do not appear to be under active development, adding to the challenge of getting the issue fixed or notifying impacted users. Cashdollar said in a follow-up post that he and GitHub are currently discussing how they can contact the owners of vulnerable forked projects en masse and encourage them to pull the latest update.
This is where the rabbit hole gets even more convoluted, because any PHP application that relies on
.htaccess would be affected by this change Apache made years ago, not just Tschan’s file upload plugin or other applications that borrowed his code. Focusing on applications that integrated the plugin or on forked projects would be shortsighted, because the real issue lies in the fact that developers may still be using
.htaccess and not instructing application users to verify how the server is configured. Many plugins for WordPress and other CMS platforms use
.htaccess, as do CRM software and other enterprise applications. Affected projects may not even be in GitHub at all. Investigating and remediating projects would be a major, multi-year undertaking.
Don’t make the assumption that .htaccess is available.
“[Developers] don’t realize that the [security] control is gone, and that their application has been vulnerable for years,” Cashdollar said. He predicted that this disconnect between how the web server handles
.htaccess and how applications rely on the file would be the “source of breaches to come.”
Cashdollar is still trying to get a feel for the magnitude of the problem and said this was “just the tip” of a messy situation, since applications using
.htaccess are “all over the internet.”
It wasn’t immediately clear whether administrators who updated their servers suddenly went from having
.htaccess enabled to disabled, or if the change was visible only on fresh installs. Cashdollar said he planned to test to see if the update process would overwrite the existing
apache.conf file or keep existing configurations intact. If the file is not overwritten, then the application would keep working as designed. Of course, as soon as the administrator tried to re-install the application on a fresh server, the
.htaccess part would stop working.
Bottom line, “Don’t make the assumption that
.htaccess is available,” Cashdollar said.
This post was updated with reference to Cashdollar's follow-up blog post.
|
OPCFW_CODE
|
Android Core Proposal Merged (and some follow up goals)
by David Martin
After a massive 100+ comments, I've decided to merge the Android Core SDK proposal.
* Android Core SDK is available from Maven:
* Repo: https://github.com/aerogear/aerogear-android-sdk
* Example Android App that uses the core SDK:
* What you need to add to your gradle file:
The amount of comments, calls and back&forth on irc has reached a
reasonable level of agreement, with some remaining points of contention.
The contention is mainly around the level of complexity that a developer
has to undertake to use the SDK.
After listening to the 3 main voices on this (Summers, Wojciech, Passos), I
can see both points of view.
(WARNING: A lot of paraphrasing below :) )
From Passos & Wojciech's point of view, ease of use of the SDK is what's
most important. There should be practically no setup/init required other
than having a mobile-config.json file in the right place, and call a static
method to get an instance of a service (similar to Firebase).
From Summers' point of view, ease of use is also important, but something we
can improve on iteratively. For example, the default use of a Service will
be fine & possible to automate the setup for in 95% of cases. However, the
other 5% is what we need to take into account from the beginning.
So, based on this, I would like if the following 2 things were follow up
goals for the Core SDK.
I believe these changes will take what's currently there (and working), and
move it towards something that is easier to use for developers.
Remove the need for static block initialisation/registration of service
classes & their dependencies. i.e. this:
From chatting with Summers, this should be possible now that this PR is
merged https://github.com/aerogear/proposals/pull/16 and the config file
format is nailed down.
Allow a simpler way of getting an instance of a Service class other than
keycloakService = core.getService("keycloak", KeyCloakService.class);
If there are multiple instances registered for a particular Class, it may
still be necessary to use the above to get a 'named' instance (much like in
dependency injection libs like spring that use annotations).
However, in most cases, the below should be possible:
keycloakService = core.getService(KeyCloakService.class);
Red Hat Mobile
IRC: @irldavem (#aerogear)
|
OPCFW_CODE
|
HW: RPi4 4GB
[start] 20:04:33 INFO: syncthing v1.5.0 "Fermium Flea" (go1.13.10 linux-arm) email@example.com 2020-04-21 20:45:03 UTC
[L6CNM] 20:04:39 INFO: My ID: xxxxx
[L6CNM] 20:04:40 INFO: Single thread SHA256 performance is 44 MB/s using minio/sha256-simd (43 MB/s using crypto/sha256).
[L6CNM] 20:04:40 VERBOSE: Starting up (/home/pi/.config/syncthing)
[L6CNM] 20:04:40 INFO: Hashing performance is 40.29 MB/s
[L6CNM] 20:04:40 INFO: Migrating database to schema version 9...
[L6CNM] 20:23:58 WARNING: Database schema: open /home/pi/.config/syncthing/index-v0.14.0.db/114168.ldb: too many open files
[monitor] 20:23:58 INFO: Syncthing exited: exit status 1
[start] 20:23:59 INFO: syncthing v1.5.0 "Fermium Flea" (go1.13.10 linux-arm) firstname.lastname@example.org 2020-04-21 20:45:03 UTC
[L6CNM] 20:24:06 INFO: My ID: xxxx
[L6CNM] 20:24:07 INFO: Single thread SHA256 performance is 44 MB/s using minio/sha256-simd (44 MB/s using crypto/sha256).
[L6CNM] 20:24:07 VERBOSE: Starting up (/home/pi/.config/syncthing)
[L6CNM] 20:24:08 INFO: Hashing performance is 41.06 MB/s
[L6CNM] 20:24:08 INFO: Migrating database to schema version 9...
Syncthing 1.5 boot loops at this point.
what can i do?
Increase file limit on the operating system
my limit is “unlimited” :-/
What’s the output of ulimit -a?
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29184
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 95
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29184
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
thank you for your time, all of you
open files (-n) 1024
I guess you have noticed this already.
and what do i have to do now?
( i am a linux/ raspi noob
sysctl -w fs.file-max=500000
add this line:
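Presumably (my inference from the sysctl command above) the line goes in /etc/sysctl.conf so the setting survives reboots:

fs.file-max=500000

Note that fs.file-max is the system-wide ceiling; the per-process limit that ulimit -n reports (1024 above) is raised separately, e.g. with a pair of lines like

pi soft nofile 65535
pi hard nofile 65535

in /etc/security/limits.conf.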
I think if you google “linux how to increase open files” you’ll find plenty of answers.
now it’s been migrating for one hour… without any error yet…
any idea about how long it could last? the migration?
or any option to see a progress percentage?
Sadly no. This should be quick on a device with reasonable storage speeds and reasonable CPU, so on a slower device/slower storage/more data it will take longer.
any idea where the error could come from?
syncthing worked nicely, and suddenly the problem occurred…
the gui was not working anymore…
it had already stopped working before the update to 1.5
i also never just cut the raspi power…
i always very kindly shut down syncthing then shut down the raspi…
What error are you talking about?
If it’s the error in the original post, it fails to migrate the database.
The database needs to be migrated because syncthing updated to new version.
okay then i do not know what was the error at the beginning…
i was suddenly not able to connect to the GUI (even waiting 2 days did not help)
then my friend got the idea, lets upgrade…
i did… now i am stuck here
it is still migrating…
the TOP in the other console window says syncthing CPU load is only 20%…
i never had such low syncthing CPU usage
You might not be able to connect to the web ui while the migration is happening. If the migration is done, check the logs as it might be crashing.
i understand that the gui is not reachable during migration
the GUI not being reachable (see the screenshot)
happened before my “too many files” error, that i am talking about here.
( i dont know what caused it , but since you say: the migration is because of the upgrade to 1.5, which i only did by hand, by command line, after i had no idea what made my gui stuck, we cannot know what was the problem at the beginning, right?
right now, finally, its working again…
Well if you are getting that error in the gui, you should check the logs, as it’s most likely crashing.
will do it in future
right now its working
files do appear in places where they never have been
and at the other syncthing end, it shows those files as different files now
i get out of sync
lets see what it does after a few more hours and what i have to do by hand
hope nothing gets missing of my files
by the way, here is my log, from this time, RASPI side…
( on the other side is my Synology )
Log Syncthing Raspi.txt (33.0 KB)
|
OPCFW_CODE
|
If your brand new 3.6GHz CPU and $500 video card is still waiting 60 seconds for a Doom 3 level to load, you'll know that the hard drive is probably the most significant bottleneck in your PC subsystem.
It's all about finding the right balance.
IDE and SATA hard drives have not evolved much in terms of speed, particularly within the past 18 months. Sure, fast 10 000rpm drives have been introduced, and manufacturers are beefing up the cache, but hard drives have not increased in speed the same way as CPUs and video cards have. Building an extremely fast PC is more than just dropping in the fastest processor you can find. A CPU is one part of the equation, but you'll need a fast video card to draw the images as quickly as the CPU can feed it. A fast subsystem will keep data moving efficiently, as a slower bus will bottleneck the data flow.
One problem that exists for almost all current drives is the way they access data. Unlike ram, hard drives are mechanical devices, that are limited by the speed of the internal components when accessing data. Like a record player, the "needle" needs to move from one spot to another to retrieve information. Rotational and seek latencies are the big hurdles here, which is why we see many SCSI or 10 000 rpm SATA drives seemingly so much faster than "standard" IDE or SATA drives. Faster spinning hard drives alleviate the problem somewhat by increasing the speed of the motors, but as a quick price check can tell you, these drives are very expensive, and in order to make the drives attractive price-wise, they often feature lower capacities. The larger cache mentioned earlier can also speed up data access, but it doesn't completely solve the problem as incorrect cached data is useless if the program doesn't need it.
How a Drive Accesses Data
All hard drives work the same way overall... the CPU makes a data request, the drive spins to where the data is located, retrieves it and sends it back to the CPU. The data is stored on tracks, and unlike recordable CDs, hard drive tracks are written from the outer edge of the platter, working toward the inner edge. In a typical hard drive, data reads and writes begin on the bottom platter, referred to as Disc 0, and the first read/write head, which is head 0. After one cycle of data is complete (track 0 on head 0), the drive moves to the other side of the disc (track 0 on head 1). Once that is done, the drive moves to the next head on the second disc (track 0 on head 2). Once the last head on the last side of the last disc finishes, the cycle repeats with track 1 and head 0.
The set of tracks at the same position across all heads is collectively known as a cylinder. As outlined earlier, the data is written sequentially on the cylinders until the inner diameter of the disc is used. Ideally, program data is written in order and reads follow the order data was written. This of course is rarely the case, and in some cases, where the program thinks data should be, it isn't there, forcing the drive to look elsewhere (which makes a strong case for keeping your discs defragmented).
One issue that plagues Parallel ATA drives is that although you can speed up the mechanics, the drive still needs to be efficient at retrieving data. Ideally, a drive will know where to pick up data "A", and know where data "B" is located. It should know it needs "E" before "D", and so on. The best way to do this is through queuing, which at the system bus level organizes the data that needs to be retrieved. Legacy Command Queuing (LCQ) has some limitations though, one of which is that the bus is going to be occupied until the drive completes the reordering and retrieval of data. Given the mechanical nature of hard drives, if requests are being made faster than the drive can fulfill them, we still get bottlenecked, even with more cache and faster motors.
Native Command Queuing
Native Command Queuing (NCQ) was developed to address the problems of LCQ. Introduced with the Serial ATA II spec, this is a feature that can only be found in native SATA hard drives. Unlike LCQ, NCQ works by allowing a drive to process multiple commands at the same time. These commands can be rescheduled or reordered on a whim, and can also issue new requests while the drive is retrieving data from the previous request.
NCQ is tied in closely with Hyper-Threading, and combined with HT capable hardware and software, the performance differences should be quite substantial when compared to non-NCQ drives. Tagged command queuing is supported; this is command reordering based on seek and rotational optimization. We mentioned that those two items are a big part of a drive's performance (or lack thereof), and by reordering the requests based on the linear and angular position of the data, the process will be much more efficient.
In addition to these algorithms, NCQ is capable of communicating the status of the commands being performed at any time. This is referred to as Race-Free Status Return Mechanism, and in essence, the drive is able to issue several commands at the same time, without needing to wait for the host to check on the status.
Interrupt Aggregation is another feature where multiple commands can be aggregated to one interrupt. Normally, for each command, the host bus would be interrupted each time, but with Interrupt Aggregation, this can happen only once.
Finally, NCQ has the ability to set up the direct memory access (DMA) operation for a data transfer without host software intervention. First Party DMA (FPDMA) allows the drive to process a number of commands without any intervention from the CPU and/or software.
How NCQ Works
When a command is given to the hard drive, the device needs to determine if this command is to be queued or processed right away. In order for NCQ to work efficiently, two commands were added to the SATA II specification: Read FPDMA Queued and Write FPDMA Queued. We mentioned Hyper-Threading earlier, and the advantage is that normally applications request one piece of data at a time; with Hyper-Threading, several applications can request data at once. While this can happen without Hyper-Threading, the technology allows queues to be built more efficiently.
To simplify how NCQ works, a good example would be an elevator. Say a person has to deliver three packages in the elevator, where each package represents a data request for an application (which would be the company these packages are intended for). Say the packages need to go to floors 2, 3, and 4, but they are stacked in a random order on the elevator floor. The delivery person ends up dropping off the packages as they are currently stacked, which would be on floors 4, 2, and 3. Naturally, this is inefficient, and it would be better to deliver the boxes in sequential order.
To represent queuing, the delivery person sorts the packages so that they are dropped off in sequential order. Hyper-Threading can be represented by having a second delivery person sorting out the boxes while the other drops them off. In any case, this was a simplified example, but that's the general idea.
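To make the analogy concrete, here is a toy sketch (purely illustrative; real drives reorder on rotational position and seek distance, not a simple sort):

def arrival_order(requests):
    # Legacy behaviour: serve requests in the order they arrived.
    return list(requests)

def queued_order(requests, head_position=0):
    # Queued behaviour: sort pending requests by distance from the
    # current head position, like sorting packages by floor.
    return sorted(requests, key=lambda track: abs(track - head_position))

pending = [4, 2, 3]            # track numbers ("floors"), in arrival order
print(arrival_order(pending))  # [4, 2, 3] -> wasted back-and-forth seeks
print(queued_order(pending))   # [2, 3, 4] -> one smooth sweep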
Who and What Supports NCQ?
As we've already pointed out, NCQ is only present in native SATA drives. The majority of initial SATA drives (Seagate was the exception) upon debut were not native drives, but rather, IDE drives with SATA interfaces. NCQ enabled drives will only apply to those which qualify under the Serial ATA II specification. At the time of this writing, only Seagate and Maxtor offer these drives. The Seagate Barracuda 7200.7, which we'll be looking at today, is readily available and coming soon will be their Barracuda 7200.8 which increases the capacity and doubles the buffer to 16MB. Maxtor's offering is the DiamondMax 10, which should be available now, but we were unable to find any of these drives locally.
NCQ drives don't mean much without a controller, and like the drives, the controller support is a little slim right now. Intel's ICH6R, which can be found with their 915 and 925X chipsets fully supports NCQ from the get go. Silicon Image has recently demo'd their SiI 3124 controller at IDF 2004, and should have their part out soon. As for Promise, Siig, NVIDIA, ATI and VIA, none of them currently have controllers that support NCQ. No doubt, these controllers will come, but at the moment (as in, going to the store and buying something today), only Intel boards based on Grantsdale and Alderwood will support NCQ.
As for software support, for desktop users the performance gain from "regular" applications will be minimal. NCQ thrives on multithreaded and multitasking situations, and the truth is, not many desktop applications and games tap into this. Workstation and server level systems on the other hand may see a boost, but again, that will depend on the application.
The important thing to remember is NCQ drives will work on all SATA controllers, but the NCQ functionality will be disabled.
Now that we've discussed NCQ, let's have a look at a couple Seagate Barracuda 7200.7s.
|
OPCFW_CODE
|
This article is also available in following languages:
The Crash dump is not used for troubleshooting normal printer errors! Please only supply the Crash dump if requested.
If there is a software crash on the printer, a Crash dump will automatically be saved in the xFlash memory. When troubleshooting the cause of the crash, our Customer support may request this Crash dump. This feature can be used for diagnosing unusual system issues.
The Crash dump has important information about your printers, like the serial number and IP address. Sharing this data publicly is not recommended.
In Windows, it is possible to correctly recover the dump with either PuTTY or OctoPrint.
Crash dumping to xFlash is active at all times; the dump is stored in xFlash and can be downloaded even after a system reboot, until a D22 command (clear the current memory dump in the printer) is sent to the printer.
Using programs other than the recommended ones for the serial link might cause the dump to be unusable.
The MK2.5 and MK2.5S do not contain the xFlash, so the crash dump will not be saved automatically after a crash. Therefore, the crash dump can only be retrieved if the printer is already connected to a serial link at the time of the crash.
Download the program here. Once downloaded, start the program, then:
- On the "Connection type" select "Serial".
- On the field "Serial line", type the COM port that the printer is connected to (the picture is an example, check your device manager to know the COM port used in your computer).
- Set the speed to 115200.
- On the left side panel, go to Category -> Terminal.
- Make sure that the following boxes are checked:
"Implicit CR in every LF"
"Auto warp mode initially on"
- Under "Line discipline options", check both "Force on" options.
- On the "Category" side panel select "Session/Logging".
- Select "All session output".
- Change "Log file name:" to "<your folder of choice>&Y&M&D_&T_&H_putty.log".(Example: "C:UsersPublicDownloads20220922_081522_COM13_MK3S_putty.log".)
- Go back to "Session" add a unique name, like "MK3S_COM13" in "Saved Sessions" and click Save. The name will appear in the field below. Your Log files will be saved when the program is closed.
- If your printer is not connected to the computer by the USB cable, connect it now. Ensure that the printer is on and not printing, as it will reset when you start the log file.
- Select the session you saved, and click Open. This will start the logging. The printer will reset when connecting; make sure it is idle.
- You'll get a blank window where you'll see messages from the printer. Enter "D21" in Putty to get the DUMP.
- The dump will be automatically saved in the folder you chose as a text file, or you can copy the log starting from "D21" until "ok", and paste it into a text editor like Notepad++. Compress/zip the file before sending it.
- Note the serial number when contacting support.
In order to get the Crash dump using Octoprint, it is necessary to enable "Serial logging" in the OctoPrint Settings -> Printer -> Serial Connection -> General section. Scroll all the way down, check the box under "Serial logging", and click the Save button.
To get the dump, use the serial command D21 (read crash dump).
You can either download the OctoPrint serial log by going to Settings -> Octoprint -> Logging and clicking on the download icon at the right of "serial.log". Or you can copy the dump from "D21" until "ok", and paste it into a text editor like Notepad++. Compress/zip the file before sending it.
If you don't use serial logging by default, disable it after the dump again.
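If you prefer scripting to PuTTY or OctoPrint, something along these lines should also capture the dump. This is an unofficial sketch using the pyserial package, not a Prusa-documented procedure, and the port name is only an example:

import serial  # pip install pyserial

# Opening the port resets the printer, so do this while it is idle.
with serial.Serial("COM13", 115200, timeout=5) as port, open("dump.txt", "wb") as out:
    port.write(b"D21\n")           # request the crash dump
    while True:
        line = port.readline()     # empty bytes mean the read timed out
        if not line:
            break
        out.write(line)
        if line.strip() == b"ok":  # the printer ends the response with "ok"
            break

Compress/zip dump.txt before sending it, as with the other methods.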
|
OPCFW_CODE
|
I can’t seem to find any information on the web that suitably solves an issue I’m having on a laptop.
It’s a 13.3inch Dell machine with a “native” resolution of 1920x1080.
The problem: that combination yields very tiny characters in pre-login text modes, such as the grub2 screen and when entering the LUKS (disk encryption) passphrase.
I successfully managed to tweak the grub2 (“boot menu”) screen to appear in 800x600 which makes for legible character size. All fine on that part.
What bothers me however is that whatever I try to do in grub.cfg (gfxmode, “keep”, …) doesn’t seem to have any effect on the screen resolution used by the LUKS passphrase screen. As far as I understand, yeah, the passphrase screen is originating from Plymouth but I don’t find any documentation that tells me how to configure Plymouth in a way that the LUKS passphrase screen appears in a specific resolution.
All I found on the web is related to Ubuntu, dates back from the grub (pre-2) times, or is a decade old. I tried also to check on CentOS-, RHEL9-, or AlmaLinux- related info but to no avail. Seriously, there must be a way to do that, innit?
Anyone has a hint on how to?
Am on Rocky 9.1 with Wayland.
What desktop and window manager are you using? I don’t use luks but I did have issues with tiny login screen and solved it on Mate/lightdm. Of course I’ll have to hunt down what I did, but will start looking now.
My laptop native resolution is 2560x1440.
So for me all I had to do was edit the “login screen” settings menu to enable HiDPI support.
On my system (F37), lightdm calls the slick-greeter to manage the login window. So in /etc/lightdm is the file:
slick-greeter.conf, which contains the defaults. I saved this to slick-greeter.orig and then overwrote the .conf file containing only my change.
I found the login menu after I made my edits.
Hi and thanks for your support, but my problem is actually happening way before the login screen appears. That is, it happens right between grub and the login screen. On a side-note, I’m using gnome (GDM) and for tweaking its login dialog I’m using a flatpak app called “login manager” which does a great job.
But as said, unfortunately, the LUKS screen is something completely different. Anyways, thanks again, Cheers, Thomas
Have you tried setting a terminal font as a kernel parameter? I have this on my kernel command line:
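A typical value, assuming the terminus console fonts mentioned below (the exact name is my reconstruction, not a quote from this thread), would be something like:
vconsole.font=ter-v32n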
This made my tty console usable. It also makes the boot log output readable as it scrolls by.
You may have to install the terminus fonts.
Did this ever get solved?
well, yes and no.
It’s not solved to a suitable extent.
What I can do is use “nomodeset” on the commandline of the kernel to be loaded. That prevents the subsequent processes to load the full-fledged graphics driver and stick to the resolution I choose.
However, that resolution is then carved in stone, even for the GNOME desktop, where I’ll find myself unable to change it to something better (in terms of resolution and color depth) than what I selected for the pre-login part. So it’s either one or the other: live with great graphics in the GNOME desktop but a Plymouth resolution that sucks… OR have the perfect pre-login resolution with a desktop that sucks. Looks like Plymouth doesn’t allow having both.
That is, unless someone comes up with the magic trick of some kernel parameter OR a plymouth config parameter that I wasn’t able to dig off the internet.
Anyways, I’m complaining on a high level. The LUKS screen with lower resolution than what the LCD gives by default would just be the icing on the cake of an otherwise gorgeous experience.
Sorry to hear adding the vconsole font parameter to the kernel didn’t help.
|
OPCFW_CODE
|
Why can threads change instance data if it is blocked in another thread?
I began to study lock and immediately a question arose.
It docs.microsoft says here:
The lock statement acquires the mutual-exclusion lock for a given
object, executes a statement block, and then releases the lock. While
a lock is held, the thread that holds the lock can again acquire and
release the lock. Any other thread is blocked from acquiring the lock
and waits until the lock is released.
I made a simple example proving that another thread, using a method without the lock keyword, can easily change the data of an instance while that instance is occupied by a method using the lock from the first thread. If you uncomment the lock in the second method, the work is done as expected. I thought that a lock would block access to an instance from other threads, even if they don't use a lock on that instance in their methods.
Questions:
Do I understand correctly that locking an instance on one thread still allows that instance's data to be modified from another thread, unless that other thread also locks on the instance? If so, what does such locking actually provide, and why is it designed this way?
What does this mean in simpler terms? "While a lock is held, the thread that holds the lock can again acquire and release the lock."
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApp1
{
    class A
    {
        public int a;
    }

    class Program
    {
        static void Main(string[] args)
        {
            A myA = new A();

            void MyMethod1()
            {
                lock (myA)
                {
                    for (int i = 0; i < 10; i++)
                    {
                        Thread.Sleep(500);
                        myA.a += 1;
                        Console.WriteLine($"Work MyMethod1 a = {myA.a}");
                    }
                }
            }

            void MyMethod2()
            {
                //lock (myA)
                {
                    for (int i = 0; i < 10; i++)
                    {
                        Thread.Sleep(500);
                        myA.a += 100;
                        Console.WriteLine($"Work MyMethod2 a = {myA.a}");
                    }
                }
            }

            Task t1 = Task.Run(MyMethod1);
            Thread.Sleep(100);
            Task t2 = Task.Run(MyMethod2);
            Task.WaitAll(t1, t2);
        }
    }
}
Imagine having a key for opening a door, behind the door there's a cat you want to pet. With locking (MyMethod1), one person at a time gets the key, opens the door, pets the cat and then returns the key. Without locking (MyMethod2), you don't have a door or a key. You can simply go ahead and pet the cat - even while others go through the door because nothing holds you back. With locking, you don't lock the cat itself - you lock the access to the cat.
Picture a Kensington slot existing on every object around you. Anyone can try to place their own lock into that slot, and the only thing that stops them is if someone else put their own lock there first. But notice that the existence of the slot doesn't affect any normal use of the object.
why is it done this way? Because that's how the language was designed. In theory a language could be designed so that if you lock a field in one place, it is automatically locked everywhere else it's used, but I'm not aware of any languages that do that.
Locks are cooperative: they rely on all parties that can change the data to cooperate and take the lock before attempting to change it. Note that the lock does not care what you are changing inside it. It is fairly common to use a surrogate lock object when protecting some data structure, e.g.:
private object myLockObject = new object(); // surrogate lock: never exposed to outside code
private int a;
private int b;

public void TransferMoney(int amount)
{
    lock (myLockObject)
    {
        // The check and the update happen atomically with respect
        // to any other code that also locks myLockObject.
        if (a > amount)
        {
            a -= amount;
            b += amount;
        }
    }
}
Because of this, locks are very flexible: you can protect any kind of operation, but you need to write your code correctly.
Because of this, it is important to be careful when using locks. Lock objects should preferably be private, to prevent any unrelated code from taking the lock. The code inside the lock should be fairly short and should not call any code outside the class. This is done to avoid deadlocks: if arbitrary code is run, it may do things like taking other locks or waiting for events.
While locks are very useful, there are also other synchronization primitives that can be used depending on your use case.
Thanks for the answer. The cooperative locking of an instance is clearer to me now. In other words, using a lock is a kind of "agreement" between entities on different threads that they will do their work on the instance in a conditional queue, one at a time rather than simultaneously. And their agreement does not at all prevent another entity, one that does not take the lock for this instance, from affecting the instance. If I understand correctly.
@NikVladi sounds about right. Note that threads waiting on a lock are not ordered, when the lock is released any of the waiting threads take it.
What does this mean in simpler terms? "While a lock is held, the thread that holds the lock can again acquire and release the lock."
It means that you can do this:
lock (locker)
{
    lock (locker)
    {
        lock (locker)
        {
            // Do something while holding the lock
        }
    }
}
You can acquire the lock many times, and then release it an equal number of times. This is called reentrancy. The lock statement is reentrant, because the underlying Monitor class is reentrant by design. Other synchronization primitives, like the SemaphoreSlim, are not reentrant.
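For contrast, a minimal sketch (not from the original answer) of what non-reentrancy means in practice:

var semaphore = new SemaphoreSlim(1, 1);
semaphore.Wait();     // first acquisition succeeds
// semaphore.Wait();  // a second Wait() on the same thread would block forever,
//                    // because SemaphoreSlim does not track an owning thread
semaphore.Release();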
Thank you, very grateful!
|
STACK_EXCHANGE
|
Reading the value of an input box after the user has changed its contents, possible?
I have a text box which looks like this
<input value="0123456789" id="phone_number" type="text" onkeyup="limitFieldLength(this, 16);">
When the user modifies the field, I try to read the value again like this:
var phone_number = document.getElementById("phone_number").value;
But the phone_number var contains the original value, not what the user has changed it to.
Can I read the new value of the inputbox?
My code to read the phone_number occurs as a result of a button getting clicked and is unrelated to the limitFieldLength(this, 16) part of my code.
If I do not give the input box a value, then the above code works. But I want users to see the existing value before they update it.
How is the code to read the phone number invoked?
Where do you run this: var phone_number = document.getElementById("phone_number").value;? Globally or inside limitFieldLength() function?
limitFieldLength is not where I am accessing it from; I should have removed that from my question. I listen for a click on a button and then read the value of the input box.
Use oninput instead of onkeyup and you will get the current value.
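A minimal sketch of that suggestion (the handler name is illustrative):

<input value="0123456789" id="phone_number" type="text" oninput="showCurrent(this)">
<script>
function showCurrent(el) {
  console.log(el.value); // reflects the current contents on every change, including paste
}
</script>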
Update: Use the event object parameter (the first parameter of your event handler function) and set the variable's value inside that onkeyup handler. Then you can get the value dynamically.
The reason the input value appears not to update when the value attribute is set is that document.getElementById().value uses the HTML's value; typing into the input box doesn't change this value, so you need to call setAttribute() to update the HTML's value.
Using variable to store the value
<input value="0123456789" id="phone_number" type="text" onkeyup="limitFieldLength(this, 16);">
<button onClick="checkValue()">Check value</button>
<script>
  var phone_number = document.getElementById("phone_number").value; // initialize the variable
  function limitFieldLength(e, limit) {
    phone_number = e.value; // update the value on every keyup
  }
  function checkValue() {
    console.log(phone_number); // now phone_number is dynamic
  }
</script>
Using setAttribute to change the HTML's value, because document.getElementById().value obtains the value from the HTML's value:
<input value="012" id="phone_number" type="text" onkeyup="limitFieldLength(this, 16);">
<button onClick="checkValue()">Check value</button>
<script>
  function limitFieldLength(e, limit) {
    e.setAttribute('value', e.value); // push the current value back into the HTML attribute
  }
  function checkValue() {
    console.log(document.getElementById("phone_number").value); // now reflects the user's edits
  }
</script>
Reference : setAttribute
limitFieldLength is not where I am accessing it from; I should have removed that from my question. I listen for a click on a button and then read the value of the input box.
But it does work if I access it from inside the limitFieldLength function.
thanks, that works. But it seems odd that you can't just read the value, when you can if it has no value attribute in the first instance.
@seventeen, you need to use setAttribute if going that way, updated answer above
I added "this[event.id] = event.value;" as I have many fields to update and this updated whichever one was modified.
Maybe your var phone_number = document.getElementById("phone_number").value; runs outside the onkeyup event.
You need to update it by adding it to your limitFieldLength method, like below:
function limitFieldLength(item, num) {
  var phone_number = document.getElementById("phone_number").value;
  console.log(phone_number); // logs the current value on every keyup
}
<input value="0123456789" id="phone_number" type="text" onkeyup="limitFieldLength(this, 16);">
|
STACK_EXCHANGE
|
ARMA/GARCH statistical significance of estimated parameters
My question is general and is concerned with ARMA-GARCH modeling.
When performing the joint estimation of the ARMA and GARCH parts, some works tend to not be concerned with the statistical significance of the parameter estimates of the conditional mean equation.
If the conditional mean equation is something very simple, like, e.g., a constant, this makes some sense. But what if the equation contains AR and/or MA terms? Doesn't it mean that the ARMA-GARCH model is misspecified?
Is there any specific reason behind that?
Lack of statistical significance of a model's coefficients is not a strong indication of misspecification. (This could be contrasted with, say, systematic patterns in the model's residuals.) It simply indicates the sample size is too small to reliably distinguish the true coefficients from zero. You do not prove the coefficients are truly equal to zero; you simply fail to reject such a null hypothesis. (For a given significance level, failing to reject a hypothesis is not as convincing as rejecting the "opposite" hypothesis would be.)
Statistical significance of individual coefficients is also a poor guide in model selection, especially in the context of forecasting where ARMA-GARCH models are routinely used, as argued by Rob J. Hyndman in "Statistical tests for variable selection".
Thank you very much for sharing your knowledge. Just one more thing, concerning the lag selection procedure. Since we talk about ARMA-GARCH processes, is it better to jointly fit such a model starting from a standard one, e.g., AR(1)-GARCH(1,1), and then tweak, if needed, the lag order of the ARMA and/or GARCH parts in order to get a model that passes all the diagnostic tests? Or to initially select an appropriate ARMA model based on the ACF/PACF (I am a little worried about the Ljung-Box distribution in such a case), then perform an ARCH test and then move to the GARCH area. Again, joint est
@peter5, I wish I had a simple, universal answer to the question. I guess I would work with joint ARMA-GARCH models all the way if possible, since ignoring either ARMA or GARCH may mess up the diagnostics of the other (like the null distribution of the Ljung-Box test that you mention). But I am not 100% confident in my advice.
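To make the joint-estimation option concrete, here is a minimal sketch using Python's arch package (the package choice and the placeholder data are assumptions; the thread itself names no software):

import numpy as np
from arch import arch_model

# Placeholder return series; substitute your own data here.
rng = np.random.default_rng(0)
returns = 100 * rng.standard_normal(1000)

# Jointly estimate an AR(1) conditional mean with a GARCH(1,1) variance.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())  # t-statistics for both the mean and the variance parameters

Diagnostics can then be run on the standardized residuals, which is what makes the joint fit preferable to a two-step ARMA-then-GARCH procedure.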
I share your point of view. As far I know, this is a very complicated matter due to many reasons. However, the joint procedure sounds correct, from the statistical point of view. Thank you very much for sharing your precious time and knowledge.
@peter5, you are welcome! I appreciate your gratitude.
|
STACK_EXCHANGE
|
Introducing fullstackhero – Open Source Boilerplates for Rapid Web Development
Ever gone through the painful process of setting up solutions from scratch, even though most of the features/code is repetitive? fullstackhero addresses this very pain point and offers complete end-to-end solutions/boilerplates to facilitate and ease the process of getting started with web development.
fullstackhero is a collection of boilerplates built with Clean Architecture, the latest packages, and the essential features your projects need to get started. Using fullstackhero boilerplates, one can easily save more than 200 hours of development and research time and kick-start application development in no time. The essentials are already done; you just have to worry about writing the business logic. Most importantly, fullstackhero is completely FREE to use!
Here is fullstackhero’s GitHub handle – https://github.com/fullstackhero
Benefits of fullstackhero
- Production Ready API – fullstackhero sports the latest .NET 6 Web API fused with clean architecture.
- Premium Client Applications – Blazor WebAssembly is the currently offered boilerplate for fullstackhero’s client side. Angular, React and MVC are on the way!
- Multitenancy Support – Multitenancy is built-in!
- Clean Separation – API and client applications are separated into different GitHub repositories to ensure that there is no dependency whatsoever. This enables us to accommodate various new technologies as both client and API applications in the future.
- Documentation – fullstackhero is completely documented at fullstackhero.net
- Completely FREE – Being Community driven, fullstackhero is actively developed on GitHub under MIT license and is open for contributions!
Now that we are aware of what fullstackhero is, let’s check out the projects under fullstackhero.
For the initial release, these are the two actively developed projects/boilerplates under fullstackhero:
- .NET 6 Web API Boilerplate
- Blazor Web Assembly Boilerplate
.NET 6 Web API Boilerplate
.NET WebAPI Boilerplate Template built with .NET 6.0. Incorporates the most essential Packages your projects will ever need. Follows Clean Architecture Principles.
fullstackhero’s .NET Web API Boilerplate is a starting point for your next .NET 6 Clean Architecture project that incorporates the most essential packages and features your projects will ever need, including out-of-the-box Multi-Tenancy support. This project can save well over 200+ hours of development time for your team.
Read more – https://fullstackhero.net/dotnet-webapi-boilerplate/general/overview/
Setting up Development Environment – https://fullstackhero.net/dotnet-webapi-boilerplate/general/development-environment/
GitHub Repository URL – https://github.com/fullstackhero/dotnet-webapi-boilerplate
- Built on .NET 6.0
- Follows Clean Architecture Principles
- Domain Driven Design
- Completely Documented at fullstackhero.net
- Multi Tenancy Support with Finbuckle
- Create Tenants with Multi Database / Shared Database Support
- Activate / Deactivate Tenants on Demand
- Upgrade Subscription of Tenants – Add More Validity Months to each tenant!
- Supports MySQL, MSSQL, Oracle & PostgreSQL!
- Uses Entity Framework Core as DB Abstraction
- Flexible Repository Pattern
- Dapper Integration for Optimal Performance
- Serilog Integration with various Sinks – File, SEQ, Kibana
- OpenAPI – Supports Client Service Generation
- Mapster Integration for Quicker Mapping
- API Versioning
- Response Caching – Distributed Caching + REDIS
- Fluent Validations
- Audit Logging
- Advanced User & Role Based Permission Management
- Code Analysis & StyleCop Integration with Rulesets
- JSON Based Localization with Caching
- Hangfire Support – Secured Dashboard
- File Storage Service
- Test Projects
- JWT & Azure AD Authentication
- MediatR – CQRS
- SignalR Notifications
Getting Started with .NET Web API Boilerplate
Here is how to get started with setting up the .NET Web API Boilerplate on your machine in no time.
Open up your Command Prompt / Powershell and run the following command to install the solution template.
dotnet new -i FullStackHero.WebAPI.Boilerplate
This installs the .NET Web API template globally on your machine. Do note that, at the time of writing this documentation, the latest available version is 0.0.6-rc, which is also one of the first stable pre-release versions of the package. It is highly likely that a newer version is already available when you are reading this.
Now that you have installed the template locally on your machine, let’s see how you can start generating complete .NET WebAPI Solutions seamlessly.
Simply navigate to a new directory (wherever you want to place your new solution) and open up Command Prompt in that directory.
Run the following command. Note that, in this demonstration, I am naming my new solution as FSH.Starter.
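The command looks like this (the template short name fsh-api is an assumption here, inferred from the Blazor counterpart shown later; check the linked documentation for the exact name):

dotnet new fsh-api -o FSH.Starter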
Once that is done, your new solution is created for you. As simple as that!
Here are the folders and files created for you.
Read more about getting started here – https://fullstackhero.net/dotnet-webapi-boilerplate/general/getting-started/
Blazor Web Assembly Boilerplate
Built with .NET 6.0 and the goodness of MudBlazor Component Library. Incorporates the most essential Packages your projects will ever need. Follows Clean Architecture Principles.
GitHub Repository URL – https://github.com/fullstackhero/blazor-wasm-boilerplate
Similar to the WebAPI Boilerplate, the Blazor project too has a NuGet package that you can install onto your machine to start generating the Blazor Boilerplate with ease.
dotnet new --install FullStackHero.BlazorWebAssembly.Boilerplate
Once installed, you can run the following to generate a new Blazor project.
dotnet new fsh-blazor -o FSH.Blazor
Note that it is vital to run the API Server first. Only then can the Blazor project consume services of the API.
Once both the Blazor and API projects are running, navigate to localhost:5002 to access the Blazor WebAssembly Project. Here is what you will be welcomed with.
You can auto-fill the default username/password & tenant by clicking on the Fill Administrator Credentials button.
Anyways, here are the default login details for your reference.
- email –
- password –
- tenant –
Once you login, the home page will be loaded.
Features & Components
- Login / Registration / Forgot Password
- Home & Dashboard
- Tenant Management
- Dark Mode / Language Switcher
- Account Management
- Catalog Management – Sample CRUD Included for Brands and Products
- User Management
- Role Management
- Permission Management
- Theme Manager to control Primary & Secondary Colors, Border Radiuses, Table Component Attributes
Here are a couple more screenshots of the awesome Blazor WebAssembly project!
Explore more about the project by referring to – https://fullstackhero.net/blazor-webassembly-boilerplate/general/getting-started/
UI Fundamentals – https://fullstackhero.net/blazor-webassembly-boilerplate/ui-overview/fundamentals/
Sponsorship & Support
Has this project helped you learn something new? Or helped you at work? Here are a few ways by which you can support it.
- Leave a star! ⭐
- Fork it 😉
- Recommend this awesome project to your colleagues. 🥇
- Do consider endorsing me on LinkedIn for ASP.NET Core – Connect via LinkedIn 🦸
- Or, If you want to support this project in the long run, consider buying me a coffee! ☕
Sponsorships and Contributions are also accepted on open collective. Do consider sponsoring the project to keep it running for a long time.
you are back…
Great job. Could you provide a walkthrough of adding a working page
Will try to add documentation for that in a few days!
Thanks. In fullstackhero, how do you do form validation? I have several tabs and I want the button to be disabled until the form is filled. In BlazorHero it was FluentValidation. Please assist.
Amazing job. I discovered it recently and the quality of the architecture and code is amazing. It also saves a ton of time to kickstart something production-ready with logs and detailed audit trails.
One question though: I don't quite find an elegant way of referencing users (by their ID) in other EF objects (one user to many objects, or even many-to-many).
I get why the ApplicationUsers are in the Infra project (obvious separation for safety and dependencies), but what about adding users as EF relations into, say, products?
Did you manage to solve it? And if so, how did you manage it?
@mukesh you are awesome,
Your articles and architecture of any example help me a lot.
Thank you for sharing the knowledge.
My fondest memory in AspNet development was the “aspnet community starter kits” that Microsoft published for clubs, shops and other sites. They weren’t blank canvases but fully operational apps that you could customize and extend for your projects. On a new quest to find the netcore/6 version of those and this looks like a great option, thanks for your effort!
This is a great effort Mukesh. I have gone through the code and documentation and tried following step by step. Kudos to you.
Thanks for the great article. I have some questions in mind. How can we handle one-to-many relationships when inserting and retrieving data? Let's take a real-time example: I want to generate a bill for products, so I have two tables, BillDetails(id, bill_date, bill_no) and BillItemDetails(id, bill_id, productId, quantity, rate). In this scenario, how can I save data with multiple products? Also, in complex cases, how can we use stored procedures?
|
OPCFW_CODE
|
Excited to share that my StoryMap paper received an Honorable Mention (top 5% of the submissions) at CHI 2021.
StoryMap is a family informatics app that uses social storytelling and reflection for promoting exercise among low-SES families. We found that both data and stories in health informatics are important means of health promotion, but they work in their own unique ways.
Herman Saksono, Carmen Castaneda-Sceppa, Jessica Hoffman, Magy Seif El-Nasr, Andrea G. Parker. 2021. StoryMap: Using Social Modeling and Self-Modeling to Support Physical Activity Among Low-SES Families. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2021). ACM, New York, NY. (PDF)
I will present this paper at CHI in two sessions:
- Personal Health Data A: Sun, May 9, 7-9 pm Eastern
- Personal Health Data C: Mon, May 10, 11 am – 1 pm Eastern
I also provide a short article summarizing the takeaways of this paper on Medium: Stories are just as powerful as data in personal health informatics.
Additionally, I will participate in two CHI workshops. The first workshop is Realizing AI in Healthcare in which I will present my paper Algorithmic Patient Matching in Peer Support Systems for Hospital Inpatients:
Peer support in inpatient portal systems can help patients to manage their hospital experiences, namely through social modeling of similar patients’ experiences. In this position paper, I will begin by providing a theoretical foundation of social modeling and peer matching. Then I will present matching strategies for algorithmic tools to match patients by their similarities, as well as the challenges and consequences that will surface when such a system is deployed in the wild. These technosocial complexities show that algorithmic matching in this context is non-trivial. Finally, based on the evidence and theories known thus far, I will present two recommendations on how to algorithmically match patients in a way that supports social modeling, aligns with human cognition, and reduces the risk of injustices in clinical settings.
Watch the video presentation below:
The second workshop is Artificially Intelligent Technology for the Margins, in which I submitted a position paper titled Transformative-fair AI for Addressing the Societal Origins of Marginalization:
This paper introduced the Transformative-fair framework for understanding the scope of impact of algorithmic tools for supporting marginalized communities. In contrast to Reformative-fair tools, algorithmic tools that meet the Transformative-fair criteria seek to counter the societal origin of marginalization itself: more specifically, by amplifying community assets (e.g., skills and knowledge in the community) and aspirations, strengthening social relationships, and supporting internally driven community efforts.
See you at CHI 2021!
|
OPCFW_CODE
|
The Venom Gaiter V2 takes the fear out of backcountry missions in the heart of snake country thanks to the incredible penetration resistance properties of the Fang Shield™ internal membrane.
An innovative new outer fabric is lighter with increased resistance to tears and annoying burrs/seeds with increased noise suppression.
These great-fitting gaiters utilise a tailored design to contour the shape of your leg, significantly reducing bulk.
An angled zip, Velcro, and dome connection have been strategically placed to make doing up and undoing the gaiters a natural angle for your arm to reach, and no annoying hardware will dig into your shins.
An incredibly tough lace hook is twin riveted to an over-sized tab that gives you plenty of grip.
- Weight SzM: 350g each
- Double-stitched Cordura lining for durability and stab protection
- Fang Shield internal membrane
- Quick-adjust webbing strap for easy on & off
- Twin riveted lace hooks to lock the gaiter down
- Heavy-duty webbing loops for wire attachment (not supplied).
- Reinforced stress points for endurance
- Angled side closure makes zipping up simple and protects Velcro
- Heavy-duty YKK zippers
Sz M – Calf size 410mm
Sz L – Calf size 440mm
Sz XL – Calf size 470mm
Sz 2XL – Calf size 500mm
Hunters Element Venom Gaiters were designed and tested for resisting bites from Australian snake species. These gaiters should not be treated as snake proof. Hunters Element, Evolve Outdoors Group Ltd and Peter Bryant do not guarantee the safety of the user from snake bites. Although sample tests resulted in 0% fang penetration from a variety of snake species, there is no guarantee that the wearer will not be bitten. Venom Gaiters should be used as a safety precaution only and in conjunction with other precautions.
Users of Venom Gaiters should take all reasonable precautions to avoid any contact with snakes wherever and whenever possible. As snakes are unpredictable creatures and can cause fatal damage to humans, it is recommended that upon any sighting or contact with a snake, the user should back away immediately.
It is recommended that in addition to wearing Venom Gaiters, users should also wear thick leather boots and tough, hard to penetrate trousers underneath their gaiters. This will further add to the risk reduction of a snake’s fang penetrating through to the wearer’s skin. It is recommended that users avoid areas known to contain snake populations, particularly in the hotter periods of the year when the snakes tend to be more active.
If you come into physical contact with a snake of any kind, it is highly recommended that you contact emergency medical authorities immediately.
The testing process involved professional snake handler Peter Bryant attempting to incite bites on multiple Venom Gaiter samples from multiple Australian snake species. The sample gaiters were initially wrapped tightly around inflated balloons, and the handler forced the snake to try to penetrate through the gaiter and pop the balloon. The gaiters were then inspected meticulously for any sign of penetration. No fang penetration occurred in any instance of testing.
Testing also included the gaiters being worn on the leg of a handler to test the striking ability of the snakes as well as biting strength. Once again no penetration occurred from multiple bites.
Hunters Element Venom Gaiters do not offer any guarantee of protecting the wearer; they should be worn only as an aid in resisting snake bites. Additionally, it should be noted that this product does not cover all of the user's vulnerable body parts. We recommend that the user wear thick leather boots and heavy-duty trousers to help reduce the risk.
If a snake comes in contact with the gaiters, we strongly recommend that the wearer remove the gaiters immediately and avoid all skin contact with any venom or the surrounding area. The entire gaiter should be comprehensively washed in methylated spirits and left to dry completely. When making any physical contact with the contaminated gaiters, the user should completely cover their skin. We recommend a face mask, protective eyewear, latex gloves and full-body overalls. Before any contact with the gaiter (even after cleaning), the user should contact Hunters Element to discuss additional precautions.
|
OPCFW_CODE
|
While people are generally interested in chatting with ChatGPT, as a developer I have been thinking of how to use OpenAI APIs for building business applications. Large Language Models (LLMs) have a data freshness problem. For example, as ChatGPT's knowledge cutoff date is September 2021, it won’t be able to answer questions which require the latest information such as the latest offers / promotions. Also, LLMs don’t have access to proprietary / confidential information. For example, you may have internal company documents you’d like to interact with via an LLM.
The first challenge is adding those documents to the LLM. We could try training the LLM on these documents, but this is time-consuming and expensive. And what happens when a new document is added? Training for every new document added is beyond inefficient; it is simply impossible.
So, how do we handle this problem? We can use retrieval-augmented generation. This technique allows us to retrieve relevant information from an external knowledge base and give that information to our LLM. The external knowledge base is our “window” into the world beyond the LLM’s training data. Over the last couple of months, I have been learning about implementing retrieval-augmented generation for LLMs by developing a tech POC using LangChain.js & OpenAI Embeddings API to allow you to chat & query with your own files.
After doing some research, I came up with the following plan and executed it:
Prepare the documents - I downloaded HTML files from the Mastercard Priceless website with the following command (I interrupted its execution and got a total of 171 files, which are stored in the data/docs folder of the repo):
wget --user-agent="Mozilla" --no-parent\
-e robots=off -r -m\
-P data/docs https://www.priceless.com/
Create embeddings of these documents and store them in a vectorstore.
When a question is input by the user, the frontend will send both the new question and the chat history, if any, to the backend.
If the chat history is empty, the backend will call the OpenAI Embeddings API to generate embeddings for the new question and then use the embeddings to do a similarity search in the vectorstore, which will return the top 4 related document chunks.
If the chat history is not empty, the backend will call the OpenAI Completion API to generate a standalone question based on the new question and the chat history. The backend will then call the OpenAI Embeddings API to generate embeddings for the standalone question and use them to do a similarity search in the vectorstore, which will return the top 4 related document chunks.
The document chunks returned from the vectorstore will be used as the context for the OpenAI Completion API to generate the final answer, which will be streamed to the frontend. Once text generation is completed, the related document chunks, as well as their metadata such as URLs, will be sent to the frontend to render the list of sources.
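For illustration, a minimal sketch of the retrieval step in Python (the project itself uses LangChain.js; the vectorstore choice, path and query below are illustrative assumptions):

# Assumes OPENAI_API_KEY is set in the environment.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()  # wraps the OpenAI Embeddings API
db = Chroma(persist_directory="data/chroma", embedding_function=embeddings)

# Embed the (standalone) question and fetch the top 4 related chunks;
# these chunks become the context for the Completion API call.
chunks = db.similarity_search("Entertainment in New York", k=4)
for chunk in chunks:
    print(chunk.metadata.get("source"), chunk.page_content[:80])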
This project supports 4 vectorstores: HNSWLib, Chroma, Milvus and Pinecone. Here are some basic facts collected from the internet.
The table below, screen-captured from the Python notebook here, shows what happened when the same set of questions was answered using the different vectorstores.
As you can see from the table, though the standalone questions generated by the LLM are almost the same, the similarity search results from the vectorstores differ, which causes the differences in the final answers. In some cases, the search results don’t contain any relevant document chunks, resulting in the LLM replying “I’m sorry …”, as it is unable to provide a useful answer from the context provided.
After completing the comparison of vectorstores, I also tried to compare embeddings functions.
I tried to test the embeddings functions of some open-source LLMs on my Mac (without a GPU). It turns out that, though the speed is too slow to deliver a reasonable user experience for end users, the answers are not bad.
The results below were captured after I used Vicuna (q4_1), GPT4All and OpenAI to generate embeddings for 3 HTML files which are returned by this chatbot when someone asks "Entertainment in New York". These files are split into 60 chunks with a chunk size of 500 tokens. Then I asked all models to use the generated embeddings vectors to run RetrievalQA on the same question. For comparison, I also used OpenAI + pre-generated embeddings vectors, which are stored at Pinecone, to run the same query. At last, I appended the answer from this chatbot. Refer to the two Python notebooks (part 1 & part 2) under the scripts/ folder for more details.
Here are some findings through the development of this project:
The capabilities of vectorstores vary. You need to carefully evaluate them and select the one most suitable to your business needs.
The OpenAI Embeddings API might be costly for a large set of documents. As a reference, $22 USD was charged by OpenAI to load the 171 HTML files stored in the folder data/docs, with a total size of around 180M.
To save costs, I used HNSWLib to store all embeddings generated by OpenAI in the local file system and then loaded them into the other vectorstores: Pinecone, Chroma and Milvus.
It might be worthwhile to use other embeddings methods, such as Sentence Transformers, which would not only save costs, but also eliminate the risks of leaking sensitive / confidential information.
In order to reduce source data sizes and save costs, I tried using html2text to convert all HTML files to pure text files. The result was not ideal: in most cases the chatbot couldn’t answer the user question or gave an incomplete answer. Eventually I had to switch back to using the HTML files as-is; you can tell when you tap any item in the list of sources in the frontend.
The capabilities of vectorstores vary. Enterprises need to carefully evaluate them and select the one most suitable to their business needs.
As of now, to build LLM-based applications, the best bet for most enterprises would be using OpenAI APIs. To save costs, other embeddings methods can be considered if they can work with the selected vectorstore to deliver similarity search results with good performance.
PS. You can play with the chatbot deployed on Netlify. The sources, including the Next.js code, Python notebooks, HTML files and HNSWLib data files, are all hosted on GitHub. Feel free to create an issue or submit a pull request. Enjoy!
|
OPCFW_CODE
|
#include "microestrutura.h"
//Construtores
Microestrutura::Microestrutura(void)
{
}
Microestrutura::Microestrutura(Matriz *matriz)
{
numGraos = matriz->numNucleos(); //Número de grãos da matriz
for (int i = 0; i < numGraos; i++)
{
listaDeGraos.push_back( Grao(i + 1, matriz) ); //Cria o grão de acordo com o identificador i+1
}
//@@@@@@
for (int i = 0; i < numGraos; i++) //Determina os vértices de cada grão
{
encontraIntersecoes(listaDeGraos[i], matriz); //Encontra os Pontos de Interseção dos Planos da Face do Grão
//@encontraIntersecoes(listaDeGraos[0], matriz);
filtraPontos(&pontosI); //Exclui os pontos repetidos e aqueles que não estão no interior do poliedro
for (int j = 0; j < pontosI.size(); j++)
{
listaDeGraos[i].setVertice(pontosI[j]); //Insere os vértices ao seu grão
//@@@listaDeGraos[0].setVertice(pontosI[j]);
vertices.push_back(pontosI[j]); //Armazena todos os vértices
}
}
cout << "Tamanho " << vertices.size();
limpaRepetidos(&vertices); //Remove repetidos
cout << "Tamanho " << vertices.size();
}
// Destructor
Microestrutura::~Microestrutura()
{
}

// Methods
Grao Microestrutura::getGrao(int id)
{
    return listaDeGraos[id - 1];
}
void Microestrutura::encontraIntersecoes(Grao grao, Matriz *matriz)
{
    int nVizinhos;
    vector<Ponto> vizAjust;      // Stores the neighbours' nuclei adjusted for the periodic boundary condition
    vector<Ponto> normal, inter; // Vectors holding the normal points and the intersections

    nVizinhos = grao.numeroFaces(); // Number of neighbours

    // Reset the point and plane storage vectors
    pontosI.clear();
    planoFace.clear();

    // Convert the points, taking the periodic boundary condition into account
    for (int i = 0; i < nVizinhos; i++)
    {
        vizAjust.push_back( convertePonto( grao.nucleo, getGrao( grao.getVizinhos(i) ).nucleo, matriz ) );
    }

    for (int i = 0; i < nVizinhos; i++)
    {
        // Defines the normal vectors: neighbour's centre minus grain's centre
        normal.push_back( soma( vizAjust[i], produto(-1, grao.nucleo) ) );
        // Defines the intersection vectors (for now valid only for growth with equal velocities)
        // Midpoint: 0.5 * (neighbour nucleus + grain nucleus)
        inter.push_back( produto( 0.5, soma( vizAjust[i], grao.nucleo ) ) );
        // Defines the planes
        planoFace.push_back( Plano( inter[i], normal[i] ) );
    }
    // Intersections of the planes taken 3 at a time
    for (int i = 0; i < nVizinhos; i++)
    {
        for (int j = i + 1; j < nVizinhos; j++)
        {
            for (int k = j + 1; k < nVizinhos; k++)
            {
                if ( intersecaoUnica( planoFace[i], planoFace[j], planoFace[k] ) ) // If there is exactly one intersection
                {
                    pontosI.push_back( intersecao( planoFace[i], planoFace[j], planoFace[k] ) ); // Computes it and appends it to pontosI
                }
            }
        }
    }
}
void Microestrutura::exibe() // For testing; may be removed
{
    for (int i = 0; i < pontosI.size(); i++)
    {
        pontosI[i].exibe();
    }
}

// Converts the point to its equivalent outside the matrix
Ponto Microestrutura::convertePonto(Ponto nucleoG, Ponto nucleoV, Matriz *matriz)
{
    int xG, yG, zG, xV, yV, zV; // Coordinates of the grain and of the neighbour
    int i, j, k;                // Helpers
    int col, lin, cot;          // Matrix dimensions
    int nx, ny, nz;

    col = matriz->getColunas();
    lin = matriz->getLinhas();
    cot = matriz->getCotas();

    // Split the coordinate values
    xG = nucleoG.x;
    xV = nucleoV.x;
    yG = nucleoG.y;
    yV = nucleoV.y;
    zG = nucleoG.z;
    zV = nucleoV.z;

    // Parameters i, j, k accounting for the periodic boundary conditions
    i = condInterno(xG, xV, col / 2) * condSinal(xG, xV); // Parameter for the x axis
    j = condInterno(yG, yV, lin / 2) * condSinal(yG, yV); // Parameter for the y axis
    k = condInterno(zG, zV, cot / 2) * condSinal(zG, zV); // Parameter for the z axis

    // Conversion
    nx = xV - i * col;
    ny = yV - j * lin;
    nz = zV - k * cot;

    Ponto N(nx, ny, nz);
    return N;
}
void Microestrutura::filtraPontos(vector<Ponto> *pontos)
{
    // Remove repeated points
    limpaRepetidos(pontos);

    // Remove points outside the polyhedron, keeping only the vertices
    for (int i = 0; i < pontosI.size(); i++) // For each point
    {
        for (int j = 0; j < planoFace.size(); j++) // For each face plane
        {
            // Tests the plane inequalities (the +1 is a tolerance for rounding errors in the computation)
            if ( produtoInterno(pontosI[i], planoFace[j].normal()) > planoFace[j].d + 1 )
            {
                pontosI.erase(pontosI.begin() + i);
                i--;
                break;
            }
        }
    }
}

void Microestrutura::limpaRepetidos(vector<Ponto> *pontos) // Removes repeated points
{
    for (int i = 0; i < pontos->size(); i++)
    {
        for (int j = i + 1; j < pontos->size(); j++)
        {
            if ( ((*pontos)[i].x == (*pontos)[j].x) && ((*pontos)[i].y == (*pontos)[j].y) && ((*pontos)[i].z == (*pontos)[j].z) ) // Repeated point found
            {
                pontos->erase(pontos->begin() + j); // Erases position j of the vector
                j--;
            }
        }
    }
}
vector<Ponto> Microestrutura::listaVertices()
{
return vertices;
}
|
STACK_EDU
|
I'm what you might call an early adopter. Shiny new technology to play with makes me happy. So I've been running the Windows 8 Release Preview (that's a fancy term for public beta) on two of my PCs for a while now. I even put it on a very elderly Compaq laptop before I risked it on computers I actually use, and it worked fine.
There's much to like about Windows 8, not least its super-speedy install (about 10 minutes on an SSD), and the fact that it runs OK on ancient hardware (unlike iOS).
But there's one thing I absolutely hate: there's no Start orb.
It started out as a Start button, and first appeared on Windows PCs back in 1995 as part of, um, Windows 95. Microsoft was jolly proud of it, even roping in the Rolling Stones' classic track Start Me Up as part of the marketing campaign.
The Start button made life so much easier: it was a quick way to launch programs. As it matured through Windows XP, Vista and Windows 7, it got a lot better: it became a nifty way to find stuff on your PC. You just clicked, started typing the first couple of letters of what you were looking for – a document, an email, whatever – and the box was populated with offerings. Usually, what you were looking for was right there.
It got prettier, too. In Windows 95, it was a rather ugly button. In the XP default theme, it looked like a green headache tablet, though if that gave you a headache, you could choose other themes, or even, if you preferred the vintage Windows 95 look, choose that.
By the time we got to Windows Vista in January 2007, the Start button had morphed into a glowing orb, from which you could launch programs, search for stuff, access the control panel and shut down your computer. Or put it to sleep. Or log off from the network. Or switch user. Or hibernate. Yes, all of those.
By Windows 7, the Start orb was a fixture in our lives. It sat there, unobtrusive, in the bottom left-hand corner of your screen, and glowed gently when you moused over it. Like the best technology, it didn't impinge on your consciousness. It just worked.
Who moved my orb?
Until it vanished. When you install Windows 8, you're first of all presented with the Metro UI, which is quite a surprise if you haven't dealt with it before. I think it will work well on touch devices, but on my desktop, I click straight through to the desktop, which reassuringly looks like Windows 7.
Except for the lack of the Start button. Drop your eye down to the bottom left-hand corner and – it's gone. If you mouse over the hot corner, a thumbnail of the Metro start screen floats into view, which you can click on, and which takes you to the Metro start screen. (Duh.) Or you can mouse into the top right hand corner and the Charms bar floats into view. Click on either Start or Search, and you'll be taken to the Metro screen for those tasks. Or you can hit either Windows + w for the Metro search screen, or Windows + q for the Metro Start screen. I absolutely hate this. Instead of one quick click, it means faffing about with the mouse to find the hot corner, or remembering which combination of keystrokes brings up the dialog you want.
Microsoft says that the "telemetry" from the user data returned to it by the Customer Experience Improvement Program suggested that people don't use the Start orb much. And sure, there are easier ways to launch programs in Windows 7 than by invoking them via Start: you can drag a shortcut on to the desktop, or pin a shortcut to the taskbar. I think it's more nuanced than that. Yes, I pinned my most-used applications to the taskbar in Windows 7. But the absence of the Start orb in Windows 8 is driving me crazy; I hadn't realised how much I used it to search for stuff.
Microsoft has a lot riding on the launch of Windows 8 – it's a next-generation OS for touchscreen devices. But I suspect that if the Start orb doesn't make it into the final version of Windows 8, then once punters start using their new PCs there will be a lot of grumbling about its absence, because while it might not make much sense in a touch environment, it makes loads of sense on a desktop.
However, there is an answer. You can do a registry hack to return the Start orb, and there are various bits of software that resurrect it too, including one from Stardock.
But here's hoping Microsoft sees sense and restores the Start orb in the final version of Windows 8. Please, Microsoft?
|
OPCFW_CODE
|
Injecting boundary conditions is presumably the simplest way to impose them numerically. It amounts to simply overwriting, at every timestep or every few timesteps, the numerical solution for each incoming characteristic variable, or its time derivative, with the conditions that they should satisfy.
Stability of the injection approach can be analyzed through GKS theory, since energy estimates are, in general, not available for it (the reason for this should become clearer when discussing the projection and penalty methods). Stability analyses not only depend on the spatial approximation (and, in the fully-discrete case, on the time integration) but are in general also equation-dependent. Therefore, stability is, in general, difficult to establish, especially for high-order schemes and nontrivial systems. For this reason a semi-discrete eigenvalue analysis is often performed. Even though this only provides necessary conditions for stability (namely, the von Neumann condition (7.80)), it serves as a rule of thumb and helps discard obviously unstable schemes.
As an example, we discuss the advection equation with a “freezing” boundary condition, and we present an eigenvalue analysis as a typical example of those done for more complicated systems and injection boundary conditions. Figure 4 shows the semi-discrete spectrum for different numbers of collocation points. As required by the strong version of the von Neumann condition, no eigenvalue with a positive real component is present [cf. Eq. (7.82)]. We also note that, as discussed at the beginning of Section 9.8, the spectral radius grows with the number of collocation points.
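As an illustration of such an eigenvalue analysis, here is a minimal sketch (the difference operator and all numbers are assumptions, not the scheme behind Figure 4):

import numpy as np

n, a = 50, 1.0
h = 1.0 / (n - 1)

# Central-difference first derivative with one-sided closures at the ends.
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
D[0, :3] = np.array([-3.0, 4.0, -1.0]) / (2 * h)
D[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2 * h)

L = -a * D
L[0, :] = 0.0  # injection: the inflow value is simply overwritten ("frozen"), so du0/dt = 0

eigenvalues = np.linalg.eigvals(L)
print("largest real part:", eigenvalues.real.max())  # von Neumann: must not be positive

A positive real part that persists under grid refinement would rule the scheme out; a non-positive one is only a necessary condition for stability, as noted above.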
There are other difficulties with the injection method, besides the fact that stability results are usually partial and incomplete for realistic systems and/or high-order or spectral methods. One of them is that it sometimes happens that a full GKS analysis can actually be carried out for a simple problem and scheme, and the result turns out to be stable but not time-stable (see Section 7.4 for a discussion of time-stability); or the scheme is not time-stable when applied to a more complicated system (see, for example, [116, 1]).
Seeking stable numerical boundary conditions for realistic systems, which preserve the accuracy of high-order finite-difference and spectral methods, has been a recurring theme in numerical methods for time-dependent partial differential equations for a long time, especially for nontrivial domains, with substantial progress over the last decade, most notably with the penalty method discussed below. Before doing so, we review another method, which improves on the injection one in that stability can be shown for rather general systems and arbitrary high-order FD schemes.
Assume that a given IBVP is well posed and admits an energy estimate, as discussed in Section 5.2. Furthermore, assume that, up to the control of boundary terms, a semi-discrete approximation to it also admits an energy estimate. The key idea of the projection method [317, 319, 318] is to impose the boundary conditions by projecting at each time the numerical solution to the space of gridfunctions satisfying those conditions, the central aspect being that the projection is chosen to be orthogonal with respect to the scalar product under which a semi-discrete energy estimate in the absence of boundaries can be shown. The orthogonality of the projection then guarantees that the estimate including the control of the boundary term holds.
In more detail, the spatial approximation prior to the imposition of the boundary conditions is written as a semi-discrete system, Eq. (10.6). The boundary conditions are then imposed by applying the projection to the right-hand side of the semi-discrete equation (10.6).
Details on how to explicitly construct the projection can be found in the references cited above. The orthogonal projection method guarantees stability for a large class of problems admitting a continuum energy estimate. However, its implementation is somewhat involved.
A simple and robust method for imposing numerical boundary conditions, either at outer or interpatch boundaries, such as those appearing in domain decomposition approaches, is through penalty terms. The boundary conditions are not imposed strongly but weakly, preserving the convergence order of the spatial approximation and leading to numerical stability for a large class of problems. It can be applied both to FD and to spectral approximations. In fact, the spirit of the method can be traced back to finite-element discontinuous Galerkin methods (see the cited literature for more recent results). Terms are added to the evolution equations at the boundaries to consistently penalize the mismatch between the numerical solution and the boundary conditions that the exact solution is subject to.
As an example, consider the half-space IBVP for the advection equation.
We first consider a semi-discrete approximation using some FD operator satisfying SBP with respect to a scalar product, which we assume to be either diagonal or restricted full (see Section 8.3). As usual, h denotes the spacing between gridpoints.
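To make this concrete, here is a minimal numerical sketch of the penalty (SAT) approach for the advection equation u_t + a u_x = 0 with inflow data at x = 0, using a second-order diagonal-norm SBP operator. The operator, penalty strength and all parameter values are illustrative assumptions, not taken from the text.

import numpy as np

# Grid and advection speed (illustrative values)
n, a = 101, 1.0
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# Second-order SBP first-derivative operator: central in the interior,
# one-sided at the two boundary points.
D = np.zeros((n, n))
D[0, 0], D[0, 1] = -1.0, 1.0
D[-1, -2], D[-1, -1] = -1.0, 1.0
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h

# Inverse of the diagonal SBP norm H = h * diag(1/2, 1, ..., 1, 1/2)
Hinv = np.full(n, 1.0 / h)
Hinv[0] = Hinv[-1] = 2.0 / h

tau = a  # penalty strength; tau >= a/2 yields a semi-discrete energy estimate

def g(t):
    return np.sin(-2.0 * np.pi * a * t)  # inflow data taken from the exact solution

def rhs(t, u):
    du = -a * (D @ u)
    du[0] += tau * Hinv[0] * (g(t) - u[0])  # SAT term: penalize the boundary mismatch
    return du

# Classical RK4 time stepping
u, t = np.sin(2.0 * np.pi * x), 0.0
dt = 0.2 * h / a
for _ in range(500):
    k1 = rhs(t, u)
    k2 = rhs(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    t += dt

print("max error vs exact solution:", np.abs(u - np.sin(2.0 * np.pi * (x - a * t))).max())

The sign and size of the penalty are chosen so that its contribution to the time derivative of the discrete energy is non-positive, which is exactly the energy argument sketched here.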
In the case of diagonal SBP norms it is straightforward to derive similar energy estimates for general linear symmetric hyperbolic systems of equations in several dimensions, simply by working with each characteristic variable at a time, at each boundary. A penalty term is applied to the evolution term of each incoming characteristic variable at a time, as in Eq. (10.18), where is replaced by the corresponding characteristic speed. In particular, edges and corners are dealt with by simply imposing the boundary conditions with respect to the normal to each boundary, and an energy estimate follows.
The global semi-discrete convergence rate can be estimated as follows. Define the error gridfunction as the difference between the numerical solution and the exact one evaluated at the gridpoints.
Using Eq. (10.26) and the SBP property, the norm of the error can be shown to converge pointwise to zero at the expected rate.
As in the FD case, we summarize the method through the example of the advection problem (10.14, 10.15, 10.16), except that now we consider the bounded domain [-1, 1] and apply the boundary condition at the inflow endpoint. Furthermore, we first consider a truncated expansion in Legendre polynomials. A penalty term with strength τ is added to the evolution equation at the last collocation point.
Using the discrete energy given by Eq. (9.95) and the SBP property associated with Gauss quadratures discussed in Section 9.4, a discrete energy estimate follows exactly as in the FD case, provided the penalty strength τ is chosen appropriately.
Devising a Chebyshev penalty scheme that guarantees stability is more convoluted; in particular, the advection equation does not admit an energy estimate in the Chebyshev norm, for example. Stability in a suitable norm can be established using the Chebyshev–Legendre method, where the Chebyshev–Gauss–Lobatto nodes are used for the approximation, but the Legendre ones for satisfying the equation. In this approach, the penalty method is global, because it adds terms to the right-hand side of the equations not only at the endpoint, but at all other collocation points as well.
A simpler approach, where the penalty term is only applied to the boundary collocation point, as in the Legendre and FD cases, is to show stability in a different norm. For example, it can be shown that a penalty term as in Eq. (10.33) is stable for the Chebyshev–Gauss–Lobatto case in the norm defined by the Chebyshev weight [cf. Eq. (9.17)].
Living Rev. Relativity 15, (2012), 9
This work is licensed under a Creative Commons License.
|
OPCFW_CODE
|
How To Install A Ruby 1.8 Stack on Ubuntu 8.10 From Scratch
Want to install Ruby, RubyGems, and a collection of common gems on Ubuntu 8.10 (Intrepid Ibex) in just a few minutes? Here's the skinny.
If you want, you could use something like Passenger-Stack to do the legwork for you, but I prefer doing manual installations so I know the full score. There are several "how to install Ruby on Ubuntu Intrepid" guides out there, but none of them got it totally right for me. I've just used these instructions twice in a row, so I know they work. Another bonus is that you get ImageMagick and rmagick installed, which some people get really frustrated with.
Note: These instructions assume you're running as root for convenience. You can alternatively sudo every line or just run sudo bash until you're done.
Install the system level basics
apt-get update apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev sqlite3 libsqlite3-dev locate git-core apt-get -y install curl wget
Install ImageMagick (for rmagick)
apt-get -y install libmagick9-dev
Install Ruby 1.8 (MRI)
apt-get -y install ruby1.8-dev ruby1.8 ri1.8 rdoc1.8 irb1.8 libreadline-ruby1.8 libruby1.8 libopenssl-ruby ln -s /usr/bin/ruby1.8 /usr/bin/ruby ln -s /usr/bin/rdoc1.8 /usr/bin/rdoc ln -s /usr/bin/irb1.8 /usr/bin/irb ln -s /usr/bin/ri1.8 /usr/bin/ri
Note: Some advise not to use the packaged version of Ruby on Ubuntu due to its performance. I'm not worried about this. If you are, replace this section with a download of the Ruby source code (http://ftp.ruby-lang.org/pub/ruby/1.8/ruby-1.8.7-p72.tar.gz) and untar, configure, and make install it by hand. You're on your own with that though.
Install RubyGems (from source)
curl http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz | tar -xzv cd rubygems-1.3.5 && ruby setup.rb install cd .. && rm -rf rubygems-1.3.5 ln -s /usr/bin/gem1.8 /usr/local/bin/gem gem sources -a http://gems.github.com # add Github as a gem source, you won't regret it
Install a set of starter Ruby gems
gem install rake nokogiri hpricot builder cheat daemons json uuid rmagick sqlite3-ruby fastthread rack
By this point you now have Ruby installed with RubyGems and a collection of gems (including rmagick), and you can branch off where you want. If you want to develop a Sinatra app, install the sinatra gem and you're away. If you want to install Rails, gem install rails. And so forth.
If you want to install Apache with Passenger for hosting your apps, however, read on..
Optional: Install Apache and Passenger
echo "deb http://apt.brightbox.net hardy main" > /etc/apt/sources.list.d/brightbox.list wget -q -O - http://apt.brightbox.net/release.asc | apt-key add - apt-get update apt-get -y install libapache2-mod-passenger
Note: Brightbox's Passenger package is officially for Ubuntu 8.04 (Hardy) but it works fine on Intrepid in my experience.
If you need PHP5 as well:
apt-get -y install php5 libapache2-mod-php5 php5-mysql /etc/init.d/apache2 restart
Optional: Need a very, very basic firewall?
apt-get -y install ufw ufw allow to 0.0.0.0/0 port 80 ufw allow to 0.0.0.0/0 port 22 # (or whichever port you use for ssh) ufw allow to 0.0.0.0/0 port 25 # (if you need mail in) ufw enable
Note: You're installing the firewall, not me, so don't complain if you get locked out because of the firewall or something :) Ensure you have the correct ports and/or a console access to your server just in case (such as Linode supplies).
|
OPCFW_CODE
|
As final year draws to a close, it’s time to look back and reflect on what exactly has happened over this period of time.
When we started this all the way back in September, none of us really knew what we wanted to do. I was no exception, though I knew I wanted it to involve controlling the TV. I didn’t really know why, and it didn’t really have much of a purpose. This has always been one of my weak points as a designer. In a lot of ways, I’d class myself as more of a developer: taking someone else’s ideas which had a clear purpose, and making them work. After having Ideas Day, and speaking to some amazing people, it became clear to me which way I should be going: the gestural TV control interface route. However, at this point, it was just a gestural TV control interface project without much of a purpose. After all, I wasn’t studying Applied Computing, but I am studying Digital Interaction Design.
What I was doing had to have purpose. As I started thinking about TV and how programming is consumed, the content became more of a prevalent topic for me. It then occurred to me that the British are quite ignorant of different cultures.
Knowing I wanted to investigate different cultures, I needed a method to do this. After brainstorming a few ideas, I chose to send out little stickmen. It took a long time to get these packs made up due to the custom packaging and actually making the stickmen, but the results were great and I was very pleased with them. I then had to start looking at how to implement the results. I realised I needed a more powerful computer, and I received one! I managed to program one culture's gesture set and send commands to the TV depending on the gestures performed. This was big news and I felt confident.
With so much of the heavy programming being completed in Phase 1, I could now spend Phase 2 working on the phone app. As this was the primary part of the project where any type of visual design language could be implemented, I had to make sure that the visual aesthetic was perfect. I chose to use the same font and general design language which I have used all throughout this project, but also kept within the Windows Phone UI Design Guidelines. I chose to represent different cultures by the countries which are generally associated with these cultures. Since I was slightly ahead, I also spent a lot of this time helping others with their coding and electronics. I am now proficient in RFID technologies, SOMO modules, AppleScript, current sensors, basic Xcode, Arduino code and general determination!
When it came to integrating the Kinect sensor and the Windows Phone app together, there was a steep learning curve. I had to tell the computer side of the code to send a message containing the culture of the day to all the registered phones, once a user was detected by the Kinect. Once I overcame this battle, I had successfully implemented push notifications on the handset, with a full backend system ready to accept more Windows Phones’ subscriptions to these notifications. This means that at the Degree Show, other people with Windows Phones can download the app and receive the push notifications on their own handsets. I was very proud of that achievement and I just hope Windows Phone users attend the Degree Show!
The project is almost completely over. It is a sad time. Am I glad to see the back of it? No. I will hopefully continue to develop the application. I wouldn’t mind commercialising the project, but at the same time, I’m not going to actively seek funding. I’m very thankful for all of the things that this project has taught me. The ethnography, the research, the technology, the presentation skills. It’s been a wonderful time.
|
OPCFW_CODE
|
[Bug]:
Name and Version
rapidfort/postgresql:15.1, 15.1-debian-11, 15.1.0, 15.1.0-debian-11-r19, latest
Which runtime are you using to reproduce this issue?
[ ] Kubernetes
[X] Docker Compose
[ ] Docker
Is this issue reproducible on the original source image?
Reproducible
Could you please identify the category? Details in TROUBLE_SHOOTING.md
Coverage missing
What steps will reproduce the bug?
POSTGRESQL_TIMEZONE=UTC
Are you using any custom parameters or values?
POSTGRESQL_TIMEZONE=UTC
What is the expected behavior?
2023-01-07 12:38:47.745 GMT [1] LOG: database system is ready to accept connections
What do you see instead?
postgresql 12:40:18.66 INFO ==> ** Starting PostgreSQL **
2023-01-07 12:40:18.684 GMT [1] LOG: invalid value for parameter "TimeZone": "UTC"
2023-01-07 12:40:18.684 GMT [1] FATAL: configuration file "/opt/bitnami/postgresql/conf/postgresql.conf" contains errors
exited with code 1
Additional information
No response
Thank you for pointing this out @ddominguezcorcoba. We have debugged this issue. Kindly check using rapidfort/postgresql:latest.
We're now resolving the other bug you mentioned separately.
I am afraid the issue is not entirely resolved.
When starting the container setting the (bitnami) environment variable POSTGRESQL_TIMEZONE set as:
POSTGRESQL_TIMEZONE=UTC
the container complains about an invalid value.
For this to work, UTC needs to be assigned as "UTC+0:00".
In any case, using the official JDBC driver is NOT possible, regardless of whether the container is started with POSTGRESQL_TIMEZONE set or not. I tried using the same nomenclature for setting the driver properties, to no avail.
When using the original bitnami image, everything works as expected.
Please use any of the latest tags. We have added fixes to them.
rapidfort/postgresql:latest
rapidfort/postgresql:15.1.0-debian-11-r20
rapidfort/postgresql:15.1-debian-11
If these images already exist on your system, kindly remove them and pull again.
We'll update the other tags (older versions) ASAP. Please let us know if you need a particular image/tag to be fixed. We'll prioritize it.
You are right, I just pulled the latest image and the time zone issue is no longer present.
Thank you very much, Anmol.
|
GITHUB_ARCHIVE
|
#!/usr/bin/env python
"""
Command-line interface to the Sumatra computational experiment management tool.
This is a developer tool which does exactly the same as the 'smt' command
except that it also outputs profiling information about Sumatra upon successful
completion of the run.
"""
import sys
import cProfile
import pstats
from textwrap import dedent
from argparse import ArgumentParser
from sumatra import commands, __version__
from sumatra.versioncontrol.base import VersionControlError
from sumatra.recordstore.base import RecordStoreAccessError
usage = "smt_profile [profiling_options] <subcommand> [cmd_options] [args]"
description = dedent("""
Profile a Sumatra subcommand (run 'smt' for available commands).
This runs the given command using cProfile and upon completion
outputs a diagnostic message on the screen. It also dumps the raw
profiling data into a file which can be analysed using Python's
'pstats' module or a graphical user interface like 'RunSnakeRun'.
Apart from the profiling options this should be the exact same
command line that would be used with 'smt' itself. Example:
"smt_profile run -r 'Informative message.' defaults.param". Note
that enabling profiling may considerably slow down execution.
""")
parser = ArgumentParser(usage=usage, description=description)
parser.add_argument('-n', metavar='N', type=int, default=20,
help="number of lines to print in profiling stats. Default: 20.")
parser.add_argument('-s', '--sorting-method', metavar='METHOD', default='cumulative',
help="method used to sort the profiling stats. "
"This can be any of the methods accepted by "
"pstats.Stats.sort_stats(). Default: 'cumulative'")
parser.add_argument('-o', '--output-file', metavar='PATH',
default='profiling_stats.prof',
help="Filename for storing the generated profiling "
"data (in binary format, as generated by "
"cProfile). Default: 'profiling_stats.prof'")
# The parser should only parse options up to the first valid
# Sumatra subcommand; everything after that should not be
# interpreted as an option for the 'profile' command but should be
# passed on to the subcommand. We achieve this by finding the
# first valid subcommand and splitting the list of options into
# "before" and "after". Only the first half is processed here.
subcmds = [cmd for cmd in sys.argv if cmd in commands.modes]
i = sys.argv.index(subcmds[0]) if subcmds != [] else len(sys.argv)
argv_profile, argv_cmd = sys.argv[1:i], sys.argv[i+1:]
args = parser.parse_args(argv_profile)
if subcmds:
    cmd = subcmds[0]
else:
    parser.error('Please specify a command which you would like to profile.\n\n'
                 'Available commands:\n {}'.format("\n ".join(commands.modes)))
stats_file = args.output_file
# Run the chosen subcommand under cProfile, reporting Sumatra errors cleanly.
try:
    cProfile.run("from sumatra import commands; "
                 "commands.{}({})".format(cmd, argv_cmd), stats_file)
except (VersionControlError, RecordStoreAccessError) as err:
    print("Error: {}".format(err))
    sys.exit(1)
p = pstats.Stats(stats_file)
p.sort_stats(args.sorting_method).print_stats(args.n)
|
STACK_EDU
|
#include "Main.hpp"
void Map::load(TCODZip &zip) {
seed=zip.getInt();
init(false);
for (int i=0; i < width*height; i++) {
tiles[i].explored=zip.getInt();
}
}
void Map::save(TCODZip &zip) {
zip.putInt(seed);
for (int i=0; i < width*height; i++) {
zip.putInt(tiles[i].explored);
}
}
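// Actors store one presence flag per optional component, then the component data itself,
// so load() can reconstruct exactly the components this actor owned.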
void Actor::load(TCODZip &zip) {
x=zip.getInt();
y=zip.getInt();
ch=zip.getInt();
col=zip.getColor();
name=_strdup(zip.getString());
blocks=zip.getInt();
bool hasAttacker=zip.getInt();
bool hasDestructible=zip.getInt();
bool hasAi=zip.getInt();
bool hasPickable=zip.getInt();
bool hasContainer=zip.getInt();
if ( hasAttacker ) {
attacker = new Attacker(0.0f);
attacker->load(zip);
}
if ( hasDestructible ) {
destructible = Destructible::create(zip);
}
if ( hasAi ) {
ai = Ai::create(zip);
}
if ( hasPickable ) {
pickable = Pickable::create(zip);
}
if ( hasContainer ) {
container = new Container(0);
container->load(zip);
}
}
void Actor::save(TCODZip &zip) {
zip.putInt(x);
zip.putInt(y);
zip.putInt(ch);
zip.putColor(&col);
zip.putString(name);
zip.putInt(blocks);
zip.putInt(attacker != NULL);
zip.putInt(destructible != NULL);
zip.putInt(ai != NULL);
zip.putInt(pickable != NULL);
zip.putInt(container != NULL);
if ( attacker ) attacker->save(zip);
if ( destructible ) destructible->save(zip);
if ( ai ) ai->save(zip);
if ( pickable ) pickable->save(zip);
if ( container ) container->save(zip);
}
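// Containers store their capacity, then a count-prefixed list of contained actors.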
void Container::load(TCODZip &zip) {
size=zip.getInt();
int nbActors=zip.getInt();
while ( nbActors > 0 ) {
Actor *actor=new Actor(0,0,0,NULL,TCODColor::white);
actor->load(zip);
inventory.push(actor);
nbActors--;
}
}
void Container::save(TCODZip &zip) {
zip.putInt(size);
zip.putInt(inventory.size());
for (Actor **it=inventory.begin(); it != inventory.end(); it++) {
(*it)->save(zip);
}
}
void Destructible::load(TCODZip &zip) {
maxHp=zip.getFloat();
hp=zip.getFloat();
defense=zip.getFloat();
corpseName=_strdup(zip.getString());
}
void Destructible::save(TCODZip &zip) {
zip.putFloat(maxHp);
zip.putFloat(hp);
zip.putFloat(defense);
zip.putString(corpseName);
}
void PlayerDestructible::save(TCODZip &zip) {
zip.putInt(PLAYER);
Destructible::save(zip);
}
void MonsterDestructible::save(TCODZip &zip) {
zip.putInt(MONSTER);
Destructible::save(zip);
}
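// Factory helper: save() wrote a type tag first, so read it back and
// instantiate the matching subclass before loading its fields.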
Destructible *Destructible::create(TCODZip &zip) {
DestructibleType type=(DestructibleType)zip.getInt();
Destructible *destructible=NULL;
switch(type) {
case MONSTER : destructible=new MonsterDestructible(0,0,NULL,0); break;
case PLAYER : destructible=new PlayerDestructible(0,0,NULL); break;
}
destructible->load(zip);
return destructible;
}
void Attacker::load(TCODZip &zip) {
power=zip.getFloat();
}
void Attacker::save(TCODZip &zip) {
zip.putFloat(power);
}
void MonsterAi::load(TCODZip &zip) {
moveCount=zip.getInt();
}
void MonsterAi::save(TCODZip &zip) {
zip.putInt(MONSTER);
zip.putInt(moveCount);
}
void ConfusedMonsterAi::load(TCODZip &zip) {
nbTurns=zip.getInt();
oldAi=Ai::create(zip);
}
void ConfusedMonsterAi::save(TCODZip &zip) {
zip.putInt(CONFUSED_MONSTER);
zip.putInt(nbTurns);
oldAi->save(zip);
}
void PlayerAi::load(TCODZip &zip) {
}
void PlayerAi::save(TCODZip &zip) {
zip.putInt(PLAYER);
}
Ai *Ai::create(TCODZip &zip) {
AiType type=(AiType)zip.getInt();
Ai *ai=NULL;
switch(type) {
case PLAYER : ai = new PlayerAi(); break;
case MONSTER : ai = new MonsterAi(); break;
case CONFUSED_MONSTER : ai = new ConfusedMonsterAi(0,NULL); break;
}
ai->load(zip);
return ai;
}
void Healer::load(TCODZip &zip) {
amount=zip.getFloat();
}
void Healer::save(TCODZip &zip) {
zip.putInt(HEALER);
zip.putFloat(amount);
}
void LightningBolt::load(TCODZip &zip) {
range=zip.getFloat();
damage=zip.getFloat();
}
void LightningBolt::save(TCODZip &zip) {
zip.putInt(LIGHTNING_BOLT);
zip.putFloat(range);
zip.putFloat(damage);
}
void Confuser::load(TCODZip &zip) {
nbTurns=zip.getInt();
range=zip.getFloat();
}
void Confuser::save(TCODZip &zip) {
zip.putInt(CONFUSER);
zip.putInt(nbTurns);
zip.putFloat(range);
}
void Fireball::save(TCODZip &zip) {
zip.putInt(FIREBALL);
zip.putFloat(range);
zip.putFloat(damage);
}
Pickable *Pickable::create(TCODZip &zip) {
PickableType type=(PickableType)zip.getInt();
Pickable *pickable=NULL;
switch(type) {
case HEALER : pickable=new Healer(0); break;
case LIGHTNING_BOLT : pickable=new LightningBolt(0,0); break;
case CONFUSER : pickable=new Confuser(0,0); break;
case FIREBALL : pickable=new Fireball(0,0); break;
}
pickable->load(zip);
return pickable;
}
void Gui::load(TCODZip &zip) {
int nbMessages=zip.getInt();
while (nbMessages > 0) {
const char *text=zip.getString();
TCODColor col=zip.getColor();
message(col,text);
nbMessages--;
}
}
void Gui::save(TCODZip &zip) {
zip.putInt(log.size());
for (Message **it=log.begin(); it != log.end(); it++) {
zip.putString((*it)->text);
zip.putColor(&(*it)->col);
}
}
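// Engine::load doubles as the main menu: pick New game / Continue / Exit,
// then restore the full game state from game.sav when continuing.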
void Engine::load() {
engine.gui->menu.clear();
engine.gui->menu.addItem(Menu::NEW_GAME,"New game");
if (TCODSystem::fileExists("game.sav")){
engine.gui->menu.addItem(Menu::CONTINUE,"Continue");
}
engine.gui->menu.addItem(Menu::EXIT,"Exit");
Menu::MenuItemCode menuItem=engine.gui->menu.pick();
if (menuItem == Menu::EXIT || menuItem == Menu::NONE){
// Exit or window closed
exit(0);
} else if (menuItem == Menu::NEW_GAME){
// New game
engine.term();
engine.init();
} else {
TCODZip zip;
// continue a saved game
engine.term();
zip.loadFromFile("game.sav");
// load the map
int width=zip.getInt();
int height=zip.getInt();
map = new Map(width,height);
map->load(zip);
// then the player
player=new Actor(0,0,0,NULL,TCODColor::white);
actors.push(player);
player->load(zip);
// then the stairs
stairs=new Actor (0,0,0,NULL,TCODColor::white);
stairs->load(zip);
actors.push(stairs);
// then all other actors
int nbActors=zip.getInt();
while ( nbActors > 0 ) {
Actor *actor = new Actor(0,0,0,NULL,TCODColor::white);
actor->load(zip);
actors.push(actor);
nbActors--;
}
// finally the message log
gui->load(zip);
// to force FOV recomputation
gameStatus=STARTUP;
}
}
void Engine::save() {
if ( player->destructible->isDead() ) {
TCODSystem::deleteFile("game.sav");
} else {
TCODZip zip;
// save the map first
zip.putInt(map->width);
zip.putInt(map->height);
map->save(zip);
// then the player
player->save(zip);
//then save the stairs
stairs->save(zip);
// then all the other actors
zip.putInt(actors.size()-2);
for (Actor **it=actors.begin(); it!=actors.end(); it++) {
if ( *it != player && *it != stairs ) {
(*it)->save(zip);
}
}
// finally the message log
gui->save(zip);
zip.saveToFile("game.sav");
}
}
|
STACK_EDU
|
How to know whether any process is bound to a Unix domain socket?
I'm writing a Unix domain socket server for Linux.
A peculiarity of Unix domain sockets I quickly found out is that, while creating a listening Unix socket creates the matching filesystem entry, closing the socket doesn't remove it. Moreover, until the filesystem entry is removed manually, it's not possible to bind() a socket to the same path again: bind() fails with EADDRINUSE if the path it is given already exists in the filesystem.
As a consequence, the socket's filesystem entry needs to be unlink()'ed on server shutdown to avoid getting EADDRINUSE on server restart. However, this cannot always be done (e.g. after a server crash). Most FAQs, forum posts and Q&A websites I found only advise, as a workaround, to unlink() the socket prior to calling bind(). In this case, however, it becomes desirable to know whether a process is bound to this socket before unlink()'ing it.
Indeed, unlink()'ing a Unix socket while a process is still bound to it and then re-creating the listening socket doesn't raise any error. As a result, however, the old server process is still running but unreachable: the old listening socket is "masked" by the new one. This behavior has to be avoided.
Ideally, the socket API would expose the same "mutual exclusion" behavior for Unix domain sockets that it exposes when binding TCP or UDP sockets: "I want to bind socket S to address A; if a process is already bound to this address, just complain!" Unfortunately this is not the case...
Is there a way to enforce this "mutual exclusion" behavior? Or, given a filesystem path, is there a way to know, via the socket API, whether any process on the system has a Unix domain socket bound to this path? Should I use a synchronization primitive external to the socket API (flock(), ...)? Or am I missing something?
Thanks for your suggestions.
Note : Linux's abstract namespace Unix sockets seem to solve this issue, as there is no filesystem entry to unlink(). However, the server I'm writing aims to be generic : it must be robust against both types of Unix domain sockets, as I am not responsible for choosing listening addresses.
I know I am very late to the party and that this was answered a long time ago, but I just encountered this while searching for something else, and I have an alternate proposal.
When you encounter the EADDRINUSE return from bind() you can enter an error checking routine that connects to the socket. If the connection succeeds, there is a running process that is at least alive enough to have done the accept(). This strikes me as being the simplest and most portable way of achieving what you want to achieve. It has drawbacks in that the server that created the UDS in the first place may actually still be running but "stuck" somehow and unable to do an accept(), so this solution certainly isn't fool-proof, but it is a step in the right direction I think.
If the connect() fails then go ahead and unlink() the endpoint and try the bind() again.
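For illustration, here is a minimal sketch of that probe-then-reclaim logic in Python (the socket path is made up, and real code would want tighter error handling):
import errno
import os
import socket

SOCK_PATH = "/tmp/myserver.sock"  # illustrative path

def bind_or_reclaim(path):
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        srv.bind(path)
    except OSError as e:
        if e.errno != errno.EADDRINUSE:
            raise
        # Probe the existing path: if connect() succeeds, a live server owns it.
        probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            probe.connect(path)
            srv.close()
            raise RuntimeError("another server is listening on " + path)
        except ConnectionRefusedError:
            # Nobody is listening behind the entry: it is stale, reclaim it.
            os.unlink(path)
            srv.bind(path)
        finally:
            probe.close()
    srv.listen(5)
    return srv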
I have tested this and it appears to work as advertised. Brilliant!
I don't think there is much to be done beyond things you have already considered. You seem to have researched it well.
There are ways to determine whether a process is bound to a Unix socket path (obviously lsof and netstat do it), but they are complicated and system-dependent enough that I question whether they are worth the effort to deal with the problems you raise.
You are really raising two problems - dealing with name collisions with other applications and dealing with previous instances of your own app.
By definition, multiple instances of your program should not be trying to bind to the same path, so that probably means you only want one instance to run at a time. If that's the case, you can just use the standard pid-file lock technique so two instances don't run simultaneously. You shouldn't be unlinking the existing socket, or even running, if you can't get the lock. This takes care of the server crash scenario as well. If you can get the lock, then you know you can unlink the existing socket path before binding.
There is not much you can do AFAIK to control other programs creating collisions. File permissions aren't perfect, but if the option is available to you, you could put your app in its own user/group. If there is an existing socket path and you don't own it, don't unlink it; put out an error message and let the user or sysadmin sort it out. Using a config file to make it easily changeable - and available to clients - might work. Beyond that you almost have to go to some kind of discovery service, which seems like massive overkill unless this is a really critical application.
On the whole you can take some comfort that this doesn't actually happen often.
Thanks for your answer. Using a traditional lockfile system is admittedly the safest way to go. Also, as to whether a service discovery system is overkill or not : ironically enough, this server is planned to be part of a service discovery system by itself (service "registration" system seems more appropriate). This should answer your question ;-)
Assuming you only have one server program that opens that socket.
Then what about this:
Exclusively create a file that contains the PID of the server process (maybe also the path of the socket)
If you succeed, then write your PID (and socket path) there and continue creating the socket.
If you fail, the socket was created before (most likely), but the server may be dead. Therefore read the PID from the file that exists, and then check that such a process still exists (e.g. using kill() with signal 0):
If a process exists, it may be the server process, or it may be an unrelated process
(More steps may be needed here)
If no such process exists, remove the file and begin trying to create it exclusively.
Whenever the process terminates, remove the file after having closed (and removed) the socket.
If you place the socket and the lock file both in a volatile filesystem (/tmp in older ages, /run in modern times), then a reboot will clear old sockets and lock files automatically, most likely.
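If it helps, here is a bare-bones sketch of that scheme in Python (the pid-file path is illustrative, and note that kill() with signal 0 can also raise a permission error when the PID belongs to another user's process):
import os

PIDFILE = "/run/myserver.pid"  # illustrative location on a volatile filesystem

def acquire_pidfile(path):
    while True:
        try:
            # O_EXCL makes the creation atomic: only one process can win.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except FileExistsError:
            with open(path) as f:
                pid = int(f.read())
            try:
                os.kill(pid, 0)  # signal 0 only checks that the process exists
            except ProcessLookupError:
                os.unlink(path)  # stale lock file: remove and retry the create
                continue
            raise RuntimeError("server already running with pid %d" % pid)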
Unless administrators like to play with kill -9 you could also establish a signal handler that tries to remove the lock file when receiving fatal signals.
|
STACK_EXCHANGE
|
So this is going to be my first home-built system.
I plan on buying the components one at a time when my budget allows, so overall price really isn't much of an issue, but theoretically I plan on spending anywhere from $750 (AMD) to $850 (Intel). Rebates are a plus but not a must, and I'm counting on combination deals like Newegg runs sometimes.
I'll use it for moderate to heavy gaming/multitasking in addition to all the everyday stuff like email and facebook, etc. My main focus is a great deal of power with longevity/upgradeability in mind.
All parts are going to be brand new (no recertified or open box) and all I'm looking for is the contents of the tower; I'll worry about I/O later on down the road.
I love newegg, but if another site has cheaper prices I'll give it consideration. (I live in the USA, by the way)
I'd prefer an AMD system, but I'll adjust my budget and get an Intel system if that would suit my needs more (again, power/longevity).
As for overclocking, I'm not even gonna touch that monster until I get some more building experience under my belt.
As far as SLI or Crossfire goes, I want something able to do it, but I won't utilize that until further down the road.
So that's about it. Here is my idea of what I want:
Hi, that build doesn't look too bad but I can think of quite a few improvements. 750W is way more power than you're going to need because that motherboard won't support Crossfire and you don't want to overclock. 550W would be fine. Also, I don't think there's much need for a performance HDD if you're going to get an SSD; most people (including myself) would go for one or the other. Personally I'd keep the Caviar Black and lose the SSD in favour of an Intel CPU. I would do something like this.
Something like that would be good because you don't need an expensive motherboard if you don't want to overclock, and you can do without SLI/Crossfire. Also, I don't think the SSD would be worth it yet. The Intel Core i5-2400 is a good choice too because it's like the i5-2500K with a very slightly lower clock speed and it's not as overclockable (a lot cheaper, though). Lastly, that graphics card I listed is much better than an HD 6850, and this build still only comes to $708.94 before rebates and shipping.
As far as I know, it's an integrated graphics controller in the CPU, so you won't lose any kind of advantage in getting a dedicated GPU. Even so, the GTX 560 Ti is an excellent card which would absolutely destroy any integrated graphics. I don't know much about the mean time between failures of those CPUs, but I know a lot of people rely on Intel and for good reason; they're reliable in my opinion. The i5-2400 is also going to be a lot better than any Phenom II X4 in games.
Hmmm... OK, something to think about, I suppose. I definitely like that graphics card; I saw a video on Newegg. Well, thank you for your advice. I'll definitely change up my config to suit Intel, since it's gonna be better (I've heard many people tell me that). Oh, one more point, though: I plan on getting into software engineering/coding, etc., so which is going to be easier to code for, an Intel CPU or an AMD?
Gotcha, OK, well in that case I'll go ahead and look at the bigger CPUs then. At first, the things I plan to code (while I'm still learning how to do it) will be small, but they will get increasingly more demanding of my future system's resources, so... yeah, bigger CPUs. Do I really need a hexa-core for heavy coding, though? Or will a quad do the trick?
Sorry, my internet has been cut off for a week or something, so I haven't been able to reply. When talking about what you really NEED, then yeah, six cores is probably too much, but it will compile lots faster than a quad core (Phenom II X4). If you can afford it, go for the i7-2600 then.
|
OPCFW_CODE
|
using System.Collections.Generic;
using System.Linq;
namespace Duplicity.Filtering.IgnoredFiles.GitIgnore
{
internal sealed class GitIgnoreFilter : IFileSystemChangeFilter
{
private readonly IList<IMatcher> _included = new List<IMatcher>();
private readonly IList<IMatcher> _excluded = new List<IMatcher>();
/// <summary>
/// Any file system changes matching will be included, taking precedence over any exclusions.
/// </summary>
public void Include(IMatcher matcher)
{
_included.Add(matcher);
}
/// <summary>
/// Any file system changes matching will be excluded.
/// </summary>
public void Exclude(IMatcher matcher)
{
_excluded.Add(matcher);
}
/// <summary>
/// Should the given change be filtered out given the configured .gitignore rules?
/// </summary>
/// <returns>true to exclude/ignore the change, otherwise false.</returns>
public bool Filter(FileSystemChange change)
{
if (_included.Count == 0 && _excluded.Count == 0) return false;
if (ShouldBeIncluded(change)) return false;
return ShouldBeExcluded(change);
}
private bool ShouldBeIncluded(FileSystemChange change)
{
return _included.Any(inclusion => inclusion.IsMatch(change));
}
private bool ShouldBeExcluded(FileSystemChange change)
{
return _excluded.Any(exclusion => exclusion.IsMatch(change));
}
}
}
|
STACK_EDU
|
How do I asynchronously query an API on an array of objects and then mutate each object correctly? (Using promises correctly)
I have an array of movies with IDs, but without ratings. I want to query a movie database to get the ratings for each movie, so I iterate over each object using fetch(url) to query the API and then use .then(function(response) { add_rating_to_specific_movie}).
The problem is, .then is an async response, and I have no way of knowing which movie has returned a rating value so that I can mutate the correct movie object with the rating. And I can't create a new array with the returned values, because some movies will return status: movies not found, and I have no way of knowing which movies are unrated.
Could use some guidance on a good algorithm for using promises here. Thanks!
Yes you can create a new array with returned values. Please show us your code.
You don't show your actual code for how you are iterating the array of movies so we can only provide a conceptual answer (next time show your actual iteration code please). But, in concept, you just use a function to pass the index or object separately for each array element and then you can access that index or object in the .then() handler. In this case, if you use .forEach() to iterate your array, the object from your array of objects that you are iterating and the index of that object are both passed to you in a function that will be uniquely available for each separate request.
For example, here's one concept that would work:
var movies = [....]; // array of movie objects
movies.forEach(function(movie, index) {
// construct url for this movie
fetch(movieURL).then(function(data) {
// use the data to set the rating on movie
movie.rating = ...
});
});
If you want to use promises to know when all the requests are done, you can do this using Promise.all():
var movies = [....]; // array of movie objects
Promise.all(movies.map(function(movie, index) {
// construct url for this movie
return fetch(movieURL).then(function(data) {
// use the data to set the rating on movie
movie.rating = ...
});
})).then(function() {
// all ratings updated now
});
Rather use .map() so that you can apply Promise.all on it directly…
@Bergi - it wasn't clear the OP was asking for that, but I added that as an option.
Yeah, it's not clear what OP needs, I wonder why you answered at all… :-)
@Bergi - it's not that unclear. Array of movies without ratings. fetch() on each movie to get rating, update movie object in the array with the returned rating. While the OP could have made things easier, the info is there to see what they're asking. I did suggest to the OP that they should include the code they have so far next time.
@Yoni - did this answer your question?
@jfriend00 - In your first solution - the engine would iterate over each movie, calling fetch(url) on each. When 'then' is called, at some later asynchronous time, I would think it wouldn't have access to the movie object (very well may be mistaken). Also, would upvote your answer, but don't have enough reputation yet :(
@Yoni - movie is still in scope in the fetch() callback. It is in a parent scope which is accessible. If this answers your question, then even with no reputation, you can still accept an answer (click green checkmark to the left of an answer) to indicate to the community that it was the best answer and that it answered your question. That will also earn you some reputation.
|
STACK_EXCHANGE
|
import os
import sys
import csv
"""
CSV Import Format:
HOME_TEAM_RANK,HOME_TEAM,AWAY_TEAM_RANK,AWAY_TEAM[,GAME_NAME]
#1,Princeton,#2,Yale
The GAME_NAME column is optional; when present, it overrides the generated AWAY_TEAM @ HOME_TEAM label.
Example Wordpress Contact Form Code:
[contact-form]
[contact-field label='Name' type='name' required='1'/]
[contact-field label='Email' type='email' required='1'/]
[contact-field label='New Orleans Bowl 12/20/2014' type='radio' required='1' options=' Louisiana Lafayette, Nevada'/]
[/contact-form]
This generates the following look in Wordpress:
National Championship - #1 Princeton vs. #2 Yale - 1/1/1900
Options:
- Princeton
- Yale
"""
if len(sys.argv) == 2:
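    # Filter toggles: include_all_teams wins over RANKED_FILTER; RANKED_FILTER keeps
    # ranked match-ups plus TEAM_FILTER teams; otherwise only TEAM_FILTER teams are kept.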
    include_all_teams = True
    RANKED_FILTER = True
    TEAM_FILTER = ['Alabama', 'Auburn', 'LSU', 'Arkansas', 'Texas A&M',
                   'Missouri', 'Tennessee', 'Vanderbilt', 'Georgia', 'Florida',
                   'South Carolina', 'Kentucky', 'Ole Miss', 'Mississippi State']
    IMPORT_PATH = sys.argv[1]
    import_file = open(os.path.abspath(IMPORT_PATH), 'r')
    #Generate contact form header code
    wp_contact_form_code = "[contact-form]\n[contact-field label='Name' type='name' required='1'/]\n[contact-field label='Email' type='email' required='1'/]"
    #Parse the CSV file
    csvreader = csv.reader(import_file, delimiter=',')
    for row in csvreader:
        #CSV Field Mapping
        home_team_rank = str(row[0])
        home_team = str(row[1])
        away_team_rank = str(row[2])
        away_team = str(row[3])
        bowl_name = ''
        if len(row) == 5:
            bowl_name = str(row[4])
        #Create label
        if bowl_name:
            label = bowl_name
        elif away_team_rank and home_team_rank:
            label = "{} {} @ {} {}".format(away_team_rank, away_team, home_team_rank, home_team)
        elif away_team_rank:
            label = "{} {} @ {}".format(away_team_rank, away_team, home_team)
        elif home_team_rank:
            label = "{} @ {} {}".format(away_team, home_team_rank, home_team)
        else:
            label = "{} @ {}".format(away_team, home_team)
        #Generate contact form radio button code
        if include_all_teams:
            wp_contact_form_code += "\n[contact-field label='{}' type='radio' required='1' options='{},{}'/]".format(label, away_team, home_team)
        elif RANKED_FILTER:
            if home_team_rank or away_team_rank:
                wp_contact_form_code += "\n[contact-field label='{}' type='radio' required='1' options='{},{}'/]".format(label, away_team, home_team)
            elif home_team in TEAM_FILTER or away_team in TEAM_FILTER:
                wp_contact_form_code += "\n[contact-field label='{}' type='radio' required='1' options='{},{}'/]".format(label, away_team, home_team)
        else:
            if home_team in TEAM_FILTER or away_team in TEAM_FILTER:
                wp_contact_form_code += "\n[contact-field label='{}' type='radio' required='1' options='{},{}'/]".format(label, away_team, home_team)
    #Close contact form code
    wp_contact_form_code += "\n[/contact-form]"
    print(wp_contact_form_code)
    import_file.close()
else:
    print('Error: Invalid and/or missing argument(s)\nArguments: ' + str(sys.argv))
|
STACK_EDU
|