Dragon King’s Son-In-Law – Chapter 349
“Humph!” Yue Yang suddenly stepped on the gas pedal and left Hao Zhonghua’s car behind.
“Those two girls aren’t bad, and I feel like they are just fooling around. As long as my mom is happy, we can just let the kids be,” Hao Zhonghua, who was typing on his laptop, said.
The West Water Dragon Palace wouldn’t dare go against the Dragon God Shrine.
Zhen Congming puffed up his chest with pride and headed to the school gate in big strides, ignoring the girls following him.
“He’s getting ready,” Yue Yang replied as she spread the bread.
After the three of them went back to the house, Hao Ren let Lu Linlin and Lu Lili stay in Xie Yujia’s room. Then, he headed to his room on the next floor to rest.
Lu Linlin and Lu Lili swayed beside her as if they were fresh flowers.
The evening passed quickly as Hao Ren combed through his nature essence and circulated it with the Light Splitting Sword Shadow Scroll. He was used to the denser nature essence of the Ethereal Summit on Fifth Heaven, so the thin nature essence here in the city seemed insufficient for him. This so-called cultivation was really just practice rather than real progress.
“Uncle, your shirt is buttoned wrong!” Lu Lili reminded him.
Xie Yujia was good in every aspect, and she had thought about having Xie Yujia over before too. However, she was worried about Hao Ren and Xie Yujia getting along, since it would affect the relationship between her and Zhao Hongyu as well as the communication between the two parents.
“Oh…” Hao Ren nodded. He somehow sensed the anger from his mom.
“Congming, Ren, I’ll give you guys a ride!” Yue Yang quickly grabbed her car key and said to them.
“Are you all right, Mom?” Hao Ren was a little worried about her.
“It will only be a few years until Zi grows up, and her family gets along with ours so well. You know that I don’t get along with other people, but I can talk to Zi’s mom for two hours whenever I see her. We even went shopping together before…”
Whoosh… Hao Zhonghua’s white Ford sped up and caught up from behind. They drove side by side on the road.
She was pretty angry at Hao Zhonghua because he had always let her have her way before. However, he insisted on choosing Xie Yujia over Zhao Yanzi without compromise, which really set her off.
The Lu sisters both greeted Hao Ren as soon as they saw him, “Gongzi!”
Dong, dong, dong…
Since Grandma had already treated them as her own granddaughters, Hao Zhonghua could only follow her lead.
“Get in the car, Yujia!” Hao Zhonghua said.
Xie Yujia was in her white silk dress and a rose-red cropped long-sleeve jacket; she looked like a rosebud about to bloom as she sat upright by the table, waiting for breakfast to be ready.
Lu Linlin and Lu Lili’s shouts came from the second floor as they called out from the window of their bedroom.
Xie Yujia looked at Yue Yang’s car and hesitated for a couple of seconds at the entrance before she got into Hao Zhonghua’s car.
[Source: OPCFW_CODE]
If capacity is your ISV business’s biggest software development challenge, you aren’t alone. According to research from Coding Sans’ State of Software Development in 2019 report, 21.29 percent of software development companies report that dealing with a backlog with limited capacity while still delivering software is the most significant challenge they face.
Coding Sans also drilled down into the results of the survey based on responses from 695 leaders at software development companies around the world, the majority of which provide applications to B2B companies. Capacity is a bigger concern among managers, with 24.32 percent ranking it as the biggest challenge their companies face compared to 18.29 percent of developers who feel it’s the greatest hurdle their teams have to overcome.
Coding Sans reports that to address capacity, software companies are hiring additional professionals, prioritizing development over other tasks, and finding ways to improve productivity. About 85 percent of respondents also use one or more Agile methods, including:
- Scrum: 60.58 percent of respondents
- Kanban: 35.40 percent
- Lean software development: 14.19 percent
- Agile modeling: 13.86 percent
- Scrumban: 11.55 percent
- Extreme programming: 11.55 percent
- Feature-driven development: 8.58 percent
- Rapid application development: 8.25 percent
What Do Developers See as the Biggest Challenge?
From the perspective of a developer, Coding Sans’ survey shows that the most significant challenge, cited by 24.57 percent of them, is sharing knowledge. Some of the strategies the survey respondents say they use to overcome this challenge are holding dedicated information sessions such as brown bag lunches, meetings, or tech talks. They also share knowledge via code reviews or through internal wikis or documentation shared on collaboration tools.
According to the survey, the most popular project management tools that software development companies use are:
- Jira: 57.7 percent
- GitHub: 34.53 percent
- Bitbucket: 19.86 percent
- Trello: 17.27 percent
And the most popular communication tools are:
- Slack: 55.97 percent
- Email: 50.79 percent
- Jira: 36.12 percent
- Skype: 22.73 percent
- Google Hangouts: 14.68 percent
- GoToMeeting: 13.24 percent
Other Tools Your Peers and Competitors Use
Coding Sans reports that software development companies commonly use a variety of tools to improve productivity and quality; for example, 75.83 percent use tools for testing software. The most common are Jenkins (30.94 percent of respondents), Selenium (24.75 percent), and JUnit (20.14 percent). Researchers also asked software development companies which factors limit them from using tools for software testing, and the most common reasons are:
- Time to research: 23.21 percent
- Don’t need it yet: 19.64 percent
- Not sure how to use it: 20.83 percent
- Lack of time to use it: 14.88 percent
- Budget: 13.10 percent
ISVs also leverage version control systems like Git and SVN to manage changes to source code over time, and source code management (SCM) clients like GitKraken, SourceTree, and GitHub Desktop.
The majority of software companies, 60.14 percent, rely on peer review to ensure code quality. Other methods businesses use include:
- Continuous integration (CI) and test-driven development (TDD): 41.15 percent
- Documentation: 16.69 percent
- Commenting within the code: 15.11 percent
- Industry style guide: 12.52 percent
The report also points out that 11.94 percent of companies don’t use any specific way to ensure code quality.
Another software development challenge is monitoring how well teams execute projects. State of Software Development survey respondents use these metrics to measure performance:
- Completed tasks: 49.50 percent
- Working software: 48.63 percent
- Code readability: 24.46 percent
- Don’t use metrics: 24.89 percent
- Speed of development: 13.24 percent
- Number of bugs: 21.44 percent
- Test coverage: 22.88 percent
- Third party scoring: 4.17 percent
- Lines of code: 2.88 percent
Coding Sans research also reveals the top reasons for delivery problems:
- Unrealistic expectations: 14.96 percent
- Lack of clearly defined deliverables: 13.09 percent
- Estimation: 12.37 percent
- Ever-changing landscape: 11.08 percent
- Requirements prioritization: 9.50 percent
For more insights on Coding Sans’ research, see the State of Software Development in 2019.
[Source: OPCFW_CODE]
3.7.6 Pipeline interleaving, not quite an equivalence transform
It has repeatedly been noted in this section that any attempt to insert an extra register into a feedback loop with the idea of pipelining the datapath destroys the equivalence between the original and the pipelined computations unless its impact is somehow compensated. After all, circuits (c) and (b) of fig. 3.41 behave differently. Although this may appear a futile exercise, let us ask: “What happens if we do just that to a first-order recursion?” Adding an extra latency register to (3.69) results in the DDG of fig. 3.42a and yields

y(k) = f(x(k), y(k − 2))

Observe that all indices are even in this equation. As k increments with time k = 0, 1, 2, 3, …, indices do in fact alternate between even and odd values. It therefore becomes possible to restate the resulting input-to-output mapping as two separate recursions with no interaction between “even” data items x(k = 0, 2, 4, …, 2n, …) and “odd” items x(k = 1, 3, 5, …, 2n + 1, …):

y(2n) = f(x(2n), y(2n − 2))
y(2n + 1) = f(x(2n + 1), y(2n − 1))

This pair of equations says that the original data processing recursion of (3.69) now gets applied to two distinct data streams as depicted in fig. 3.42c. From a more general perspective, it is indeed possible to cut the combinational delay in any first-order feedback loop down to 1/p by inserting p − 1 pipelining registers; however, the computation then falls apart into the processing of p interleaved but otherwise independent data streams. More often than not this is undesirable. However, practical applications exist where it is possible to take advantage of this effect.

Examples

Cipher block chaining (CBC) implements the recursion y(k) = c(x(k) ⊕ y(k − 1), u(k)). What counts from a cryptographic point of view is that patterns from the plaintext do not show up in the ciphertext. Whether this is obtained by feeding back the immediately preceding block of ciphertext y(k − 1) (CBC-1 mode) or some prior block y(k − p), where 2 ≤ p ∈ ℕ (CBC-p mode), is of minor importance. Some cryptochips, therefore, provide a fast but nonstandard CBC-8 mode in addition to the regular CBC-1 mode, see fig. 3.41c. In the case of the IDEA chip described in [79], maximum throughput is 176 Mbit/s both in pipelined ECB mode and in pipeline-interleaved CBC-8 mode, as compared to just 22 Mbit/s in nonpipelined CBC-1 mode.

Fig. 3.43 shows a high-level block diagram of a sphere decoder, a key subfunction in a MIMO OFDM (orthogonal frequency division multiplexing) receiver. Sphere decoding is essentially a sophisticated tree-traversal algorithm that achieves close-to-minimum error rate performance at a lower average search complexity than an exhaustive search. Pipelining the computation in search of throughput is not an option because of the (nonlinear) first-order recursion. Instead, the facts that (a) OFDM operates on many subcarriers at a time (typically 48 to 108) and that (b) each such subcarrier poses an independent tree-search problem make sphere decoding an ideal candidate for pipeline interleaving. This and many other refinements to sphere decoding have been reported in [80], from which fig. 3.44 has been taken.
(diagram courtesy of Dr. Markus Wenk) For a much simpler example, consider some image processing algorithm where rows of pixels are dealt with independently from each other. Rather than scanning the image row by row, pixels from p consecutive rows are entered one by one in a cyclic manner before the process is repeated with the next column, and so on. All processing can thus be carried out using a single pipelined datapath of p stages [81]. Pipeline interleaving does obviously not qualify as an equivalence transform. Still, it yields useful architectures for any recursive computation, including nonlinear ones, provided that data items arrive as separate time-multiplexed streams that are to be processed independently from each other, or can be arranged to do so. From this perspective, pipeline interleaving is easily recognized as a clever and effective combination of time sharing with pipelining.
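To make the stream-splitting effect concrete, here is a small Python sketch (not from the book; the function c below is a toy stand-in for a block cipher, and all names are illustrative). It checks that a CBC-p recursion over one input stream produces exactly the same outputs as p independent CBC-1 recursions over the p de-interleaved substreams, which is precisely the equivalence that pipeline interleaving exploits.

def c(block, key):
    # Toy stand-in for a block cipher; NOT cryptographically meaningful.
    return (block * 1103515245 + key) % (1 << 16)

def cbc_p(xs, key, p, iv=0):
    # CBC-p: y(k) = c(x(k) ^ y(k - p), key), feeding back the ciphertext
    # from p steps earlier; iv stands in for the first p feedback values.
    ys = []
    for k, x in enumerate(xs):
        prev = ys[k - p] if k >= p else iv
        ys.append(c(x ^ prev, key))
    return ys

def cbc_p_via_substreams(xs, key, p, iv=0):
    # Same mapping, computed as p independent CBC-1 recursions, one per
    # residue class of the index k modulo p.
    ys = [None] * len(xs)
    for s in range(p):
        ys[s::p] = cbc_p(xs[s::p], key, p=1, iv=iv)
    return ys

xs = list(range(32))
assert cbc_p(xs, key=42, p=4) == cbc_p_via_substreams(xs, key=42, p=4)

In hardware terms, the first function corresponds to the feedback loop with p − 1 extra pipeline registers, while the second makes explicit the p independent data streams it actually processes.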
[Source: OPCFW_CODE]
They are not real-time games-Having game play that relies on any kind of precise timing are going to need a more controllable rendering model than the traditional web. Our campus has a First-Year Experience (FYE) in which faculty propose courses for incoming students while helping to build the skills they’ll need in higher ed. 413) This is the kind of problem participants tend to face when an affinity space is in a passionate state, when “participation means, primarily, gaining technical knowledge and skills related to the shared interest” (p. Not moving to new loops is a critical problem for games and they have a specific term for it, “Bottom Feeding”. Fortunately, there are some diet-friendly alternatives to a fair few favorite foods out there now, some of which are even geared towards specific diets. I enjoy writing, but as you can see, my writing habit tapered off in the past few years. If you have trouble falling asleep, you can try a few things.
You can also try swapping out unhealthy snacks for healthier options like nuts or fruit. A relatively simple system, from a game mechanics perspective, but one that hides a lot of depth, story-telling potential, and that particular player satisfaction from figuring out a puzzle. He does not want to be associated with a player so bad that it’s almost like he’s not even trying to win. He has come to watch his friend Edgar play, and Edgar is the worst football player he has ever seen. This post may come across as Microsoft bashing. Moreover, that analysis has identified that individuals who post a large number of messages to chat were more likely to be active on GitHub, as were individuals whose chat messages were related to the “events” topic. This is a co-constitutive process wherein individuals collectively develop a technical infrastructure, which in turn defines conditions in which future IndieWeb development occurs. These authors will explore the ways machine learning, AI and immersive technology enhance and hinder our lives and how to prepare for the future ahead. If in future updates, the creators remove or significantly tone down the “nemesis” mechanic, then I would recommend.
In fact, budget gaming mice with sensors as low as 8,000 to 12,000 DPI can still track movement faster than what the human eye can perceive, and, with the exception of some competitive gaming scenarios, can be just as capable performers. Finally, talk to your doctor about possible solutions if you still can’t fall asleep. Fraction of the cost, supports lossless audio, and I can wear it while exercising. I took a little time to rewrite my LD submission using this style, and overall it increased my line count while decreasing my character count – the line increase is probably due to function declarations. Despite a rocky start, STO has grown into a gargantuan, compelling, and free MMO and one of the best space games out there; it’s frequently expanded by massive updates that add whole new storylines, and a while back the neutral Romulan faction introduced unique missions and ships. You can check out the Github repository or the documentation site for PCG, both very much in progress. Extra features are the 4 different views in the game, which stress different elements of the game, like the player, bombs, or objects you can collect.
[Source: OPCFW_CODE]
Failed to Create Assembly (SQL Server) to execute a C# program that references NetFwTypeLib
I'm trying to create an assembly in my database in order to execute a program that I made when the trigger is fired.
My assembly requires the System.Windows.Forms and Interop.NetFwTypeLib assemblies to build, and I added them both to my database successfully
However the system still show me this error
Msg 10301, Level 16, State 1, Line 1
Assembly 'FwSetting' references assembly 'interop.netfwtypelib, version=<IP_ADDRESS>, culture=neutral, publickeytoken=null.', which is not present in the current database. SQL Server attempted to locate and automatically load the referenced assembly from the same location where referring assembly came from, but that operation has failed (reason: 2(The system cannot find the file specified.)). Please load the referenced assembly into the current database and retry your request.
You can see that the version of Interop.NetFwTypeLib that I added to the database totally matches what the error requires, which makes me very confused.
Based on my understanding of that message, all I have tried is to find an older firewallapi.dll version in the hope that it would contain an older Interop.NetFwTypeLib.dll, but apparently it doesn't.
Here is the sql code to create the assemblies
sp_configure 'clr enabled', 1
RECONFIGURE
ALTER DATABASE gdt SET TRUSTWORTHY ON
use gdt
CREATE ASSEMBLY [Interop.NetFwTypeLib]
AUTHORIZATION dbo
FROM "C:\Path\That\Go\To\My\Project\FwSetting\FwSetting\bin\Debug\Interop.NetFwTypeLib.dll"
WITH PERMISSION_SET = unsafe
CREATE ASSEMBLY WindowsForm
AUTHORIZATION dbo
FROM "C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll"
WITH PERMISSION_SET = unsafe
CREATE ASSEMBLY FwSetting
AUTHORIZATION dbo
FROM "C:\Path\That\Go\To\My\Project\FwSetting\FwSetting\bin\Debug\FwSetting.exe"
WITH PERMISSION_SET = unsafe
For more information, I'm using .NET Framework 2.0 for this program, the database is SQL Server 2005 and I've read this document to learn how to create an assembly.
Any idea of what my problem is?
The framework and SQL Server have been out of date for years now; why not start with VS 2022 and SQL Server 2022?
Also... System.Windows.Forms? SQL Server is a headless environment, why are you trying to use GUI assemblies in it?
Creating assemblies just to run some program sounds a bit weird. You can use xp_cmdshell, or even better, start an Agent job which executes your program. Or just create an assembly that executes programs; you don't need any dependencies on weird stuff for that
Could you tell us what this program you're starting is supposed to be doing? This might be a bad idea if your program uses a GUI
@siggemannen I used xp_cmdshell and solved the problem. It's a simple application that makes some changes to an inbound rule; it will close automatically when the task is done. I'll keep it as a GUI app anyway for testing purposes.
I have found a better way using xp_cmdshell
CREATE TRIGGER Fwtrg ON mdt for Insert
AS
EXEC master..xp_cmdshell '"C:\Path\That\Go\To\My\Project\FwSetting\FwSetting\bin\Debug\FwSetting.exe"'
GO
Refer to this
You're better off creating a SQL Server Agent job in PowerShell that runs the application, then using sp_start_job. And running this in a trigger is possibly the most horrible thing I've ever seen: the insert will have to wait until the application closes.
@Charlieface I know, but considering that insert won't be performed frequently, I think it's alright
[Source: STACK_EXCHANGE]
Cultivation Online – Chapter 44: Appearance
Nonetheless, Yuan quickly shook his head and said, “No, I was just wondering because I have never really cared about my own appearance.”
Luo Li gave a short description of many of the stores and houses they passed, and Yuan would listen to her with a bright face, looking as though he were at an amusement park.
“That is me…?”
The last time he’d seen his own face was when he was 7 years old, before he lost his ability to see and became blind.
It was noticeable that besides speaking with her father, Luo Li had spent a long time enhancing her own look with light makeup.
A while later, they left the Lord’s Manor.
“Daoist Yuan, I have let my father know about your wish to look around the city. We can leave anytime,” she said to him.
“Why are you asking about your appearance, Brother Yuan? Is it because of that Luo girl?” she decided to ask him.
“Is that so? Then what are we waiting for? Let’s go eat until our stomachs are round!” Yuan quickly said.
“X-Xiao Hua is telling the truth! Brother Yuan is very handsome!” Xiao Hua said again, but her face was flushed red this time.
“Where would you like to visit first, Daoist Yuan?” she asked him once they were outside.
Yuan shrugged and said, “I don’t know what’s in this city, so I’ll let you pick where to visit.”
“Really? Are you sure that you’re not just complimenting me because you don’t want to hurt my feelings? It’s okay to tell the truth, Xiao Hua,” Yuan said to her, as he had a feeling that her judgment might be somewhat biased because of their relationship.
“Okay, then let’s leave now,” Yuan said.
“This is the residential area, where most of the citizens live.”
Sometime later, Yuan said, “By the way, I would like to take a look around this place if you don’t mind.”
Xiao Hua looked at him with wide eyes, seemingly speechless at his sudden question.
“Don’t worry, Brother Yuan. In the cultivation world, one’s appearance isn’t that important. As long as you are talented and powerful, unless you have the world’s ugliest face, you’ll be able to attract girls! And since Brother Yuan is both talented and handsome, you will, without a doubt, have beauties fighting over you everywhere in the future!” Thinking that Yuan was worried about his appearance, Xiao Hua decided to cheer him up.
“If I remember correctly, Yu Rou once mentioned that one’s appearance in the game will closely resemble their appearance in the real world, which is why I entered this game with an avatar already created for me. However, I have no idea what I look like in real life, and it has been many years since I last saw my own face, so I cannot be sure whether this face really resembles my actual appearance or not…”
Obviously, as somebody who doesn’t pay much attention to appearances, Yuan was completely oblivious to this fact.
A few moments later, she spoke in a bashful voice with slightly rosy cheeks, “Xiao Hua thinks Brother Yuan is very handsome…”
“What? The food here is that cheap? My previous meal was atrociously expensive in comparison!” Yuan was shocked to learn that the food in this place was so cheap, especially since he’d spent 500 gold coins on his previous meal.
Finding his question quite ridiculous, Luo Li couldn’t help but chuckle a bit, “With 10 gold coins, you can eat everything on every menu in this city and still have plenty of money left.”
“What is it, Brother Yuan?”
Nonetheless, since this was her first time giving someone a tour, Luo Li was also unsure about where to go. Therefore, she decided to just walk around the area until they came across something that would pique Yuan’s interest.
[Source: OPCFW_CODE]
Phantasmal Forces: The creation of vivid illusions of nearly anything the user envisions (a projected mental image, so to speak). As long as the caster concentrates on the spell, the illusion will continue unless touched by some living creature, so there is no limit on duration, per se. Damage caused to viewers of a Phantasmal Force will be real if the illusion is believed to be real. Range: 24"
Invisibility: A spell which lasts until it is broken by the user or by some outside force (remember that as in CHAINMAIL, a character cannot remain invisible and attack). It affects only the person or thing upon whom or which it is cast. Range: 24"
The range for both is the same, and could be redefined as "line of sight" in the scheme I've been presenting. It's a little odd that you would need a 240-foot range for Invisibility, but I suppose you could cast it on a scout creating a diversion after the diversion, to enable the scout to escape. Or vanish a bridge before an enemy rushes across it to attack.
Phantasmal Forces, as has been pointed out, gets its name from its use in Chainmail to create a military force of phantoms. This is in contrast to some later versions of the spell that specify that it must be an illusion of a single object or creature.
This restriction is usually added to prevent the creation of things like phantom bowmen: illusory arrows are distinct from the illusory archer firing them, so they are forbidden. But the specification of damage being real if believed, in contrast to the rule about illusions disappearing when touched by a living being, suggests that a squad of phantom bowmen firing arrows at the enemy may, in fact, be legal. What the Magic-User can't do is create a squad of hidden or invisible bowmen.
Disbelief is a whole other kettle of worms, which has been discussed to death. I posted previously about disbelieving illusions (and incidentally came down against phantom bowmen, but I've softened on that stance.) The gist: disbelief doesn't dispel the illusion, but it prevents damage, and it's automatic if the player can cite a reason why they don't believe. To that, I'd maybe add a chance to notice an inaccuracy in the illusion if the PC's Int or Wis is > the M-U's Int/Wis.
Phantasmal Forces is the first concentration spell, with an open-ended duration as long as the caster isn't disturbed, casts another spell, or performs another action. Perhaps we can consider this as a change of state spell, where the "state" is "the caster's ideas become manifest". When the idea changes drastically (caster starts thinking about something else, like "Ow! My arm!") it's like being shaken awake from a Sleep spell; the state changes again.
Change of state is going to be a critical concept for Invisibility as well. On first reading, Invisibility seems permanent. Certainly, if a treasure chest is made invisible, it could remain invisible forever, since the chest can't attack and you have to find it first before you can cast Dispel Magic on it. But there are a couple provisos on the spell.
- It can be broken by the user. The question here is: does "user" mean "caster", or does it mean "target"?
- It can be broken by an outside force. No explanation of what counts, unless "attacking" is somehow meant to be an example.
- It only affects the target of the spell.
The way I interpret the third restriction is that Invisibility, unlike Phantasmal Forces, only affects one thing. Clothing and gear are part of the character affected only as long as they remain stowed: weapons remain sheathed, items remain stowed in a belt or pack. Any "splitting" of the invisible object changes the state of the object, and the invisibility fades. Likewise, taking off or putting on clothing ends invisibility, as does eating or drinking (perhaps with a brief, disturbing moment where the food or liquid is the only thing visible...) Similarly, splattering or spraying liquid, dust, or other material on an invisible opponent ends the spell (these are the "outside forces".) This makes the spell open-ended, but not permanent.
[Source: OPCFW_CODE]
'''
File name: dataFactory.py
Data Factory class implementation. The class provides an interface for data
preprocessing.
Author: Vasileios Saveris
email: vsaveris@gmail.com
License: MIT
Date last modified: 05.04.2020
Python Version: 3.8
'''
import pandas as pd
from datetime import datetime
class DataFactory():
    '''
    Data Factory class implementation.

    Args:
        file_name (string): The input file name holding the data.
        process_date_time (boolean, default is False): Whether the Date column
            should be processed (see _processDateTime()).
        save_file (string, default is None): The file in which the modifications
            of the input file should be stored. Makes sense when
            process_date_time is True.
        verbose (boolean, default is False): If True print services are enabled.

    Public Attributes:
        -

    Private Attributes:
        See constructor (self._*)

    Public Methods:
        aggregateData (args) -> DataFrame: Aggregates the data of the input file
            according to a selected granularity.

    Private Methods:
        See methods docstring (def _*)

    Raises:
        -
    '''

    def __init__(self, file_name, process_date_time = False, save_file = None,
                 verbose = False):

        self._verbose = verbose

        if self._verbose:
            print('\nData Factory initialization:',
                  self._printProcArgs(locals(), exclude = ['self']))
            print('- Reading input file')

        # Read input data file
        self._data_file = pd.read_csv(file_name)

        if process_date_time:
            self._processDateTime()

            if save_file is not None:
                self._data_file.to_csv(save_file, index = False)

                if self._verbose:
                    print('- Enhanced (tokenized date) input file saved as:',
                          save_file)

    def _printProcArgs(self, arguments, exclude = None):
        '''
        Prepares a formatted string of the input arguments dictionary.

        Args:
            arguments (dictionary): A dictionary with argument: value pairs.
            exclude (list of strings, default is None): Arguments to be
                excluded from the formatted string.

        Raises:
            -

        Returns:
            string: A formatted string of the input arguments in the form of
                argument = value, ...
        '''

        # Guard against the default: with no exclusions, keep all arguments
        if exclude is None:
            exclude = []

        return ', '.join('{} = {}'.format(k, v) for k, v in arguments.items()
                         if k not in exclude)

    def _tokenizeDateTime(self, date_time_str, format = '%Y-%m-%d %H:%M:%S'):
        '''
        Tokenizes an input Date Time string.

        Args:
            date_time_str (string): A Date Time string.
            format (string, default is '%Y-%m-%d %H:%M:%S'): The format of the
                input date time string.

        Raises:
            -

        Returns:
            dictionary: A dictionary with the Date Time tokens; keys are 'year',
                'month', 'day', 'week_day', 'hour'. Year is in XXXX format, week
                day starts from 0 for Monday, hour is in 24h format.
        '''

        date_time = datetime.strptime(date_time_str, format)

        return {'year': date_time.year, 'month': date_time.month,
                'day': date_time.day, 'week_day': date_time.weekday(),
                'hour': date_time.hour}

    def _processDateTime(self):
        '''
        Adds to the self._data_file dataframe the contents of the tokenized
        date time as columns.

        Args:
            -

        Raises:
            -

        Returns:
            -
        '''

        if self._verbose:
            print('- Date Time processing (split into tokens)')

        # Tokenize date column
        t_dt = [self._tokenizeDateTime(x) for x in self._data_file.date.to_list()]

        # Create new columns with tokenized date-time data
        self._data_file['year'] = [x['year'] for x in t_dt]
        self._data_file['month'] = [x['month'] for x in t_dt]
        self._data_file['day'] = [x['day'] for x in t_dt]
        self._data_file['week_day'] = [x['week_day'] for x in t_dt]
        self._data_file['hour'] = [x['hour'] for x in t_dt]

        # Rearrange columns
        self._data_file = self._data_file[[self._data_file.columns[0], 'year',
                                           'month', 'day', 'week_day', 'hour'] +
                                          self._data_file.columns[1:3].to_list()]

    def aggregateData(self, granularity, combine_hosts = False, save_file = None):
        '''
        Aggregates the data (sum() on the requests column) of the
        self._data_file according to the selected granularity.

        Args:
            granularity (string): The granularity to be used for the data
                aggregation. Supported values are: 'HOURLY', 'DAILY', 'MONTHLY'
                and 'YEARLY'.
            combine_hosts (boolean, default is False): If True, aggregation is
                applied as if all the hosts were one. The host column is
                dropped.
            save_file (string, default is None): The csv file in which the
                aggregated data should be stored.

        Raises:
            ValueError: When the given granularity value is not supported.

        Returns:
            DataFrame: The aggregated data.
        '''

        if self._verbose:
            print('\nData Aggregation:',
                  self._printProcArgs(locals(), exclude = ['self']))

        # Validate the value of the granularity argument
        if granularity not in ['HOURLY', 'DAILY', 'MONTHLY', 'YEARLY']:
            raise ValueError('granularity argument error. Value given is \'' +
                str(granularity) + '\', where supported values are: \'HOURLY' +
                '\', \'DAILY\', \'MONTHLY\', \'YEARLY\'')

        # Define filter and group columns based on the granularity value
        if granularity == 'HOURLY':
            filter_columns = ['year', 'month', 'day', 'week_day', 'hour']
        elif granularity == 'DAILY':
            filter_columns = ['year', 'month', 'day', 'week_day']
        elif granularity == 'MONTHLY':
            filter_columns = ['year', 'month']
        elif granularity == 'YEARLY':
            filter_columns = ['year']

        if not combine_hosts:
            filter_columns += ['host']

        group_columns = filter_columns.copy()
        filter_columns += ['requests']

        if self._verbose:
            print('- Filter Columns:', filter_columns)
            print('- Group Columns :', group_columns)

        # Aggregate data
        data = self._data_file.filter(filter_columns, axis = 1)
        data = data.groupby(group_columns, as_index = False)['requests'].sum()

        if save_file is not None:
            data.to_csv(save_file, index = False)

            if self._verbose:
                print('- Aggregated data saved as:', save_file)

        return data


if __name__ == '__main__':

    # Read input data file and process the date column
    df = DataFactory(file_name = '../data/input/traffic_stats.csv',
                     process_date_time = True,
                     save_file = '../data/processed/traffic_stats_tokenized_date.csv',
                     verbose = True)

    # Create all the types of data aggregation
    for g in ['HOURLY', 'DAILY', 'MONTHLY', 'YEARLY']:
        df.aggregateData(granularity = g, combine_hosts = False,
                         save_file = '../data/processed/traffic_stats_' + g + '.csv')
        df.aggregateData(granularity = g, combine_hosts = True,
                         save_file = '../data/processed/traffic_stats_' + g + '_CHs.csv')
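
# A possible vectorized alternative to _processDateTime() (a sketch, not part
# of the original class): pandas can tokenize the whole 'date' column at once
# via the .dt accessor, which is typically much faster than the per-row
# strptime loop on large input files. Assumes the same 'date' column and
# format as above.
def process_date_time_vectorized(data_file):
    '''Sketch: same tokenization as _processDateTime(), vectorized.'''
    dt = pd.to_datetime(data_file['date'], format = '%Y-%m-%d %H:%M:%S')
    data_file['year'] = dt.dt.year
    data_file['month'] = dt.dt.month
    data_file['day'] = dt.dt.day
    data_file['week_day'] = dt.dt.weekday   # Monday = 0, matching weekday()
    data_file['hour'] = dt.dt.hour
    return data_file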
[Source: STACK_EDU]
Firestore Data Bundles doesn't support empty Array and empty Map
My environment
Xcode version: 13.2.1
Firebase SDK version: 8.12.1
Installation method: CocoaPods
Firebase Component: Firestore
Target platform(s): iOS
Problem
Steps to reproduce:
Context
To reduce firestore cost, I'm trying to use Firestore Data Bundles for my project.
Steps
[Cloud Function] Get my-home-users firestore query snapshot by using Firebase Admin SDK and bundle them into txt file.
[Cloud Function] Upload the bundle txt file to GCS.
[iOS] Load the bundle txt file from GCS.
When I create a bundle txt file from a Firestore query snapshot that doesn't include multi-byte string data, I can get the namedQuery in Swift.
However, if I try to create a bundle txt file which includes multi-byte string data like Japanese, I fail to get it.
Relevant Code:
Cloud Functions
bundleHomeNewUsers.ts
import { getAdmin } from "../../util/firebase";
import * as firebase from "firebase-admin";
import * as fs from "fs";
export const bundleHomeNewUsers = async () => {
const BUCKET_NAME = "my-project-firestore-data-bundles";
const admin = getAdmin();
const db = admin.firestore();
const bundle = db.bundle("home-users");
const querySnapshot = await db
.collection("users")
.orderBy("createAt", "desc")
.limit(4)
.get();
const buffer = bundle.add("home-users-bundles-query", querySnapshot).build();
//Create tmp local file to upload
const bundledFilePath = `/tmp/bundle.txt`;
const charset = "utf8";
fs.writeFileSync(bundledFilePath, buffer, charset);
//Upload bundle file to Storage
const destination = `firestore-data-bundles/home_users_bundles.txt`;
await firebase
.storage()
.bucket(BUCKET_NAME)
.upload(bundledFilePath, {
destination,
public: true,
metadata: {
cacheControl: `public, max-age=60`,
contentType: "text/plain; charset=utf8",
},
});
console.log(
`Uploaded to https://storage.googleapis.com/${BUCKET_NAME}/${destination}`
);
};
Swift
FirestoreService.swift
let db = Firestore.firestore()
let urlString = "https://storage.googleapis.com/nadare-production-firestore-data-bundles/firestore-data-bundles/home_users_bundles.txt"
guard let url = URL(string: urlString),
let bundle = try? String(contentsOf: url, encoding: .utf8) else {
print("[DEBUG] loadHomeUsers: Not found bundle file")
return
}
db.loadBundle(Data(bundle.utf8)) { progress, error in
switch progress?.state {
case .success:
print("[DEBUG] success")
db.getQuery(named: "home-users-bundles-query") { query in
guard let query = query else {
print("[DEBUG] Failed to get named query")
return
}
print("[DEBUG] Succeeded to get query🎉🎉🎉")
query.getDocuments(source: .cache) { snapshot, error in
if let error = error {
return
}
guard let snapshot = snapshot else {
return
}
observer.sendCompleted()
}
}
case .inProgress:
print("[DEBUG] inProgress case")
case .error:
print("[DEBUG] error case: \(error.debugDescription)")
default:
print("[DEBUG] default case")
}
}
Output
[DEBUG] Failed to get named query
Firebase SDK error log
Error Domain=FIRFirestoreErrorDomain Code=2 "Loading bundle failed with unknown error" UserInfo={NSLocalizedDescription=Loading bundle failed with unknown error})
8.12.1 - [Firebase/Firestore][I-FST000001] Failed to GetNextElement() from bundle with error 'values' is missing or is not an array
8.12.1 - [Firebase/Firestore][I-FST000001] Progress set to Error, but error_status() is ok()
Hi,
I tried to reproduce the issue on my side and it returns successfully when there are Japanese characters inside bundle.txt. If you need more support, you can share your bundle text file here or email me at <EMAIL_ADDRESS>
Hi, @cherylEnkidu. Thanks for checking this issue.
I've sent you an email with my file. Please check later.
Hi,
I looked at the Latin1-encoded txt file you sent me, and I think it was hand edited. Firestore data bundles only support machine-generated text files, which means that if you change the txt content manually the functions won't work. Hope this answers your question :)
Hi,
> Which means if you change the txt content manually the functions won't work.
Yes, I know. But I didn't edit the content manually. I think the Gmail client app changed the content a bit this time. Please see below.
Attached:
- txt file (Latin1) in GCS
- Cloud Function code for bundling the Firestore data
https://storage.googleapis.com/nadare-production-firestore-data-bundles/firestore-data-bundles/home_users_bundles.txt
Hi,
Thanks for reporting this issue. We have found the bug and we will work on to fix it.
Great! I'm looking forward to it :)
Awesome!! Thank you @cherylEnkidu !!
@cherylEnkidu
Hi, thank you for fixing the issue.
After updating the SDK to v8.1.5, I can load the bundle file in most cases.
However, it still fails when a document has an empty Map.
Hi toco1001,
Thanks for your report, I will take a look at it and keep you updated :)
@Kal-Elx Hi, there is no plan to fix it at the moment. I will log your request in the tickets to help prioritize it in the future.
Hello,
We have this issue as well, is there any hope this might get fixed soon?
Hi, sorry for the delay; this bug fix hasn't been prioritized yet. I will update this ticket once it is fixed.
Hi folks,
This ticket is still in our backlog. The team's current policy is to close the external bug-tracking ticket (the GitHub issue) and maintain the internal bug-tracking ticket (b/229750482). This GitHub ticket will get notified when the bug is fixed.
Hi,
The team decided to keep the ticket open until the feature is implemented or bug is fixed for external visibility.
@cherylEnkidu , please consider prioritizing this issue! Thank you!
Also waiting for a fix. After I added dummy data to every empty map and array in every document, the error changed to
Failed to GetNextElement() from bundle with error Prefix string is not a valid number
I'm not sure if it is part of this bug or if I should open a new issue
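(For anyone needing a stopgap until this is fixed: the dummy-data workaround mentioned above can be sketched generically. The snippet below is illustrative Python, not a Firestore API; the "_empty" marker is a made-up convention that the client would have to strip again after loading.)

def fill_empty_containers(value):
    # Recursively replace empty maps/arrays with placeholder content so the
    # bundled documents never contain an empty container.
    if isinstance(value, dict):
        if not value:
            return {'_empty': True}
        return {k: fill_empty_containers(v) for k, v in value.items()}
    if isinstance(value, list):
        if not value:
            return ['_empty']
        return [fill_empty_containers(v) for v in value]
    return value

doc = {'name': 'nadare', 'tags': [], 'settings': {}}
print(fill_empty_containers(doc))
# {'name': 'nadare', 'tags': ['_empty'], 'settings': {'_empty': True}}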
Thank you to all the developers who replied to this thread. The fix is coming out in the next release.
[Source: GITHUB_ARCHIVE]
How does "until(-- p ---> second < 0)" loop until a non-positive value is found
How does this code find a non-positive value?
#include <map>
#include <cstdio>
#define until(cond) while(!(cond))
using namespace std;
int main() {
// initialization
map<int,int> second;
int i=10;
int testme[10]={4,3,1,-3,7,-10,33,8,4,14};
while (i --> 0) second[i]=testme[i];
// find the first non-positive value in second
map<int,int>::reverse_iterator p = --second.rend();
do {
printf("Is %d non-positive?\n",second[++i]);
} until(-- p ---> second < 0);
// "second < 0" to check if the value in second is non-positive
printf("Yes it is!\n");
}
The output is:
Is 4 non-positive?
Is 3 non-positive?
Is 1 non-positive?
Is -3 non-positive?
Yes it is!
So how does the "second < 0" string check for a non-positive value?
--p--->second<0 makes my head hurt; it should not be in production code. It's also UB. Lastly this is a variant of "here's some obfuscated code, decipher it for me please"
The whole code makes my head hurt …
But my favorite part about this code is how it's barely readable, and the only comments point out only the most obvious parts ("x < 0 checks if it's negative" lol)
@nos: guilty as charged; I will aim for the insanity plea after reading this code.
The fact that what is printed doesn't correspond to what is checked (p is decremented, i is incremented) is also amusing.
@AProgrammer: but p is a reverse iterator starting from rend(). So decrementing p is like incrementing i, since the map has 10 consecutive integer keys.
@AProgrammer p is a reverse_iterator ... it is just another layer of obfuscation :)
@SteveJessop, right. Is this a test entry to the IOCPPCC?
I think it's deliberately written to be difficult to read, but I suspect as a (probably-misguided) exercise.
@tenfour: btw, even if it did modify p twice, that would not be UB because map<int,int>::reverse_iterator must be a class type. So for example (-- -- p) is "fine", there's a sequence point between the two function calls to operator--, so whatever data members operator-- modifies aren't modified twice between sequence points. Fine with the standard, I mean, not necessarily fine with me! The same code would have UB if p had a pointer/integer/floating type.
@tenfour: Actually this comment is also quite confusing :P
Sorry, I wrote this code to see how many people would be misled. That's why the map is named second & why the comment quotes "second < 0". @AProgrammer, it is not for IOCPPCC. I know nowhere near enough intricacies of the language to do that.
@ronalchn that's not very nice, or constructive. Mostly not appropriate for SO.
Some hints for parsing --p--->second. It is evaluated as --((p--)->second). (Thanks to @AProgrammer for fixing my blatant error!)
p is a pointer or iterator.
p-- decrements p, but returns its previous value as an rvalue
(p--)->second accesses that value's member second.
--((p--)->second) decrements that value (i.e. the mapped value) and returns the new, decremented value
That new value is compared against 0
Notes:
The p-- takes care of iterating over the container. Note that the loop doesn't otherwise have any explicit change of p.
The outer -- makes 0 count as a negative number. As a side effect, the loop decrements every value in the map.
The second use of i is somewhat redundant. You could have written p->second inside the loop rather than second[++i], since you already have an iterator. In fact, second[++i] necessitates a whole tree search.
The code is equivalent to:
do { /* ... */
auto q = p;
--p;
--(q->second);
} until (q->second < 0);
I'd have thought it was --((p--)->second).
if it decrements twice, why does it not skip an element on each iteration???
I think AProgrammer is right, postfix and -> bind tighter than prefix.
@KerrekSB: surely either auto &q = *p; or --(q->second);?
@SteveJessop: Yes, q is an iterator. Thanks!
Actually it checks if the value is non-positive, not negative: http://liveworkspace.org/code/587a9554a2b0b8f830179518133c2274
As Kerrek said
do { /*...*/ }
until(-- p ---> second < 0);
is equivalent to:
do { /*...*/ }
until(--((p--)->second) < 0);
which is equivalent to:
do { /*...*/ }
while(((p--)->second)-- > 0);
So if the value is 0 it will also break.
@AProgrammer: !(--i < 0) <=> !(i-- <= 0) <=> i-- > 0 - I am not confusing anything. Maybe I just wrote too many steps at once.
while the return value of the expression is the same (true/false), the side-effect is not (it also decrements the value)
Yeah, if you want the same side effect just replace it with while(((p--)->second)-- > 0);. I wanted to show something - that it's not detecting just negative values and this side-effect really doesn't matter in this simple application.
[Source: STACK_EXCHANGE]
Have you ever worked somewhere where they deployed once a quarter? I have. It sucks and it's super risky. On the other hand, I've been at places where we push to production 1000+ times a week. "But we have 75 people on the call and they're all paying attention." Yeah, OK. I've been on these and I've heard people sleeping. Midnight calls suck, and sleep-deprived people manually deploying large amounts of code with a lot of steps is RIPE for error. Mistakes happen, "short deployments" turn into hours, and you get delayed even further. Now imagine your archaic company has a deployment window, you missed it, and you have to wait for the next quarter. Congrats, you're now going to take 6 months to get some features into the hands of customers. Anyways, I'm here to discuss manual lengthy deployments and why they're risky. I'm not saying change your process right away (you can contact me for that). However, start thinking of the ramifications of stale code and manual deployment risks.
Why are manual deployments risky?
Manual deployments involve several steps that humans must oversee and execute. This process is typically slower, more error-prone, and riskier than automated deployments for a variety of reasons:
- Human Error: People can make mistakes. With many steps and people involved, there are more opportunities for human error, such as entering incorrect data, missing steps, or misunderstanding instructions. These errors can lead to issues ranging from delays to serious production outages or even security vulnerabilities.
- Reproducibility: Manually executed deployments may not always be consistent. Slight variations in the execution of the process by different team members can lead to different outcomes, which can cause problems when trying to replicate or troubleshoot issues.
- Knowledge Transfer: If only a few people know how to deploy the application, this can create a single point of failure. What if they leave the company, get sick, or are otherwise unavailable? This can cause delays and problems with maintaining the application.
- Slower Reaction Time: Manual processes typically take longer than automated ones. If there’s an issue that requires a new deployment, like a critical bug fix or security patch, the delay in getting the deployment out can have significant consequences.
- Scalability: As the system grows, the number of steps and complexity typically increases. The time required for manual deployments grows with it. Automation allows for faster, more scalable processes.
- Traceability and Auditing: Manual processes often lack thorough logging, making it difficult to trace back steps and understand what was done when problems arise. This can also present challenges for auditing and compliance.
- Cost: Manual processes require more human time and therefore cost more. Over time, these costs can significantly add up.
- Context Switching: Engineers involved in manual deployment have to shift their focus from their primary work, which can disrupt their workflow and decrease productivity.
How do I go from manual to automated process?
Setting up a CI/CD (Continuous Integration/Continuous Deployment) pipeline from a manual process involves various steps. It's actually really difficult to switch to automated deployments, for a variety of reasons. Here are some of the sources of resistance you might face.
- Loss of Control: Some might fear losing the hands-on control that manual processes offer. They might believe that automation is a “black box” that makes it difficult to intervene if things go wrong. It’s actually inherently less risky releasing small, frequent code rather than one big push every 6 months.
- Complexity: Setting up automated deployment systems can be complex. This can make the transition intimidating, particularly for smaller teams without dedicated DevOps expertise. Generally, a more likely scenario would be that somewhere in the organization you'd try out these new deployment pipelines and understand how they work before rolling them out elsewhere as you build knowledge.
- Risk of Failure: There can be concerns about the risk of something going wrong during the transition period, potentially leading to downtime or other negative impacts on production environments. This is normal. This goes back to DORA metrics, cycle time, MTTR, and more. Teams aren't elite off the bat and need to work on things. How you treat risk at your organization is important.
- Trust in the Automation Tools: Some might have concerns about the reliability or robustness of automation tools. They might worry about bugs in the tools themselves leading to problems. I.e., how do I know this really works?!
- Cost: There can be concerns about the costs of implementing automation, particularly for smaller teams or organizations. These costs can include not only the direct costs of the tools but also the time and resources needed to set up and maintain the automation system. CI/CD is just one part of the large picture on devops culture. It’s important to always continuously improve and research.
- Lack of Knowledge: If team members are not familiar with automation tools and practices, they might hesitate to adopt them. This can be addressed through training and gradual introduction of automation. For example, having one team member who is the "DevOps" expert will lead to a lot of problems across the organization. DevOps is a culture and a mindset for how we deliver software. There are also definitions of what elite software teams look like, so that helps.
- Fear of Job Loss: Some may fear that automation will make their roles redundant. In reality, while automation does change the nature of some roles, it often frees up people to focus on higher-value tasks rather than repetitive, manual work.
OK, you've heard some of the resistance to a CI/CD pipeline. Let's get into a theoretical setup. Some steps are way easier than others. For starters, I'd probably use an all-in-one tool with a test team to see if you can understand the process and be champions across the organization.
Here’s a generalized approach on setting up CI/CD:
- Identify the Current Process: Start by understanding the existing manual process. Document each step, noting any dependencies or manual interventions. Also, identify any potential bottlenecks or areas of frequent failure.
- Choose Your Tools: There are various CI/CD tools available, such as Jenkins, CircleCI, GitLab CI, GitHub Actions, Travis CI, and more. Your choice will depend on factors like your existing tech stack, budget, and specific needs. For instance, if your source code is already on GitHub, GitHub Actions could be a natural choice. If you're completely new, GitLab can be a great all-in-one tool. It really depends on where you are in your journey.
- Create the Build Process: Set up the process to automatically build your code whenever changes are committed to your repository. This includes tasks like compiling code, running preprocessors etc.
- Automate Testing: Integrate automated testing into the pipeline. Unit tests, integration tests, functional tests, and other types of tests should run automatically when code is committed. Only builds that pass all tests should move on to the next stage. You may not be able to get great coverage early on and that’s OK. Understand the concept of it.
- Automate Deployment: Set up automatic deployment to a staging environment for builds that pass all tests. Depending on your setup, this might involve tasks like copying files to a server, running database migrations, or invoking deployment scripts.
- Approval for Production: For CD, you may want to automatically deploy to production after successful tests, or you might have a manual approval step. The latter can be useful if you want a final review or if there are business considerations about when to deploy.
- Rollbacks: Implement a process for quickly and easily rolling back deployments that cause issues in production. This might be as simple as redeploying the previous version of your code, or it might involve more complex database rollbacks (see the sketch after this list).
- Monitoring and Logging: Your pipeline should automatically collect logs and monitor your applications. This will help you detect and troubleshoot issues quickly.
- Iterate and Improve: Once you have a basic CI/CD pipeline in place, you can keep refining it. You might add additional types of tests, improve your monitoring, or streamline your process further.
Transitioning from manual processes to a CI/CD pipeline involves an upfront investment of time and resources, but it can greatly pay off in terms of faster, more reliable deployments, better code quality, and more productive teams.
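To make the rollback step from the list above concrete, here is a minimal Python sketch of a deploy-and-verify step. All names here (deploy_version, the health URL, the version strings) are hypothetical stand-ins for whatever your platform actually provides; treat it as the shape of the logic, not a drop-in implementation.

import time
import urllib.request

def healthy(health_url, attempts=5, delay=2.0):
    # Poll a health endpoint; treat HTTP 200 as healthy.
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(health_url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # the service may still be starting up
        time.sleep(delay)
    return False

def deploy_version(version):
    # Hypothetical stand-in for your real deployment call
    # (kubectl, a platform API, a deploy script, ...).
    print(f'deploying {version}')

def deploy_with_rollback(new, previous, health_url):
    deploy_version(new)
    if healthy(health_url):
        return True
    # Health check failed: redeploy the last known-good version.
    print(f'{new} unhealthy, rolling back to {previous}')
    deploy_version(previous)
    return False

The point is the shape: deploy a small change, verify it automatically, and fall back to the last known-good version without needing a human on a midnight call.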
Wait, we just go straight to production? What about reviews!?
The goal of Continuous Integration and Continuous Deployment (CI/CD) is to automate the software release process, from integration and testing phases to deployment. With a mature CI/CD setup, the idea is to ensure that the code is always in a releasable state. While CI/CD enables faster, safer, and more frequent deployments, it’s worth noting that a fully automated pipeline straight to production may not be suitable for every organization or project. You might choose a Continuous Delivery model where every change goes through the pipeline and is prepared for release, but final deployment to production still requires a manual approval step. This approach offers a balance of speed and control, allowing for human oversight when needed.
Here’s why we often deploy directly to production with CI/CD:
- Faster Time to Market: CI/CD allows teams to deploy updates more quickly and frequently. Instead of waiting for a “release day” to roll out new features or bug fixes, they can be pushed to production as soon as they are ready. This accelerates the delivery of value to the end-user.
- Improved Quality: With CI/CD, every code change triggers an automated pipeline of build and test processes. This means bugs and issues are caught and fixed more quickly, resulting in improved software quality.
- Increased Productivity: Automation frees developers from time-consuming manual tasks such as integration and deployment, allowing them to focus more on writing code and less on the mechanics of getting it into production.
- Reduced Risk: Deploying smaller changes more frequently reduces the risk of each deployment. If an issue does occur, it’s usually easier to diagnose and fix because you have a smaller change set to consider.
- Faster Feedback Loop: By deploying directly to production, teams can gather feedback from end users more quickly, which can help guide future development efforts.
So there you have it. From my days in banking, releasing stuff at 5 a.m. with 40 people on a call, only for the release to be pushed later in the quarter, to deploying code 1000s of times per month at a tech company. You may or may not be ready yet for CI/CD, but there are steps you can take to reduce risk and deploy more often. Feel free to contact me if you have any questions or comments.
|
OPCFW_CODE
|
11-22-2019 01:12 AM
I have a ZCU102 board, rev. 1.1. Vivado/Petalinux 2018.3 environment.
The xapp1305 (ps_emio_eth_1g, PS GEM through EMIO) works with electrical SFP modules, but optical SFP modules are failing: I do not get a link established with optical SFP modules between the ZCU102 and a switch. I tried SFP modules from FS and Finisar (1000BASE-X). These SFP modules are working between the switch and PCs. I also tried these optical SFP modules with custom FPGA boards (XCZU4EV UltraScale+ device), where they are working. I also tried IBERT with success on all 4 SFP ports (1.25 Gbit/s) with exactly the same SFP modules.
With the electrical SFP module the status vector is 0x180B (RUDI(/I/) = 1, Link Synchronisation = 1, and Link Status = 1), regardless of the switch setting (AN (autonegotiation) on/off). The signal an_interrupt = 1.
With the optical SFP module the status vector is 0x080A (RUDI(/I/) = 1, Link Synchronisation = 1, and Link Status = 0) with the switch setting AN off.
With the optical SFP module the status vector is 0x0806 (RUDI(/C/) = 1, Link Synchronisation = 1, and Link Status = 0) with the switch setting AN on.
The signal an_interrupt = 0 in both failing cases.
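For reference, those vectors can be decoded bit by bit. Below is a small sketch; the bit positions are my reading of the Xilinx 1G/2.5G Ethernet PCS/PMA status vector documentation, so verify them against the product guide for your core version.

# Sketch: decode the 16-bit PCS/PMA status vector (bit positions assumed
# from the Xilinx 1G/2.5G Ethernet PCS/PMA documentation; verify for your core).
STATUS_BITS = {
    0: "Link Status",
    1: "Link Synchronization",
    2: "RUDI(/C/)",
    3: "RUDI(/I/)",
    4: "RUDI(Invalid)",
    7: "PHY Link Status",
    12: "Duplex (1 = full)",
}

def decode(vector: int) -> None:
    print(f"status vector 0x{vector:04X}")
    for bit, name in sorted(STATUS_BITS.items()):
        print(f"  bit {bit:2d} {name}: {(vector >> bit) & 1}")
    speed = (vector >> 10) & 0b11  # bits 11:10 encode the speed
    print(f"  speed code {speed:#04b} (0b10 = 1000 Mb/s)")

for v in (0x180B, 0x080A, 0x0806):  # the three values observed above
    decode(v)

Decoded this way, 0x180B shows link up at 1000 Mb/s full duplex, while 0x080A and 0x0806 show synchronization without link status, matching the observations above.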
It looks like the PCS/PMA is receiving the idle pattern and synchronizing, but why is it not sending any pattern to the switch?
Any ideas why the optical SFP module is not working?
11-28-2019 08:41 PM
Hi @bcstoko ,
Yes, XAPP1305 PS GEM (1G) would fail with optical SFP modules.
For 1G 1000BASE-X validation, a Cisco GLC-T 1000BASE-X Ethernet SFP module was used (SN: AGM170623ZT). For 1G SGMII validation, a Cisco GLC-T 1000BASE-T 100 m RJ45 Ethernet SFP module was used (SN: CLS10310606). For 10G, Solarflare's SFN6322F Dual-Port 10GbE SFP+ Adapter is the NIC that has been used, together with an Avago AFBR-709SMZ optical Ethernet SFP+ module.
11-29-2019 01:03 AM
I know that xapp1305 has been tested with electrical SFP/SFP+ modules. The electrical SFP module is working in my design.
My question is, what do I have to do to get optical SFP (1GbE) working? I tested all 4 combinations of AN on/off (switch and ZCU102), nothing works. How can I debug this further? The jumpers on ZCU102 board are all set (J16, J17, J42, J54), connecting SFPx_TX_DISABLE_TRANS to GND.
|
OPCFW_CODE
|
Jam-upnovel Nanomancer Reborn – I’ve Become A Snow Girl? webnovel – Chapter 1058 Argus protective apparel to you-p2
Novel–Nanomancer Reborn – I’ve Become A Snow Girl?–Nanomancer Reborn – I’ve Become A Snow Girl?
Chapter 1058 Argus jewel lewd
“Potentially. Though… that might have to wait, considering there were monsters in space in Aria. I remember launching some satellites and having to guard them often. But with how complicated the world is, there's no doubt that tier 6s or 7s could be swimming around in that vacuum by now. I don't want to risk it more than I have.” Shiro shrugged. They shouldn't get too greedy yet.
“I have to face it eventually, so why not do it now? Plus, I should also clean the place up a little.”
[Argus Panoptes raises a toast to Shiro. He expresses great delight at having such a spy network named after him and offers you his blessings.]
Pinching the surface of the hologram, buildings began to appear as a translucent dome covered everything.
“Well, isn't this quite the coincidence? Just as you're trying to convince me to join her side and apologise for your past transgressions, the girl in question has named a rather advanced network after me.” The man smiled as the one who sat opposite him raised his eyebrows in surprise.
Named after the all-seeing giant, Argus Panoptes, the Argus satellite network was now ready for deployment.
Hearing this, Shiro's eyes flickered with delight.
One after the other, a golden crest lit up next to the word Argus on each of the satellites. The crest was that of an eye with its pupil replaced by the planet.
Letting go of Nan Tian's sleeve, she floated forward as her mana flared.
Opening her eyes once again, her gaze was sharp.
A golden energy rippled from her body.
Her next goal was to set up a city on the moon.
Clasping her two palms together, Shiro created a tier 8 magic circle between her hands with the help of the God runes.
“True, finding things on this planet would also be a lot easier. Since the event locations will increase in mana density, just a quick scan should tell you where the next event is so that you can prepare beforehand.” Nan Tian agreed.
“Mn, most importantly, I can uncover new gateways to the past along with a Queen's main city if they're not careful with the defences they put down.” Shiro smiled as she tapped her fingers against the table.
With the satellite plans in place, Shiro looked down at her hands for a moment before taking a deep breath.
She did wonder if she needed a space station at all, but considering that they could monitor everything fairly easily from the command centre, it shouldn't be too hard. But just to make things easier, she'd add a separate command centre for space-related matters.
“Mn? What is it?”
“I still have a portal node in the room that I made before. If we teleport through there, we won't need to worry about breaking past the barriers again.” Shiro said as her voice became a little quiet.
“Plus, it'll be a great place for a vacation, no? Take everyone there and enjoy the view of the planet from space.” Shiro smiled.
Leaning against the furniture was a handsome man who had both his eyes shut. He had long black hair and wore a very practical suit.
Their weapons were revealed and then tucked away.
“Are you sure?” Nan Tian asked as Shiro nodded.
“Fair enough. Let's just stick with satellites around the planet, in addition to creating a city on the moon.”
Watching this unfold, Nan Tian smiled lightly, for it was simply beautiful. With her white hair fluttering behind her from the mana being released by the tier 8 magic circle, lightning began to flicker as nanobots appeared in every one of the magic circles.
|
OPCFW_CODE
|
I think you're being premature in the view that it "hasn't taken off", and I'd say 2-3 years from now the landscape could look quite different. Certainly I can see Scala joining the top tier of languages with Java, C#, etc., though probably never overtaking Java completely. I know for my company at least we're just hitting that tipping point where Scala is now the first choice for new projects.
Scala, Clojure, etc. are still relatively young languages and we're only just reaching the point where the quality of tooling and learning resources is good enough to make them serious commercial contenders. Tooling in particular is a big part of it - I personally find that I can still write java code much quicker than scala a lot of the time even though it's much more verbose, just because the IDE is so powerful. It's only in the last 12 months that Eclipse scala support is ready for prime time, and even now it's still missing useful things like documentation-on-hover that I take for granted with java.
I guess there's also the underlying problem that OO languages, java in particular, are still the standard being taught in university (yes, you might have one or two modules on FP but it's probably not what most coursework is done in). As the previous thread showed it's very hard coming from that background to switch to an FP way of thinking and I'd say most developers aren't going to want to put in that level of effort in their own time.
I attended an interesting talk recently that the BBC's tech department put together on FP in online media. They had a panel of media companies who are using FP languages go through the reasons why they were making the switch and the challenges they found. There were a lot of big names there (ITV, Sky, Guardian, Daily Mail) and they all had pretty similar reasons: problems of concurrency, problems of scaling, etc. A couple of them had brought in "FP coaches" to help with the transition. I can see this approach becoming a lot more common too, though probably not to the same extent as agile coaching in the last few years.
One place I can definitely see FP use expanding quickly is in big data processing. We've recently switched our analytics stack over to Scalding (Twitter's Scala map-reduce framework) and it's been a massive success compared to our previous approach using Hive and Java UDFs. Map-reduce tasks are very well suited to an FP approach: there's typically no real concept of "state", and it makes the jobs much easier to test locally before trying them out on a live cluster (very important if you're running clusters with [masked]s of nodes for multiple hours). It's also allowed us to make the analytics codebase just part of the general code that any developer can work on, rather than requiring a dedicated big data expert.
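To illustrate why map-reduce jobs suit a functional style (a toy sketch in Python rather than Scalding; the function names are illustrative), each stage is a pure function with no shared state, so it can be unit-tested without a cluster:

from collections import Counter
from functools import reduce

# Toy word count in map-reduce style: pure functions, no shared state.
def mapper(line):
    return Counter(line.split())

def reducer(a, b):
    return a + b

def word_count(lines):
    return reduce(reducer, map(mapper, lines), Counter())

# Runs anywhere, no cluster required; this is what makes such jobs testable.
assert word_count(["a b a", "b c"]) == Counter({"a": 2, "b": 2, "c": 1})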
|
OPCFW_CODE
|
When I tried to do a clean install of Windows XP, it partitioned the hard disk as C:\ and there was still 8 MB of unallocated free space remaining. You need two or more unallocated areas of disk space to set up a striped volume. If you're in Windows XP, open the Administrative Tools menu.
Unpartitioned space in Windows XP Setup
If I select the unpartitioned space on C it tells me: "The unpartitioned space you selected is reserved for Windows XP partitioning information." Press the ESC key to continue with a clean install of XP. If you want to install Windows XP, use the arrow keys to select the partition where you want to install it, and then press Enter. If I select the unpartitioned space it tells me: "The partition or unpartitioned space you selected is too small for Windows XP." Setup step 8: the following list shows the existing partitions and unpartitioned space on this computer; use the UP and DOWN ARROW keys to select an item in the list. To repair a Windows XP installation using Recovery Console, press R. At the partition screen, I pressed Enter to install Windows XP in the unpartitioned space (it was the only partition). "The Setup will complete in approximately:" the time estimate on the left is based on the number of tasks that the Windows XP setup program has left to complete, not on a true estimate of the time it will take to complete them. After you see the Windows setup screen, press Enter to set up Windows XP. Yesterday I reformatted my drive for the first time and I noticed I had 3 partitions: one was the C partition with the bulk of the data, and the next was a small FAT partition that I deleted. To set up Windows XP now, press Enter. To quit Setup without installing Windows XP, press F3. Select a partition or unpartitioned space of at least MB. I want to have Setup format the partition using the NTFS file system.
Unpartitioned space no longer appears when reinstalling: Hi, I have a Dell Latitude D that has Windows XP Pro. Page 1 of 3 - Partitions and unpartitioned space - posted in Windows XP, NT: I got through the first two pages of the "How to Install Windows XP" guide here, but the third screen shows: C: Partition1 [Unknown] MB, Unpartitioned space 8 MB. I have the options: Enter=Install, D=Delete, F3=Quit. I read in the guide that if I use Delete I lose all the data in that partition. We still have an unpartitioned space of MB; this indicates the partition has been created. "The following list shows the existing partitions and unpartitioned space on this computer." I deleted the partitions, but XP Setup now shows multiple entries labeled "Unpartitioned space" instead of just one entry. I press C to create a partition on that. Next, select the unpartitioned space by pressing the down arrow key. In order to add a partition the instructions say to right-click… Archived from groups: vsbmjuj.tkm_maintain (More info?) Hi Randy, since there is no unallocated space on the hard disk, you would need… My plan is to create two new partitions, one to install XAMPP or some…
I'm performing a clean install of Windows XP, and in the setup screen, after deleting the existing partitions, my disk size is GB; I gave 20 GB to C: and I have one unpartitioned space of 90 GB. When XP is installed I see only 20 GB on my C: drive. I re-installed again and it didn't let me use the unpartitioned space. Can somebody please HELP ME!
To create a partition in the unpartitioned space, press C. Then it shows C: Partition 1 followed by the size in MB; this indicates the partition has been created. To delete the selected partition, press D. It still says "Unpartitioned Space". I can't get…
|
OPCFW_CODE
|
Yesterday, I provided a brief performance overview of the MSIL JIT backend versus my implementation of an interpretive VM for various workloads.
Today, I’ll mostly pontificate on conclusions from the JIT project. It has certainly been an interesting foray into .NET, program analysis, and code generation; the JIT engine is actually my first non-trivial .NET project. I have to admit that .NET turned out to not be as bad as I thought that it would be (as much as I thought I wouldn’t have said that); that being said, I don’t see myself abandoning C++ anytime soon.
Looking back, I do think that it was worth going with MSIL (.NET) as the first JIT backend. Even though I was picking up .NET Reflection for the first time, aside from some initial frustrations with referencing /clr mixed types from emitted code, things turned out relatively smooth. I suspect that writing the JIT against another backend, such as LLVM, would have likely taken much more time invested to reach a fully functional state, especially with full support for cleaning up lingering state if the script program aborted at any point in time.
Justin is working on a LLVM JIT backend for the JIT system, though, so we’ll have to see how it turns out. I do suspect that it’s probably the case that LLVM may offer slightly better performance in the end, due to more flexibility in cutting out otherwise extraneous bits in the JIT’d native code that .NET insists on (such as the P/Invoke wrapper code, thin as it may be).
That being said, the .NET JIT didn't take an inordinate amount of time to write, and it fully supports turning IL into optimized x86, amd64, and ia64 code (Andrew Rogers's 8-year-old Itanium workstation migrated to my office at work, and I tried the JIT engine out on ia64 over the weekend using it — the JIT system did actually function correctly, without any additional development work necessary, which makes me happy). There was virtually no architecture-specific code that I had to write to make that happen, which in many respects says something impressive about using MSIL as a code generation backend.
MSIL was easy to work with as a target language for the JIT system, and the fact that the JIT optimizes the output freed me from many of the complexities that would be involved had I attempted to target x86 or amd64 machine code directly. While there’s still some (thin) overhead introduced by P/Invoke stubs and the like in the actual machine code emitted by the .NET JIT, the code quality is enough that it performs quite well at the end of the day.
Oh, and if you’re curious, you can check out an example NWScript assembly and its associated IL. Note that this is the 64-bit version of the assembly, as you can see from the action service handler call stubs. For fun, I’ve heard that you can even turn it into C# using Reflector (though without scopes defined, it will probably be a bit of a pain to wade through).
All in all, the JIT engine was a fun vacation project to work on. Next steps might be to work on patching the JIT backend into the stock NWN2 server (currently it operates in my ground-up server implementation), but that’s a topic for another day.
|
OPCFW_CODE
|
This page provides an overview of some of the tools AppDynamics provides for monitoring Java applications and troubleshooting common issues.
JVM Key Performance Indicators
A typical JVM may have thousands of attributes that reflect various aspects of the JVM's activities and state. The key performance indicators that AppDynamics focuses on as most useful for evaluating performance include:
- Total classes loaded and how many are currently loaded
- Thread usage
- Percent CPU process usage
On a per-node basis, AppDynamics reports:
- Heap usage
- Garbage collection
- Memory pools and caching
- Java object instances
You can configure additional monitoring for:
- Automatic leak detection
- Custom memory structures
View JVM Performance
You can view JVM performance information from the Tiers & Nodes dashboard or from the Metric Browser.
In the Tiers & Nodes dashboard, see the following tabs for JVM-specific information:
- The Memory subtab of the Tiers & Nodes dashboard allows you to view various types of JVM performance information: Heap and Garbage Collection, Automatic Leak Detection, Object Instance Tracking, and Custom Memory Structures.
- The JMX subtab of the Tiers & Nodes dashboard allows you to view information about JVM classes, garbage collection, memory, threads, and process CPU. In the JMX Metrics subtab metric tree, click an item and drag it to the line graph to plot current metric data.
In the Metric Browser, click Application Infrastructure Performance and expand the JVM folder for a given node to access information about Garbage Collection, Classes, Process CPU, Memory, and Thread use.
Alert for JVM Health
You can set up health rules based on JVM or JMX metrics. Once you have a health rule, you can create specific policies based on health rule violations. One type of response to a health rule violation is an alert. See Alert and Respond for how to use health rules, alerts, and policies.
You can also create additional persistent JMX metrics from MBean attributes. See Configure JMX Metrics from MBeans.
JVM Crash Guard
Using the Machine Agent, when a JVM crash occurs on a machine or node, you can be notified almost immediately and take remediation actions. A JVM crash is critical because it may be a sign of a severe runtime problem in an application. Implemented as part of JVM Crash Guard, the JVM crash is an event type that you can activate to provide the critical information you need to expeditiously handle JVM crashes.
Memory management includes managing the heap, certain memory pools, and garbage collection. This section focuses on managing the heap. You can view heap information in the metric browser or in the Memory tab for a given node as directed in Monitoring JVM Information.
The size of the JVM heap can affect performance and should be adjusted if needed:
- A heap that is too small will cause excess garbage collections and increases the chances of out-of-memory errors.
- A heap that is too big will delay garbage collection and can stress the operating system, which may need to page the JVM process to cope with large amounts of live data.
See Garbage Collection.
Detect Memory Leaks
By monitoring the JVM heap and memory pool, you can identify potential memory leaks. Consistently increasing heap valleys might indicate either an improper heap configuration or a memory leak. You can identify potential memory leaks by analyzing the usage pattern of either the survivor space or the old generation. To troubleshoot memory leaks, see Java Memory Leaks.
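As a rough illustration of the "increasing heap valleys" heuristic (a sketch only, not AppDynamics code; it assumes you have exported a series of post-GC heap samples, for example from the Metric Browser):

# Sketch: flag a potential leak when post-GC heap "valleys" trend upward.
# Input: heap usage sampled right after each major collection, in MB.
def leak_suspected(post_gc_heap_mb, tolerance_mb=5.0):
    if len(post_gc_heap_mb) < 3:
        return False  # too few samples to see a trend
    # Count valleys that rose by more than the tolerance vs. the previous one.
    rises = sum(
        1
        for prev, cur in zip(post_gc_heap_mb, post_gc_heap_mb[1:])
        if cur - prev > tolerance_mb
    )
    # If nearly every valley is higher than the last, memory is not being
    # reclaimed after collections, which is a classic leak signature.
    return rises >= len(post_gc_heap_mb) - 2

print(leak_suspected([400, 430, 465, 502, 540]))  # True: valleys keep climbing
print(leak_suspected([400, 395, 410, 398, 405]))  # False: valleys are stable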
Detect Memory Thrash
Memory thrash is caused when a large number of temporary objects are created in very short intervals. Although these objects are temporary and are eventually cleaned up, the garbage collection mechanism might struggle to keep up with the rate of object creation. This might cause application performance problems. Monitoring the time spent in garbage collection can provide insight into performance issues, including memory thrash. For example, an increase in the number of spikes for major collections affects the JVM's ability to serve Business Transaction traffic and might indicate potential memory thrash.
The Tiers & Nodes > Memory > Object Instance Tracking subtab helps you isolate the root cause of possible memory thrash. To troubleshoot memory thrash, see Java Memory Thrash.
Monitor Long-lived Collections
AppDynamics automatically tracks long-lived Java collections (HashMap, ArrayList, and so on) with Automatic Leak Detection. Custom memory structures that you have configured display in Tiers & Nodes > Memory > Custom Memory Structures.
AppDynamics provides visibility into:
- Cache access for slow, very slow, and stalled business transactions
- Usage statistics (rolled up to Business Transaction level)
- Accessed keys
- Deep size of the internal cache structure
|
OPCFW_CODE
|
The Road to Code…
The journey to becoming a self taught developer
Last year I decided to take my interest in Software Development seriously.
So like most newbies I started looking for Bootcamps to enrol in…
I didn’t realise bootcamps were so expensive! I already knew I couldn’t afford it, but I was still determined to learn so I opted to take the self-taught route.
I told myself that “Developers don’t have two heads. If they can learn how to code, then so can I”.
There are many pros and cons to being self taught in something — it is cheaper, but there is a lack of structure (well, for me there was). It can be difficult to know what to prioritise, and if you're like me, you will find yourself relearning the same concepts over and over again.
So I needed a solution.
And my solution was #80DaysofCode (variation of #100DaysofCode)
The challenge begins
Why 80 days? Very simple reason. At the time, there were 80 days until the 1st January 2021 and I was adamant on entering 2021 as a coder.
I didn’t have any social media accounts to document my progress, so I opted for the next best thing — a WhatsApp group chat. I created this with a friend for accountability and to post daily updates.
I coded for (almost) 80 days. The overall process was difficult and I lost motivation many times. The only thing motivating me was my end goal — by the beginning of 2021 I would know how to code. I was determined to make it happen.
Some of the technical things I learnt were:
- HTML5 / CSS3
- Frameworks — Bootstrap 4, React
- GitHub & Git
I also built a few projects, which can be viewed on my GitHub page here.
The Journey Continues…
The learning never stops.
The 1st of January arrived and I was proud of my achievements so far, but I couldn’t just stop there. I thought it would be beneficial to learn some common concepts. This included:
- Data Structures & Algorithms
- Object Oriented Programming (OOP)
- SOLID Principles
and I started to familiarise myself with new technologies which included:
and this list will continue to grow (maybe you can all keep me accountable)!
Subsequently, in February 2021 I also launched my portfolio site.
We are now in March 2021, so I can’t wait to see what I’ll be able to do by the end of the year!
My advice to fellow self-teachers
Learning to code is hard, but doing it alone makes it harder. I can’t promise you a smooth ride, but I can tell you having the willingness to learn and a positive mindset will help you reach your goal.
From a newbie to another newbie, here are some of my biggest takeaways:
- Plan, plan, plan! — It is very easy to get overwhelmed. Structure your time, find your learning style and set realistic goals.
- Find your learning style — This follows on from the previous point and is so important. We all learn in different ways for example, some people prefer to watch videos while some like to read a textbook. For me personally, I am an advocate for learning via videos. Find what’s best for you and stick with it.
- You learn by doing — Don’t rely solely on just watching or reading content. You will learn and retain more when you start building projects (That’s definitely when things started to stick in my head)
- Patience — You and frustration will probably become best friends for a while. If it wasn’t for the fact I purchased a new Macbook, I would have thrown my laptop on the floor! Coding takes time to master, but you will definitely get there. Be patient, trust the process and don’t be so hard on yourself.
Resources and Groups to Join
I can’t end this post without sharing the resources that helped me! Here are a few:
Free Code Camp (Free)
The Come Up (YouTube)- Software Engineer based in the US
Programming with Mosh (YouTube) — Great code tutorials
Computer Science Tutorial (YouTube) — Good for React Native
There is so much more to share, but I will end here for now.
I will be documenting more of my journey here so stay tuned for more posts! Excited to see what the road ahead holds for me.
Thank you for reading!
|
OPCFW_CODE
|
Course idea: React Native basics using REPL.it's new classroom feature
REPL.it just announced support for React Native. Here's the announcement: https://repl.it/site/blog/react_native
If we can build an interactive course with automated tests using REPL.it, we can host it through REPL.it's embed feature at something like freecodecamp.com/react-native
Is there anyone reading this who's interested in React Native and willing to port Facebook's React Native documentation to an interactive, test-driven format?
I would be interested in helping with this if other people are interested.
I've been using expo for a little while and the REPL implementation seems to work well.
Would someone have to have a smartphone to work through the challenges though?
It seems like unit tests and input/output testing isn't available yet for React Native:
The React Native docs use a mock simulator. We could use something like that with manual testing, but I'm not sure if that is the best option.
Any thoughts?
@gwenf are you actively working on the course? (what's the timeline look like for you? We can bump this up in pri if you're actively working on it)
@gwenf Thanks for your interest in this project (and your patience with my slow response). @amasad is the founder of REPL.it. He's actively helping us build a lot of other functionality for our community.
If you're still interested in helping us build React Native challenges, I can help get the word out and see if anyone else is interested. Assuming @amasad was able to get input/output testing to work, and I was able to find someone else who's interested in teaming up with you, how soon would you be interested in starting on this?
@QuincyLarson I have been trying to come up with an outline and some challenge ideas. It would be great if someone wants to help and bounce around ideas for this.
@amasad I would like to start working on this. When do you think it will be ready at least to try out?
I'll get something out for you by next week. Have you looked at the Jest/React Native way of testing? Do you think it'd be good for your use-case? Take a look and let me know what you think
https://facebook.github.io/jest/docs/tutorial-react-native.html
@amasad I've always used Enzyme instead of Jest but I think they both do pretty much the same thing. Are you planning on having a set up with something like Mocha-Expect-Jest? Or what other testing tools are you going to use?
Jest ships with its own expect library (powered by Jasmine). It's an end-to-end tool, but we can probably add Enzyme too.
@amasad
Hey, just wondering where you are at with this?
No rush.
@gwenf seeing a bit of complication on how to make jest work well with all the other modules. Currently talking to the Expo team to see if they can help us.
@amasad Hey, how's it coming?
It's done. I just need to deploy. Tuesday will be out for sure.
@amasad @gwenf awesome! Excited to see this in action :)
@gwenf This is ready. Take a look at this example: https://repl.it/community/classrooms/23713
One issue is that running the tests is much slower than I would've liked (as is the rest of the site). It takes about 20-30 seconds to run the tests (most of the time goes to initializations).
I'll work on making it faster but in the mean time it's ready to be used. Let me know if you have any feedback or questions.
I'm playing around with this now. I will let you know if I have trouble. Thanks for setting it up.
@amasad Hey, I need a little help.
I'm trying to run the tests I created when I login as a student but the tests just run forever - the spinner keeps going and it says Running Tests.
I am getting an error in the console that says: Uncaught SyntaxError: Unexpected end of JSON input
My classroom teacher account's username is gwenf (Gwen Faraday).
The stack trace doesn't appear to be helpful but I will include a screen shot just in case:
@QuincyLarson Hey, here is a list of challenges (following the order of the docs).
I'm working on writing tests for them now. I'm guessing I will go ahead and finish writing these in my account on REPL.it and then somehow transfer them to FCC so they can be embedded. Just wanted to let you know I'm still working on it.
@gwenf this is off to a great start. Ideally these challenges would be stored on your GitHub repo, then pulled into REPL.it (similar to how we're handling the Python challenges).
But we can worry about this later if you want to get started designing these challenges.
@gwenf I'll look at that in the next few days.
@QuincyLarson currently Github integration only works for the Python3 (it requires work to support other languages like parsing the unit tests etc). Let's test out the python implementation first before we move on to other languages.
@amasad Hey. Have you been able to run tests as a student?
If you have, would you mind sending me whatever sample testing you have that's working?
Thanks.
@amasad Hey, I'm still getting the same error. Have you been able to take a look? I'm meeting with someone next week to knock out some of the challenge design and I would like to have this working on some level :)
Just to follow up on this issue: I had a meeting yesterday with another camper who wants to work with me on this curriculum :) I'll keep this issue updated as we make progress.
@gwenf We just launched our new learning platform and our expanded curriculum (1,000+ new coding challenges). We'd still be very interested in React Native challenges. You may be able to teach some React Native concepts right on freeCodeCamp, and could add supplemental / additional content using REPL.it on the freeCodeCamp Guide. Let me know your thoughts on this :)
It would be an interesting project to use React Native Basics with REPL.
Sorry guys, we deprecated React Native on Repl.it, mainly because Expo (upstream) was moving fast and their own playground had much better support: https://snack.expo.io
On Sun, Nov 4, 2018, 1:18 PM Vinay Sagar Sharma <EMAIL_ADDRESS> wrote:
@QuincyLarson @gwenf It looks like this issue could be considered obsolete, or refactored to transfer the created React Native course to expo.io.
@sergeyradov we are working on the React Native challenges currently using react-native-web. Let me know if you want to help with them.
Are you still working on this @gwenf? Perhaps this should be brought up over on the new curriculum expansion repo.
@moT01 I think this issue should be closed and an issue on freeCodeCamp/curriculumExpansion should be opened with a reference to this
@thecodingaviator that sounds good. We have quite a few challenges written already. We've had some trouble with a few things like parsing the React Native code but I would really like to get this done.
@gwenf Please get back to us once you have a working draft ready, we'd be glad to see it and discuss where to put it
|
GITHUB_ARCHIVE
|
Moving from MySQL to Postgres on Rails 3
Aside from the removal of some MySQL-specific queries, the migration was pretty smooth. The problem now is that during development there are a lot more queries to the DB than before.
Started GET "/profiles/data" for <IP_ADDRESS> at Tue Sep 21 10:26:18 +0200 2010
Processing by ProfilesController#data as JSON
User Load (24.3ms) SELECT "users".* FROM "users" ORDER BY updated_at DESC LIMIT 1
CACHE (0.0ms) SELECT "users".* FROM "users" ORDER BY updated_at DESC LIMIT 1
SQL (10.5ms) SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
FROM pg_attribute a LEFT JOIN pg_attrdef d
ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = '"users"'::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum
Every single query results in 3-8 additional queries like the above. What is happening, and why? One of the problems now is that development.log is bloated and unreadable. I waste loads of time scrolling in between those queries looking for the right thing...
Update: Tue Sep 21
This is not related to the query type. All the queries are generating this kind of stuff:
ree-1.8.7-2010.02 > User.first
SQL (0.3ms) SHOW client_min_messages
SQL (2.0ms) SET client_min_messages TO 'panic'
SQL (6.3ms) SET standard_conforming_strings = on
SQL (18.3ms) SET client_min_messages TO 'notice'
SQL (15.6ms) SET time zone 'UTC'
SQL (17.2ms) SHOW TIME ZONE
SQL (23.8ms) SELECT tablename FROM pg_tables WHERE schemaname = ANY (current_schemas(false))
User Load (162.4ms) SELECT "users".* FROM "users" LIMIT 1
SQL (7.5ms) SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc,
a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid
AND a.attnum = d.adnum WHERE a.attrelid = '"users"'::regclass AND a.attnum > 0 AND
NOT a.attisdropped ORDER BY a.attnum
[...]
1 row in set
ree-1.8.7-2010.02 >
Post the query that is generating the statement. You are probably using some MySQL-oriented code.
Not the case, explanation added to the question.
I stole this from another post: You might want to have a look at http://github.com/dolzenko/silent-postgres That plugin strips those queries out. Those log noise occurs because of the high postgresql log level.
The second query is used by your application to get information about the datatype used and to see if the column is nullable or not. If you're using pgAdmin3 you'll see a lot these type of queries as well, just to get meta data of the results. Most applications don't need queries like this, it's mostly usefull during development and for tools like pgAdmin.
Ok, but is there a way to disable this during development? I can't trace my log anymore now. It's getting really annoying...
Edit postgresql.conf and set log_min_duration_statement to 1000 (1000 milliseconds = 1 second). You could also set log_min_error_statement to ERROR. You have to reload postgresql.conf as a superuser: SELECT pg_reload_conf(); You could restart your database server as well.
|
STACK_EXCHANGE
|
Our system has 3 DC's
one is windows server 2003 (old) DC-01-A
two new ones are windows server 2012. DC-2012-01 and DC-2012-02
All were set up to be Domain controllers, and set up as GC's.
I gave the new DC-2012-01 all the FSMO roles.
However, once I tried to demote the old one and remove it from the domain, none of the logins would work anymore except the domain administrator account.
As a result, I decided to re-promote DC-01-A , but that didn't fix the issue.
It seems that the domain doesn't recognize that any of the GC's are actually GC's. Whenever I for example, decide to uncheck the GC box in the NTDS window, it gives me a warning that there are no other GC's on the domain.
I'm probably going to get fired if I don't fix this soon, and I've literally been working at this for over a day.
Any ideas?
On one of your 2012 DCs can you run this from command line. Do the roles show up as being owned by one of your 2012 DC's?
netdom query /domain:domain_name fsmo
Netdom results show the schema master is pointing to DC-2012-01;
the rest are pointing to DC-01-A.
But I think that's because I re-promoted DC-01-A.
I should note at this point I don't care which server manages what, I just need the users to be able to log in.
Wondering if during the dcpromo process when you demoted the 2003 server if something did not complete fully. Might need to look into metadata cleanup because of a failure during the demotion.
So I need to demote DC-01-A again and then clean up the metadata? Or should I do that without a demote?
Personally I would move FSMO roles back over to the 2012 controller. Try again to do a dcpromo to demote to standalone server and confirm the process completes without error. If that fails then I would look to this as the next step.
Also this is the better guide to go by - http:/
I'm unclear about something, though, that's making me hesitant to delete the metadata:
if I delete the metadata, would that remove the users and such from the domain? Or just from that server?
The second link will specifically remove any information pertaining to the 2003 DC if it is in fact leaving bits in AD causing your logon issues.
Also I meant to ask in AD sites and Services do you just have one site that all of your subnets are tied to?
It failed to demote DC-01-A
"The operation failed because:
Failed to configure NETLOGON as requested
"the wait operation timed out."
Try changing the DNS server in tcp/ip to one of the 2012 DC's not to itself and then see if that works for the dcpromo.
I ran the dcpromo again and it worked. In the process of cleaning up the metadata now.
Going through the instructions in the second link, the server I removed is not actually there. However, there is another server that is really old and I know is not on our network. Would that be what's causing this issue?
If there are old DC's referenced and you know without a doubt then I would definitely remove.
I removed everything, but it still hasn't fixed the issue.
Could I reset the GCs by unchecking and rechecking the GC box under NTDS options for both remaining DCs? If I did so, would I lose all the user/server accounts?
I've never seen that happen from removing from sites and services, and would say that it's probably worth trying. So you said that domain admin accounts can logon, but not normal users?
I finally fixed the issue.
Apparently there was an old orphaned domain that was preventing the new DC from being promoted to a GC. Once that domain was removed, it promoted and everything worked fine.
Thank you SO much for your help Sean!
Well done Ahmed! Glad it worked out for you in the end!
|
OPCFW_CODE
|
How do I get the timeframe for an open order in MT4 (MQL4)?
I'm scanning through the order list using the standard OrderSelect() function. Since there is a great function to get the current _Symbol for an order, I expected to find the equivalent for finding the timeframe (_Period). However, there is no such function.
Here's my code snippet.
...
for (int i = orderCount() - 1; i >= 0; i--) {            // orderCount() is a custom helper
    if (OrderSelect(i, SELECT_BY_POS, MODE_TRADES)) {    // select the i-th open order
        if (OrderMagicNumber() == magic && OrderSymbol() == _Symbol) j++;
        // Get the timeframe here (there is no built-in OrderPeriod() to call)
    }
}
...
Q: How can I get the open order's timeframe given its ticket number?
In other words, how can I roll my own OrderPeriod() or something like it?
Always equally interesting when people down-vote without any comment or suggestion for improvement. In that sense, I will start voting in meta that people should not be able to down-vote without adding a comment.
You could add that value to the comments and parse it from there. That's what I do.
There is no such function. Two approaches might be helpful here.
First and most reasonable is to have a unique magic number for each timeframe. This usually helps to avoid some unexpected behavior and errors. You can update the input magic number so that the timeframe is automatically added to it: if your input magic is 123 and the timeframe is M5, the new magic number could be 1235 or something similar, and you will use this new magic when sending orders and when checking whether a particular order is from your timeframe. Or make it both input-magic- and timeframe-dependent, if you need that.
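For illustration, a sketch of one such encoding (written in Python for brevity; the arithmetic ports directly to MQL4, and all names here are hypothetical). Multiplying instead of concatenating digits keeps the two fields unambiguous, since a bare 1235 cannot be split back reliably:

# Sketch: embed the chart timeframe (in minutes) in the magic number.
# MQL4 magics are 32-bit ints, so keep base_magic below ~21474 here.
def encode_magic(base_magic, timeframe_minutes):
    # e.g. base 123 on M5 -> 123 * 100000 + 5 = 12300005
    return base_magic * 100_000 + timeframe_minutes  # MN1 = 43200 < 100000

def decode_timeframe(magic):
    return magic % 100_000

def decode_base(magic):
    return magic // 100_000

magic = encode_magic(123, 5)        # order opened from an M5 chart
assert decode_timeframe(magic) == 5
assert decode_base(magic) == 123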
Second approach is to create a comment for each order, and that comment should include the timeframe, e.g. "myRobot_5", and you parse OrderComment() in order to get the timeframe value. I doubt it makes sense, as you'll have to do useless parsing of strings many times per tick. Another problem here is that the comment can be changed by the broker, e.g. if a stop loss or take profit is executed (and you need to analyze history), or if an order was partially closed.
One more way is to have instances of some structure or a class inherited from CObject and have a CArrayObj or an array of such instances. You will be able to add as much data as needed into such structures, and even change the timeframe when needed (e.g., you opened a deal at M5, you trail it at M5, it performs fine so you close part and virtually change the timeframe of the deal to M15 and trail it on the M15 chart). That is probably the most convenient for complex systems, even though it requires some coding (do not forget to write down the list of existing deals into a file or serialize them somehow in OnDeinit() and then deserialize them back in OnInit()).
Hi Daniel, thanks for that quick answer. I was looking into the first two options and also decided against parsing comments, but kept the magic as an option, although it also needs a little bit of parsing. As for the last option, it seems too complicated, unless you have some example or template code to refer to. At the end of the day I was hoping someone would come up with a more creative and simple solution, as often happens here at SE.
So what is the problem with the 1st approach? Why would you need the same magic for different experts that work on different timeframes?
|
STACK_EXCHANGE
|
Mailinglist Archive: opensuse-factory (393 mails)
Re: [opensuse-factory] Re: Meeting minutes dist meeting 2007-03-23
- From: Johannes Meixner <jsmeix@xxxxxxx>
- Date: Thu, 29 Mar 2007 12:31:44 +0200 (CEST)
- Message-id: <Pine.LNX.4.64.0703291127590.24650@xxxxxxxxxxxxxx>
I added Michal Zugec, the author of the YaST printer module to CC.
On Mar 28 11:37 Chris Rivera wrote (shortened):
> > >
> > > I noted it could be a starting point, nothing more. How do we generate
> > > our database? No one in the previous dist meetings indicating we had a
> > > good mapping of printers to print drivers and noted that previous
> > > efforts to collect this data had failed.
> > YaST already uses one. :-)
> Is /var/lib/YaST2/ppd_db.ycp the one you're referring to?
Basically this is a leftover from old days when YaST also supported
old-style printing systems like LPRng/lpdfilter.
Currently it is merely the YaST version (a YaST YCP data structure)
of a PPD info cache what CUPS has as /var/cache/cups/ppds.dat
and from which CUPS can show the "lpinfo -l -m" output quite fast
(i.e. without parsing all PPDs under /usr/share/cups/model/ again).
> How is this generated?
> Where do you get the mapping information from?
The details can be answered by Michal Zugec.
Basically the mapping is based upon a tricky manufacturer and
model string comparison (don't expect they match exactly)
together with "recommended" info from the NickName in the PPDs
and the PPD's sub-directory and a lot of hope that it works well ;-)
> Fedora uses Foomatic
In the past (I don't know about the current state) Red Hat's printer
setup tool was not in compliance to CUPS because it worked based
upon Foomatic's XML-like database files.
What the tool did was to generate a PPDs file on the fly from
Foomatic's XML-like database files and then set up the queue
with this new generated PPD file.
If a printer setup tool uses whatever "private" data source,
the consequence is that this tool will set up different stuff
than all the other tools which work in compliance to CUPS like
"lpadmin", CUPS web frontend, YaST, KDE printer setup tool,
Gnome printer setup tool, HPLIP printer setup tool, ...
The only way a printer setup tool can work in compliance to CUPS
is to use the PPDs under /usr/share/cups/model/ and nothing else.
Strictly speaking even this is wrong since CUPS 1.2 because
CUPS 1.2 supports printer drivers which generate PPDs on the fly
(note the difference to printer setup tools which do this!)
so that since CUPS 1.2 the only way a printer setup tool can
work in compliance to CUPS 1.2 is to work based upon CUPS's
"lpinfo -l -m" output (or matching CUPS library calls).
Regarding "PPD generation on the fly" by printer drivers see
This means that as soon as there are printer drivers which generate
PPDs on the fly, YaST does no longer work in compliance to CUPS.
> There are tools other than Yast that need
> a printer to ppd/driver mapping.
Note the "device-id" field in "lpinfo -l -m" which matches
a 1284DeviceID entry in the PPD file and which should (hopefully)
match how the model shows up at the parallel port or USB.
> The parsing and fuzzy mapping of ppd
> files on the system is painful.
Unfortunately many PPDs don't have a 1284DeviceID entry
and therefore the painful fuzzy mapping via manufacturer
and model name is all what can be done for many models.
But it doesn't work too bad if the manufacturer shows
reasonable manufacturer and model name via USB
(which is e.g. often true for HP but often false for Epson).
> Printerdrake also uses automatic queue setup. See
As far as I know Till Kamppeter is involved in PrinterDrake
and Till knows very well how to work in compliance to CUPS
so that PrinterDrake might be a much better example how it
can be done than the Red Hat tool (which doesn't mean that
looking at Red Hat's tool is forbidden ;-)
> We might not even need a mapping for the uris.
If we find a way how to do it without such a mapping,
it would be great because actually such a mapping cannot work
because nobody can predict what whatever (third-party) backend
will use for its URIs.
> We can simply use the
> callout as a "a new printer was added" event. We can then use the list
> of printers detected by the backends and compare them to the list cups
> knows about to figure out which printers need to be setup.
You mean compare the "lpinfo -l -v" output to the
"grep DeviceURI /etc/cups/printers.conf" output?
With some tricks it might work (at least at the moment I don't see
why it cannot work at all - like the URI mapping).
Perhaps it is sufficient to filter the "lpinfo -l -v" output
to ignore those URIs which are only generic placeholders but
don't mean a real detected printer.
At least all URIs which are only the scheme (i.e. the backend name
and no colon and stuff after the colon) don't mean a real detected
printer (see "man backend").
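A rough sketch of that comparison in Python (illustrative only; it shells out to lpinfo and reads printers.conf, whose exact formats I am assuming here, and reading printers.conf typically requires root):

import re
import subprocess

# Sketch: list detected printers that have no CUPS queue yet by comparing
# "lpinfo -l -v" device URIs against DeviceURI lines in printers.conf.

def detected_uris():
    out = subprocess.run(["lpinfo", "-l", "-v"], capture_output=True,
                         text=True, check=True).stdout
    uris = set()
    for line in out.splitlines():
        m = re.search(r"uri\s*=\s*(\S+)", line)
        if m:
            uris.add(m.group(1))
    # Filter out generic placeholders: a URI that is only the scheme
    # (nothing after the colon) is not a real detected printer.
    return {u for u in uris if ":" in u and u.split(":", 1)[1]}

def configured_uris(path="/etc/cups/printers.conf"):
    with open(path) as f:
        return {line.split(None, 1)[1].strip()
                for line in f if line.startswith("DeviceURI")}

print(detected_uris() - configured_uris())  # printers still needing a queue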
SUSE LINUX Products GmbH, Maxfeldstrasse 5, 90409 Nuernberg, Germany
AG Nuernberg, HRB 16746, GF: Markus Rex
|
OPCFW_CODE
|
Apps include resources that can be specific to a particular culture. For example, an app can include culture-specific strings that are translated to the language of the current locale. It's a good practice to keep culture-specific resources separated from the rest of your app. Android resolves language- and culture-specific resources based on the system locale setting. You can provide support for different locales by using the resources directory in your Android project.
You can specify resources tailored to the culture of the people who use your app. You can provide any resource type that is appropriate for the language and culture of your users. For example, the following screenshot shows an app displaying string and drawable resources in the device's default (en_US) locale and in the Spanish (es_ES) locale.
If you created your project using the Android SDK Tools (read Creating an Android Project), the tools create a res/ directory in the top level of the project. Within this res/ directory are subdirectories for various resource types. There are also a few default files such as res/values/strings.xml, which holds your string values.
Create Locale Directories and Resource Files
To add support for more locales, create additional directories inside res/. Each directory's name should adhere to the following format:

<resource type>-b+<language code>[+<country code>]

For example, values-b+es/ contains string resources for locales with the language code es, while mipmap-b+es+ES/ contains icons for locales with the language code es and the country code ES.
Android loads the appropriate resources according to the locale settings of the device at runtime. For more information, see Providing Alternative Resources.
After you’ve decided on the locales to support, create the resource subdirectories and files. For example:
MyProject/
    res/
        values/
            strings.xml
        values-b+es/
            strings.xml
        mipmap/
            country_flag.png
        mipmap-b+es+ES/
            country_flag.png
For example, the following are some different resource files for different languages:

English strings (default locale), res/values/strings.xml:

<resources> <string name="hello_world">Hello World!</string> </resources>

Spanish strings (es locale), res/values-b+es/strings.xml:

<resources> <string name="hello_world">¡Hola Mundo!</string> </resources>

United States' flag icon (default locale): res/mipmap/country_flag.png

Spain's flag icon (es_ES locale): res/mipmap-b+es+ES/country_flag.png
Note: You can use the locale qualifier (or any configuration qualifier) on any resource type, such as if you want to provide localized versions of your bitmap drawable. For more information, see Localization.
Use the Resources in your App
You can reference the resources in your source code and other XML files using
In your source code, you can refer to a resource using the syntax
R.<resource type>.<resource name>. There are a variety
of methods that accept a resource this way.
// Get a string resource from your app's Resources
String hello = getResources().getString(R.string.hello_world);

// Or supply a string resource to a method that requires a string
TextView textView = new TextView(this);
textView.setText(R.string.hello_world);
In other XML files, you can refer to a resource with the syntax @<resource type>/<resource name> whenever the XML attribute accepts a compatible value. For example:
<ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:src="@mipmap/country_flag" />
|
OPCFW_CODE
|
I am very new to 3D Slicer, so my question might be very simple. Still, it would be really appreciated if you could help me with it.
I have DICOM data from a CT scan of a bone (say, a femur). I am wondering how I can do the following:
#1 I want a surface model of the bone to be created from only the outer surface (without the inside of the bone being considered). ONLY the outer surface.
#2 How can I export a 3D model of that bone, representing the actual shape of the bone? I don't care about the marrow inside the bone; just a solid model of the bone with the hollow inside will be great!
If the image quality is good enough then thresholding, smoothing, and inverted island removal may produce good enough results. See detailed description here: 2D/3D fill tool
If image quality is very low and/or bone density is low then you may need to use Grow from seeds effect (growing bone from seed regions), as described here: Bone segmentation to create 3D-printable STL
Exporting to 3D-printable STL file and many other tips are described in segmentation tutorials.
Thank you very much for your prompt reply! I will follow the instruction and information provided, and if I face any concerns, I will come back to you again.
I followed all the discussions you have had with others regarding creating a 3D model of a bone, generating a 3D mesh, and eventually importing into FEA.
I should, however, say that I am still a bit confused about the process for making the model. Actually, it is very hard to do, and it leads to inaccurate results. For example, the Segment Mesher extension generates inaccurate meshes. I need a 3D model of the long bone, but the Segment Mesher creates a block of meshes rather than meshes confined to the bone segment.
Can you please help me with this, or is there any tutorial video for this? As I concluded from the discussion in that specific thread, is this a limitation of 3D Slicer? In case I need to use other software such as Mimics (even though I would need to pay for it), please advise.
Development of a new segmentation workflow for a certain bone image, acquired with a specific imaging protocol, for a specific application always takes some time and practice. If the generated mesh surface looks good then the segmentation task has been completed successfully. You may post a screenshot of your segmentation if you are not sure whether the quality is good enough, and we may give advice on how to improve it.
By default a coarse mesh is generated so that you get a result quickly. Increase the --scale parameter value in the Advanced settings section to generate a finer-resolution mesh.
Cleaver generates a "background" mesh so that you can simulate embedding your structures in some material (soft tissue, etc.). If you don't need these mesh elements, then don't use them. Each cell is assigned to a segment, specified by the labels cell attribute. You can use this attribute to assign material properties, remove parts of the mesh, etc.
There is no video tutorial for Segment Mesher, but it would be greatly appreciated if you created one about your workflow, once you’ve figured out everything. You can of course post any specific questions here.
It is not. 3D Slicer is very capable.
Use any tool that works for you. The segmentation tools of Slicer are probably better than those of Mimics, but Mimics probably has more meshing options and direct exporters for FEA software. So, if you and all your users can afford to buy Mimics then of course give it a try. You may also try to export surface meshes (in STL, PLY, etc. format) and let your FEA software generate the volumetric mesh.
Your detailed and one-by-one replies are greatly appreciated.
By the information provided, it seems a bit more understandable what steps I should follow right now.
I will proceed and have a try and hopefully after generating a successful result, I will make a video of it and will share it with you for further training program.
As I have been working with CAD+FEA for a long time, it seems that 3D slicer does not generate a 3D geometry (I mean a format file like STP or STEP) to be just a 3D model (not meshes). If it was a 3D model it could be imported into CAD for further modification. Therefore as I understand from your comments, 3D slicer generates meshes on the parts (such as bone for my case), and from the meshes I will be able to start my simulation in FEA.
3D geometry, 3D model, mesh, part, and solid are used for completely different things in different domains (3D modeling, FEA, medical image computing), and even in different software, but your understanding is probably correct.
Slicer’s Segment Mesher extension creates a volumetric mesh consisting of tetrahedral and wedge elements, ready for FEA analysis.
Mesh editing of surface or volumetric meshes in CAD software is of course a very common need, too. However, most software cannot easily reverse engineer arbitrary meshes into editable representations - there are significant limitations, such as maximum number of points, complexity of the surface, quality of the mesh.
It would be great to hear from you if you were able to use the volumetric mesh that Segment Mesher extension generated.
Thanks again dear Andras. I will start again to generate the meshes from the comments you have mentioned so far, and I let you know then. Probably it takes a bit of time, since I would like to try everything with enough details and with importing the model into FEA and running some simulation.
I hope that I will be able to produce good results as I expected.
As you mentioned, I tried to generate the meshes on the long bone. Since my region of interest is ONLY the bone, I used only TetGen as the "Meshing method" that appears in the inputs of the Segment Mesher extension, because, as you highlighted, Cleaver generates a mesh for the background, which I am not interested in.
However, when I click on the TetGen and then apply, it comes to an error.
This is the step-by-step workflow I am doing:
- Loading the DICOM Files
- Volume Rendering for selecting the boundaries of ROI
- Crop Volume and then apply
- Segment Editor, add, selecting a threshold and apply, Using Scissors for removing some regions on the 3D view
- Segment Mesher Extension, Input segmentation: Segmentation, Meshing method: TetGen, Output Model: create a new model, and then apply
However, as I said it comes to an error. It says A self-intersection was detected. Program stopped.
Hint: use -d option to detect all self-intersections.
I have uploaded a screenshot of the segmentation. Can you please help me through this? Hopefully, once it comes to a successful model, I will create a tutorial video for future use on this forum.
Additionally, how am I able to remove/delete the background meshes? And do I need to use a label in the Editor module for labeling a bone (using the yellow one)?
TetGen: Unfortunately, TetGen is known not to be very robust. You may try to adjust meshing parameters.
Cleaver2: The background mesh should not be a problem. Each cell contains the ID of the segment, so you can use the vtkThreshold filter to extract the cells corresponding to a specific material (see this example).
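For reference, a minimal sketch of that vtkThreshold approach in Slicer's Python interactor (the volumetricMesh variable and the segment label value 1 are hypothetical):
import vtk

threshold = vtk.vtkThreshold()
threshold.SetInputData(volumetricMesh)  # vtkUnstructuredGrid produced by Segment Mesher
# Select cells by the "labels" cell attribute mentioned above
threshold.SetInputArrayToProcess(0, 0, 0,
    vtk.vtkDataObject.FIELD_ASSOCIATION_CELLS, "labels")
threshold.ThresholdBetween(1, 1)  # keep only cells whose label equals 1
threshold.Update()
boneOnlyMesh = threshold.GetOutput()  # background cells removed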
After a great deal of time, significant effort, and several rounds of trial and error, I have finally generated a 3D model from the DICOM files which can be imported into any CAD software, as I eventually end up with an STP file format. This facilitates importing the 3D model into any sort of FEA package as well. I would like to mention that it requires using other software such as SpaceClaim alongside 3D Slicer. I am going to make a video of it and share it with you and your team, since this has been asked numerous times and it might add value to the forum.
Sounds great, thank you! Looking forward to seeing your tutorial.
Thank you Saeed for the effort! The video tutorial will indeed be much appreciated!
Looking forward to your excellent video!
Where can I find this video ?
|
OPCFW_CODE
|
#pragma once

#include "common.h"
#include "ecs.h"
#include "systems/renderable.h"

#include <cmath> // std::fabs

enum AnimationType { SCALE, ALPHA, ROTATE };

struct Animation {
  AnimationType animationType;
  float startValue;
  float endValue;
  float rate = 0.01f;
  bool reverses;
  bool reversed = false;
  bool repeats;
};

struct CAnimated : public Component<CAnimated> {
  Array<Animation> animations;
};

// TODO: Animation curve; this animation system is currently restricted to
// linear animations.
class SAnimation : public System
{
public:
  SAnimation() : System()
  {
    System::addComponentType(CAnimated::id);
    System::addComponentType(CSprite::id);
  }

  virtual void updateComponents(float delta, BaseComponent** components)
  {
    CAnimated* animated = (CAnimated*)components[0];
    CSprite* sprite = (CSprite*)components[1];

    for (int i = 0; i < animated->animations.size(); i++) {
      // Work on a copy; the (possibly mutated) animation is written back below.
      Animation animation = animated->animations[i];

      switch (animation.animationType) {
      // Scale animation: lerp the scale towards endValue while keeping the
      // sprite centered on the same point.
      case SCALE: {
        const float start = animation.startValue;
        const float centerX = sprite->getCenterX(), centerY = sprite->getCenterY();
        const float newScale =
            lerp(sprite->getScale(), animation.endValue, animation.rate);
        sprite->setScale(newScale);
        sprite->setPosition(centerX - (sprite->getWidth() / 2),
                            centerY - (sprite->getHeight() / 2));
        if (std::fabs(sprite->getScale() - animation.endValue) < 0.01f) {
          if (animation.reverses && !animation.reversed) {
            // Swap direction: animate back towards the original start value.
            animation.reversed = true;
            animation.startValue = animation.endValue;
            animation.endValue = start;
          } else if (animation.repeats) {
            if (animation.reversed) {
              sprite->setScale(animation.endValue);
              animation.reversed = false;
            } else {
              // Snap back to the start value, re-centering the sprite.
              const float cX = sprite->getCenterX(), cY = sprite->getCenterY();
              sprite->setScale(animation.startValue);
              sprite->setPosition(cX - (sprite->getWidth() / 2),
                                  cY - (sprite->getHeight() / 2));
            }
          }
        }
      } break;

      // Alpha (fade) animation.
      case ALPHA: {
        const float newAlpha =
            lerp(sprite->alpha, animation.endValue, animation.rate);
        sprite->alpha = newAlpha;
        if (std::fabs(sprite->alpha - animation.endValue) < 0.01f) {
          if (animation.reverses && !animation.reversed) {
            animation.reversed = true;
            const float start = animation.startValue;
            animation.startValue = animation.endValue;
            animation.endValue = start;
          } else if (animation.repeats)
            sprite->alpha = animation.startValue;
        }
      } break;

      // Rotation animation.
      case ROTATE: {
        const float newAngle =
            lerp(sprite->spriteData.angle, animation.endValue, animation.rate);
        sprite->spriteData.angle = newAngle;
        if (std::fabs(sprite->spriteData.angle - animation.endValue) < 0.01f) {
          if (animation.reverses && !animation.reversed) {
            animation.reversed = true;
            const float start = animation.startValue;
            animation.startValue = animation.endValue;
            animation.endValue = start;
          } else if (animation.repeats)
            sprite->spriteData.angle = animation.startValue;
        }
      } break;
      }

      // Persist any state changes (direction swaps etc.) back to the component.
      animated->animations[i] = animation;
    }
  }
};
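A hypothetical usage sketch (the component creation and Array APIs are assumed from the ECS headers above; a push_back-style insert is assumed for Array):
// Build a repeating "pulse" scale animation for a sprite entity.
CAnimated animated;

Animation pulse;
pulse.animationType = SCALE;
pulse.startValue = 1.0f;   // normal size
pulse.endValue = 1.25f;    // grow 25% larger...
pulse.rate = 0.05f;        // ...moving 5% of the remaining distance per update
pulse.reverses = true;     // shrink back down afterwards
pulse.repeats = true;      // and keep pulsing forever
animated.animations.push_back(pulse);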
|
STACK_EDU
|
Changing the background colour of a bchart
How can I change the background colour of a bchart?
In this case the bars appear in a shade of blue (I guess it's the default).
\documentclass[10pt,a4paper,roman]{article}
\usepackage{bchart}
\usepackage{graphicx}
\begin{document}
\begin{bchart}[step=100,max=200]
\bcbar[text=Bule,value=19419]{194.19}
\bcbar[text=Inserție,value=19513]{195.13}
\bcbar[text=Shell,value=19513]{195.13}
\bcbar[text=Interclasare,value=19517]{195.17}
\bcbar[text=Rapidă,value=19525]{195.25}
\bcbar[text=Selecție,value=19537]{195.37}
\end{bchart}
\end{document}
You can use the color option for \bcbar as in the following example. But you should consider using a different approach for data visualization for better distinction between the entries.
\documentclass[10pt,a4paper,roman]{article}
\usepackage{bchart}
\usepackage{graphicx}
\begin{document}
\begin{bchart}[step=100,max=200]
\bcbar[text=Bule,value=19419,color=green]{194.19}
\bcbar[text=Inserție,value=19513,color=blue]{195.13}
\bcbar[text=Shell,value=19513,color=yellow]{195.13}
\bcbar[text=Interclasare,value=19517,color=red]{195.17}
\bcbar[text=Rapidă,value=19525,color=pink]{195.25}
\bcbar[text=Selecție,value=19537,color=orange]{195.37}
\end{bchart}
\end{document}
No criticism of the answer, which is perfect (+1), but I cannot resist an off-topic comment about the result: the graph is now a rainbow tablecloth, as unclear as before. The problem in this graph (of course, IMO; the OP has the right to think differently) is not the colour but the x-scale. Hiding the decimal values does not help either. Instead, something like \begin{bchart}[step=1,min=193,max=196] \bcbar[text=Bule]{194.19} ... seems to me clearer and more elegant; see the full example below.
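For completeness, the commenter's suggestion as a full document (an untested sketch; it assumes bchart's min key behaves like its max key):
\documentclass[10pt,a4paper,roman]{article}
\usepackage{bchart}
\begin{document}
\begin{bchart}[step=1,min=193,max=196]
\bcbar[text=Bule]{194.19}
\bcbar[text=Inserție]{195.13}
\bcbar[text=Shell]{195.13}
\bcbar[text=Interclasare]{195.17}
\bcbar[text=Rapidă]{195.25}
\bcbar[text=Selecție]{195.37}
\end{bchart}
\end{document}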
|
STACK_EXCHANGE
|
What a pull requester should do to revert bad commits
I have a pull request to review, and I see that some files are not needed (e.g. .tmp files); there is also one file pushed that should not have been edited.
Which Git operations should the pull requester perform to fix these problems?
From my perspective I see a branch with one commit, so I cannot separate the bad commits from the good ones. What if the pull requester also doesn't have local commits? What if he does?
Ask the developer to fix the problems, and push the changes to his/her PR branch.
@BryceDrew: I'd argue that the question is more about Git than anything else anyway, since it involves the pull requester submitting fixes to their PR without the intervention of GitHub.
@Makoto Pull Request is not a feature of git, right?
I can change "pull request" to "branch" and it doesn't change anything, because fixing this requires Git operations or a manual fix.
@BryceDrew: It isn't, but the actual fix involves more Git than GitHub. Besides, for all you know it could be some other service which offers pull requests which isn't necessarily GitHub related (although in all reality it likely is).
@Makoto I understand now what you mean. For me there is a distinct difference in methodology between a Branch I need to edit and a Pull request to Review.
There's more than one way to do it, and the right way will depend on your contribution policy.
The pull requester (user submitting the PR) can fix the PR in any number of ways. Remember that the important part of a PR is the aggregate change created by all the commits in the PR. If a line is added in one commit and removed in another, the aggregate change won't show that line at all.
So the question is, how do you want this PR to show up in your change history? Some projects like to have PRs "squashed" to a single commit which is the aggregate changeset (the pull requester can do this using an interactive rebase: git rebase -i HEAD~n where n is the number of commits in the PR branch). Since you're only seeing one commit, I'm guessing this is how your project operates. Others consider interactive rebasing and squashing to be "rewriting history" and believe in merging the set of changes even if they revert each other.
If you want a clean history, and it looks like you do, you can ask the pull requester to clean up their branch. They can do this either by adding a commit fixing the problems and then squashing the branch, or by making changes in their working tree and updating their already-squashed commit. They can then force-push their branch, which will have the effect of updating the PR with the updated, "cleaner" branch.
But there's more than one way to do it, so this is an answer, not the answer.
From my perspective I want a squashed commit. But the problem here is how to remove an unneeded file from the branch (a file that doesn't exist on master) and how to remove an unneeded change to a file that does exist on master.
Once you squash the developer changes into a single commit it is quite easy to amend this commit by removing the unneeded files and using git commit --amend.
Yes. So the pull requester should make the changes in their branch which resolve the problem, commit it, re-squash the branch, and force push.
Or amend the squashed commit, as @DarinDimitrov says.
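Putting the whole thread together, a minimal sketch of the sequence for the pull requester (file names and the commit count are placeholders):
git rebase -i HEAD~3                       # squash the PR's commits into one
git rm --cached build/output.tmp           # drop a file that shouldn't be in the branch
git checkout origin/master -- config.ini   # undo the unwanted edit to an existing file
git commit --amend                         # fold the fixes into the squashed commit
git push --force-with-lease                # update the PR branch (and thus the PR)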
|
STACK_EXCHANGE
|
FundApps API Spec & Examples
We provide a RESTful HTTPS API for automated interfaces between your systems and our service. Our API uses predictable, resource-oriented URIs to make methods available and HTTP response codes to indicate errors. These built-in HTTP features, like HTTP authentication and HTTP verbs, are part of the standards underpinning the modern web and can be understood by off-the-shelf HTTP clients.
Our API methods return machine readable responses in XML format, including error conditions.
If your Rapptr installation is available at https://%company%.fundapps.co/, the URI from which your API is available is https://%company%-api.fundapps.co/. Similarly, the staging API is available at https://%company%-staging-api.fundapps.co/. All requests made to our API must be over HTTPS.
You authenticate to the Rapptr API via Basic Authentication over HTTPS. A Rapptr administrator from your organisation must create a user with the role "API" for this purpose. You must authenticate for all requests. Note: please ensure you create a separate user for the API; if you use an existing user's account, the API upload will fail as soon as they change their password.
A number of methods are available, depending on the kind of data being uploaded. Typically customers send us a position file once or more a day; other uploads are optional and depend on business requirements.
All of our API methods expect your upload file to be sent as the body of the request; our example implementations show how to achieve this with commonly used HTTP libraries.
Upload Daily Positions. This method expects to receive data in XML format (example XML position files); large files may be zipped. The response includes a link which, when polled, allows monitoring of the progress of processing the file.
(Request Headers)
POST https://%company%-api.fundapps.co/v1/expost/check HTTP/1.1
Content-Type: "application/xml"

(Response)
<links>
  <result>/v1/ExPost/Result/fe633307-f196-4609-abfe-a1fc0111e875</result>
</links>
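As an illustration, a minimal upload client using Python's requests library (the company name, credentials, and file name are placeholders):
import requests

# Hypothetical upload of a daily positions file over Basic Authentication.
with open("positions.xml", "rb") as f:
    resp = requests.post(
        "https://acme-api.fundapps.co/v1/expost/check",
        data=f,
        headers={"Content-Type": "application/xml",
                 "X-ContentName": "positions.xml"},
        auth=("api-user", "api-password"),
    )
resp.raise_for_status()
print(resp.text)  # <links><result>...</result></links> with the polling URI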
Position File XSD
We make an XSD schema available for the position upload XML format; this may be retrieved from the
GET /v1/expost/xsd API endpoint on your Rapptr instance. If you don't have access to an instance yet and would like access to an XSD file, please contact support.
Check the progress of the rule processing on a position upload. As noted above, when uploading a position file the specific URI for checking progress is provided; the unique ID of the job is included in the URI.
This endpoint returns a
202 Accepted HTTP status whilst the check is in progress and a
200 OK status when the check is complete. The progress of validation and rule execution is reported separately in the response.
|ValidationState|RuleState|Meaning|
|---|---|---|
|Unknown|Unknown|Job just received; not processed yet|
|InProgress|Pending|Validation in progress|
|Passed|InProgress|Rule execution in progress|
|Failed|NotRun|Validation failed; rule processing cancelled|
|Passed|Failed|Rule execution failed|
|Passed|Passed|Rule execution successful|
When the rule execution is completed successfully, an additional 'Summary' element is provided in the response. This aims to provide the same information as the email notification sent by Rapptr when a positions file finishes processing.
The Summary element is comprised of:
- The total number of alerts by type - i.e. Breach, Unknown, etc.
- The number of new alerts by type (since the day before).
(Request)
GET https://%company%-api.fundapps.co/v1/ExPost/Result/fe633307-f196-4609-abfe-a1fc0111e875 HTTP/1.1
Content-Type: application/xml

(Response Headers, Rules running)
HTTP/1.1 202 Accepted
Content-Type: application/xml

(Response Content, Rules running)
<?xml version="1.0" encoding="utf-8"?>
<ResultsSnapshot ValidationState="Passed" RuleState="InProgress" />

(Response Headers, Validation failed)
HTTP/1.1 200 OK
Content-Type: application/xml

(Response Content, Validation failed)
<?xml version="1.0" encoding="utf-8"?>
<ResultSnapshot ValidationState="Failed" RuleState="NotRun" />

(Response Content, File processed successfully)
<?xml version="1.0" encoding="utf-8"?>
<ResultsSnapshot ValidationState="Passed" RuleState="Passed" Status="Okay" PipelineStage="Finished" Duration="00:01:28.7030000">
  <Summary DataDate="2015-07-20">
    <Breach Total="1" New="1" />
    <Disclosure Total="18" New="18" />
    <Unknown Total="2" New="2" />
    <Warning Total="4" New="4" />
    <OK Total="399" New="399" />
  </Summary>
</ResultsSnapshot>
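A polling loop built on those status codes might look like this (a sketch; the host and credentials are placeholders, and the result URI comes from the upload response):
import time
import requests

result_uri = ("https://acme-api.fundapps.co"
              "/v1/ExPost/Result/fe633307-f196-4609-abfe-a1fc0111e875")
while True:
    resp = requests.get(result_uri, auth=("api-user", "api-password"))
    if resp.status_code == 200:  # check complete (validation failed, or rules finished)
        break
    time.sleep(30)               # 202 Accepted: still in progress
print(resp.text)                 # ResultsSnapshot XML, including the Summary on success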
POST /v1/portfolios/import (Optional)
Upload Portfolio data, if your portfolio structure changes frequently you may wish to refresh this at an appropriate frequency. Expects CSV - example file.
POST /v1/transactions/import (Optional)
Upload Transaction data. Expects CSV - example file.
When sending data to the API we expect certain content types to be set on your request e.g.
POST https://%company%-api.fundapps.co/v1/expost/check HTTP/1.1
Content-Type: "application/xml"
These Content-Type values are as follows:
When uploading data to the API, it is stored and later displayed in Rapptr using a default file name. To specify a different file name, populate the "X-ContentName" header, e.g.
POST https://%company%-api.fundapps.co/v1/expost/check HTTP/1.1
Content-Type: "application/xml"
X-ContentName: "positions-monday.xml"
|Type|Format|Example|
|---|---|---|
|Boolean|Must be the word true or false (case sensitive)|true|
|Date|Must be in "YYYY-MM-DD" format (ISO 8601)|2015-12-31|
|Decimal(Precision,Scale)|Must use "." as decimal separator. Group (thousand) separators are not allowed; exponential formatting is not allowed. Up to 21 decimal places are supported.|123444.227566|
|Integer|Whole number (positive or negative). Group (thousand) separators are not allowed; exponential formatting is not allowed.|19944|
|String|A sequence of characters. When using CSV format, must not include commas (","). All strings are case-INSENSITIVE in Rapptr (except Ratings).|Nokia|
|List|Comma-separated string|XNYC,XLON|
Max File Size
We have a maximum allowed file size of approximately 100MB. If your file exceeds this size we suggest zipping it.
We provide a number of example implementations against our API using commonly available programming languages and libraries in this repository.
|
OPCFW_CODE
|
Frequently asked questions
Toledo Chess 1
Where is the <setjmp.h>?
It is included with every ANSI/ISO C compiler.
I've got the same response every time; what about randomness?
Add srand(time(0)); just before Z H<130.
How can I make the computer play the white pieces?
Change 1<L&e to 1<L&!e.
How long did it take you to program it?
6 weeks from February 2005 to March 2005.
How did you do the knight figure?
Hand formatting. 8)
Why is it named Toledo Chess?
That was casual; it simply had no name. The judges at the
IOCCC named the source after my last name, and people simply started
calling it Toledo Chess. It is not related to Toledo, Spain or Toledo, Ohio.
Can I compile it with my old 16-bit Turbo C?
You need to make a few changes. First open the source code
in Windows Wordpad (important), and insert this as the first line:
Then replace all occurrences of 1e5 with
2e4, and 1e9 with 3e4; now save the source and
open it in Turbo C.
Now you can run it as two-player; inside the IDE use the menu
Options-Arguments and put one argument (for example, one letter) to play against
the computer.
These instructions are also useful for the knight's tour
solver; for the other programs on this site, choose a 32-bit compiler!
Can I compile it as C++?
The C++ language is more strict, but fortunately it is
possible. It is necessary to include the C standard libraries (we will not be
using streams) and to define main in an acceptable form. Copy the
original source code to a file with a cpp extension (for example,
toledo.cpp) and then add this at the beginning:
int X(int, int, int, int);
int main(int argc, char **argv)
{
    X((int)argc, 0, 0, 0);
    return 0;
}
#define main X
Now you can compile it with your C++ compiler. A good
programming exercise would be to change the C style of input and output
(getchar/putchar) to the C++ style (e.g., cin/cout).
Why does my compiler give me a warning about printf?
The compiler "knows" that printf has the prototype
int printf(const char *, ...), but as the program doesn't include
<stdio.h>, it treats it as int printf(...); this
creates the warning. If you want to remove the warning, add this at the start of
the source code:
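(Presumably the declaration quoted above, terminated as a statement:)
int printf(const char *, ...);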
Why doesn't it work with XBoard under *NIX/Mac OS X?
The XBoard protocol differs slightly from the Winboard
protocol. It is necessary to add the following at the start of the Toledo
Nanochess Winboard source code:
Then add the following code just before setbuf:
It should work fine now.
Why are the packaged executables so big?
The executables are statically linked, integrating the
needed libraries; this avoids a large, unnecessary Windows DLL and reduces the
zip-file size. If you do your own compilation, you can get a size somewhere
around 10 KB, but you will need to tune compiler optimization carefully. :)
What is the license?
It is free for non-commercial use; this includes all
the forks, even those under the GPL, as legally I'm the original author.
Please cite my name and website in any derivative source
code or webpage where it is used. I would be glad to receive an e-mail telling
me where you used it.
If you want to do commercial usage, just write me and we
can reach a reasonable agreement.
Could you release your unobfuscated source code?
Currently I don't have plans to release unobfuscated source
code; besides, it would remove a lot of the thrill and learning of
unobfuscating it by yourself.
Could you release your retro-games source code?
I don't have plans to release the source code, mostly
because that would affect cartridge sales. But when the cartridges sell out,
I'd be open to interesting offers :)
Could you release your commercial retro-games as ROM files?
Similar to the previous question. When the cartridges sell out,
I'd be open to the idea if someone comes along with a good price
and enough people are interested.
What does 'biyubi' mean?
It is a Zapotec word that means "search until you find it"; it
should be pronounced bee-you-bee.
Last modified: Jun/10/2013
|
OPCFW_CODE
|
Using .NET you may think that determining which permissions are assigned to a directory/file should be quite easy, as there is a FileSystemRights Enum defined that seems to contain every possible permission that a file/directory can have and calling AccessRule.FileSystemRights returns a combination of these values. However, you will soon come across some permissions where the value in this property does not match any of the values in the FileSystemRights Enum.
The end result of this is that for some files/directories you cannot determine which permissions are assigned to them using this FileSystemRights enum alone. If you do AccessRule.FileSystemRights.ToString then for these values all you see is a number rather than a description (e.g. Modify, Delete, FullControl etc). Common numbers you might see are:
-1610612736, -536805376, and 268435456
To work out what these permissions actually are, you need to look at which bits are set when you treat that number as 32 separate bits rather than as an Integer (as Integers are 32 bits long), and compare them to this diagram: http://msdn.microsoft.com/en-us/library/aa374896(v=vs.85).aspx
So for example, -1610612736 has the first bit and the third bit set, which means it is GENERIC_READ combined with GENERIC_EXECUTE. So now you can convert these generic permissions into the specific file system permissions that they correspond to.
You can see which permissions each generic permission maps to here: http://msdn.microsoft.com/en-us/library/aa364399.aspx. Just be aware that STANDARD_RIGHTS_READ, STANDARD_RIGHTS_EXECUTE and STANDARD_RIGHTS_WRITE are all the same thing (no idea why, seems strange to me) and actually all equal the FileSystemRights.ReadPermissions value.
So for example, GENERIC_READ (aka FILE_GENERIC_READ) maps to the following file system permissions: ReadAttributes + ReadData + ReadExtendedAttributes + ReadPermissions + Synchronize.
So I made the following two Enums to help keep track of this and help with converting these generic permissions to specific file permissions:
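The original enums were not preserved in this copy; a reconstruction along the lines the post describes might look like this (the names are illustrative; the values are the documented Win32 generic-rights bits and mappings, and System.Security.AccessControl is assumed to be imported):
<Flags()>
Public Enum GenericRights As Long
    GENERIC_READ = &H80000000L
    GENERIC_WRITE = &H40000000L
    GENERIC_EXECUTE = &H20000000L
    GENERIC_ALL = &H10000000L
End Enum

<Flags()>
Public Enum MappedGenericRights
    FILE_GENERIC_READ = FileSystemRights.ReadAttributes Or FileSystemRights.ReadData Or
                        FileSystemRights.ReadExtendedAttributes Or
                        FileSystemRights.ReadPermissions Or FileSystemRights.Synchronize
    FILE_GENERIC_WRITE = FileSystemRights.WriteAttributes Or FileSystemRights.WriteData Or
                         FileSystemRights.AppendData Or
                         FileSystemRights.WriteExtendedAttributes Or
                         FileSystemRights.ReadPermissions Or FileSystemRights.Synchronize
    FILE_GENERIC_EXECUTE = FileSystemRights.ExecuteFile Or FileSystemRights.ReadAttributes Or
                           FileSystemRights.ReadPermissions Or FileSystemRights.Synchronize
    FILE_GENERIC_ALL = FileSystemRights.FullControl
End Enum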
If you are not familiar with the bitwise operators in .NET then the second enum may seem a bit weird to you. As the FileSystemRights enum has the <Flags> attribute it means we can use the “OR” operator to combine multiple values together. So confusingly the OR operator here kind of means AND (or +) really, but bitwise operations are a whole other topic that there are plenty of articles on the web on. If you are not familiar with them though then I would recommend reading up on it as understanding this comes in handy quite often, like it has done here with this permissions problem.
So now if we wanted to see what our “unrecognised” permissions actually are, we can use a function like this:
This function takes the original FileSystemRights value (for example –1610612736) and checks to see if any of the generic bits are set. If they are then it sets the relevant bits for the specific permissions that this generic permission maps to and returns a new FileSystemRights value that just contains these specific permissions. So we can then use this function like so:
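The function itself was not preserved here either; a reconstruction consistent with that description, assuming the enums above:
Public Function MapGenericRightsToFileSystemRights(
        originalRights As FileSystemRights) As FileSystemRights

    ' Work with the raw 32 bits so the high (generic) bits can be tested
    ' without sign issues.
    Dim raw As Long = CLng(originalRights) And &HFFFFFFFFL
    Dim mapped As FileSystemRights = 0

    If (raw And GenericRights.GENERIC_READ) <> 0 Then
        mapped = mapped Or CType(MappedGenericRights.FILE_GENERIC_READ, FileSystemRights)
    End If
    If (raw And GenericRights.GENERIC_WRITE) <> 0 Then
        mapped = mapped Or CType(MappedGenericRights.FILE_GENERIC_WRITE, FileSystemRights)
    End If
    If (raw And GenericRights.GENERIC_EXECUTE) <> 0 Then
        mapped = mapped Or CType(MappedGenericRights.FILE_GENERIC_EXECUTE, FileSystemRights)
    End If
    If (raw And GenericRights.GENERIC_ALL) <> 0 Then
        mapped = mapped Or CType(MappedGenericRights.FILE_GENERIC_ALL, FileSystemRights)
    End If

    Return mapped
End Function

' Usage:
' Dim rights As FileSystemRights = MapGenericRightsToFileSystemRights(rule.FileSystemRights)
' Console.WriteLine(rights.ToString()) ' e.g. "ReadAndExecute, Synchronize"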
Now instead of seeing a value of -1610612736 we see the permissions it actually maps to, which in this example are: ReadAndExecute + Synchronize
Hope it helps others out, as it took me a while to figure out! 🙂 I needed to get this working for my NTFS Permissions Reporter app, which you can find more information on here if you are interested.
|
OPCFW_CODE
|
Warning: This document describes an old release. Check here for the current version.
A virtual workspace is an abstraction of an execution environment -- in this VM-based service, a workspace is defined in terms of VM images and the information necessary to instantiate them in a controlled manner.
The workspace metadata type contains information necessary for workspace deployment. This information is deployment-independent; once defined, a workspace may be deployed many times.
This section is divided into three parts:
A VirtualWorkspace element consists of a name, definition section, and logistics section.
Specifying metadata about your own VM should be as simple as taking the sample workspace metadata file and adjusting some of the fields (see the metadata quickstart).
The workspace name is a URI that could be used to obtain information such as its provenance, creation and modification times, or detailed software catalog (i.e., the deployment capability of a workspace, which defines what kinds of applications the workspace can support). This information is not directly relevant to deployment but would be very useful for clients.
The definition section of the workspace description contains static information that cannot change during deployment. This includes: (1) requirements (kernel images or versions, kernel parameters, and required CPU architecture), (2) references to particular image partitions, and (3) what device names for these partition files the OS is expecting.
The definition section consists of two sections, requirements and diskCollection:
The requirements section lists the CPU architecture, VMM name and version requirements, and kernel requirements. All of its elements are currently respected (though for the kernel element, currently only the parameters section is).
The workspace service does not support client specification of kernels until support for multiple image transfers is added. In this version the site should configure a default kernel to use for the VMs or supply a list of (manually cached) kernel names.
While multiple disk partitions are supported (including on-the-fly creation of blankspace partitions), currently only one file may be used with the propagation mechanisms. The others must be files that have been cached already (except for blankspace partitions which are created on-the-fly on the hypervisor nodes).
There is no support for fine grain authorization policies about the file being used for partitions unless you configure the service to use the creation time authorization callout (see the plugins page).
The disk location is listed as a URI and bound to a device name ("/dev/hda") or partition ("/dev/hda1"), for an example see the sample workspace.
The logistics section of the workspace description contains information that is typically bound at deployment time only (late binding). In the current version, this is networking requirements. The workspace's networking settings can be defined by the deployer/broker, by the Workspace Service itself, or by the workspace service's interactions with other services coordinating networking settings, and as such are considered late binding. In this version only the factory service client can specify the logistics information.
Note: Logistics are different from values in the deployment type. They may change per deployment but may also persist beyond a single deployment. Therefore they are tied to the definition but are not strictly what defines the particular workspace. For example, if a workspace uses DHCP when it is booted but is then paused and migrated, the specific networking information must be part of its definition in order to unpause it successfully on a new subnet.
A VM can have an arbitrary number of virtual network interfaces that are mapped to physical hardware in different ways. Broadly, there are two types of networking configurations to manage: how the network interfaces inside the VM need to be configured and how these interfaces need to be bridged and managed outside the VM.
The name of a NIC is a logical name used to refer to it, for example from the deployment type's bandwidth section. The MAC address can optionally be specified (it will otherwise be generated).
An association string can be configured to give more information that may be necessary, such as an IP address pool label. Keywords could be correlated with different classes of connectivity or different links and subnets. Alternatively it can be used to express inter-VM connectivity requirements.
As a simple example, consider a physical node with two physical NICs that is configured with the associations "public" and "private." Each of these keywords maps to an in-memory bridge connected to one of the NICs; one NIC has Internet access and the other has private LAN access. The metadata does not specify the name of the physical bridge the VM needs to be bridged to (since that is hypervisor-node specific); it maps to a class of connectivity instead.
The ipConfig section specifies how the VM should be internally configured. Just like the L2 settings (MAC, mode, and association), the IP settings may be decided at deploy time. Thus it is necessary to support a wide range of L3 options for the VM's NIC(s) and be able to pass this information to the VM.
There are three acquisitionMethods the NIC can use for its settings: AcceptAndConfigure, Advisory, and AllocateAndConfigure. AcceptAndConfigure and AllocateAndConfigure signal to the workspace service that it is responsible for configuring the VM itself with specific settings (so that during its boot process the virtual hardware is configured with the specified settings) and Advisory signals that the VM should just be configured with the appropriate hardware and connectivity settings (some other entity will configure the networking settings inside the VM).
AcceptAndConfigure means the client must have the exact NIC configuration requested in the deployment request. AllocateAndConfigure means that the client is requesting an address from a pool of available addresses. See the text on associations above. For more information about address pools, see the administrator's guide.
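Putting these pieces together, a purely illustrative sketch of a logistics section (the element names and casing here are guesses assembled from the terms above; consult the sample workspace metadata file for the real schema):
<logistics>
  <networking>
    <nic>
      <name>eth0</name>                        <!-- logical NIC name -->
      <association>public</association>        <!-- class of connectivity, e.g. an IP pool label -->
      <ipConfig>
        <acquisitionMethod>AllocateAndConfigure</acquisitionMethod>
      </ipConfig>
    </nic>
  </networking>
</logistics>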
This version of the workspace service supports DHCP for the AcceptAndConfigure and AllocateAndConfigure acquisition methods; both require getting specific settings "into" the workspace dynamically. The workspace backend program employs DHCP, intercepting DHCP requests coming from the VM's networking interfaces and assigning the specific settings to them. Note that this includes assigning specific, client-requested IP addresses (DHCP can be made to give specific responses to a request). For more information on setting up DHCP and how it is implemented in such a way that it does not interfere with a site's DHCP server, see the administrator's guide.
|
OPCFW_CODE
|
|Disch, I started with that site (as you might have read above)|
Ah, whoops. No I missed that. Sorry.
|And I could not get the projects to build correctly. Something about setting up the environment REALLY confused me.|
OpenGL is weird because it (AFAIK) doesn't have an official SDK. So the easiest way to approach it is to use things like GLEW as he does in that tutorial. It is not as straightforward as it could be, but I haven't found any tutorial that makes it simpler than he does. All of the required files are provided via a link in the tut:
You just run premake as he describes, and it spits out project files for <insert your IDE here>. Then you just load up those project files and build them like they were a normal project and you're good to go.
| And I cant even build the first tutorial. (I cant set up my IDE to link to the libraries)|
Are you using premake? Just grabbing the source files and trying to compile them will be more of a hassle than it's worth. You really need to auto-gen the project files.
|Doesn't [NeHe] teach me methods that have been removed from the newer versions of OpenGL?|
Yes. Which is why I recommend against it.
|You think its better to jump directly into OpenGL rather than learn SDL/SFML/Allegro? |
I'm an advocate of "learn how to do what you want to do". If what you want to do is 3D... then learning 2D is an unnecessary step towards that goal. While coding 2D first might "soften the blow" a bit... coding 2D and 3D graphics are very different both conceptually and practically.
2D can be accomplished with simple "draw this rectangle here" calls... which is basically how SDL/SFML/Allegro abstract it for you.
Whereas 3D requires a broader understanding of the rendering pipeline, 3D space, shaders, etc. It's hardly ever as simple as "draw this triangle here".
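To make that contrast concrete, here is a minimal SFML 2 sketch of the "draw this rectangle here" style of 2D (assuming SFML is installed and linked):
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Rectangle");
    sf::RectangleShape rect(sf::Vector2f(100.f, 50.f));
    rect.setPosition(200.f, 150.f);
    rect.setFillColor(sf::Color::Green);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.draw(rect);   // "draw this rectangle here"
        window.display();
    }
}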
What I do, personally... is I use SFML for managing the window, doing audio, and capturing input... then use OpenGL to do the actual graphics rendering. So in that way... starting with something like SFML might work out, because it'll introduce you to all the non-graphics parts of game programming... and then when you feel comfortable you can "re-learn" the graphics programming part.
|is it really necessary to do get the premake thing, the SDK and use those?|
The SDK: Absolutely. You can't use a lib without an SDK. At least not reasonably.
Premake: Only if you want to compile the tutorial example programs. I certainly would recommend it... but if you just want to see the source without interacting with the generated programs, then you don't need premake.
|I've not started 3D programming yet but I'm thinking about starting it, this all seems like a little much. |
Any library is going to require downloading/installing its SDK. That is very typical.
Most tutorials are going to make you jump through a hoop or two to build their example programs.
So neither of these are atypical. You should probably expect this.
The reason he uses premake is so that:
1) He doesn't have to distribute binaries (which would be platform specific and would require people to download and run misc executables, which might be malware, etc)
2) The code will work for any platform/compiler. Distributing just VS project files means only people with VS will be able to compile the code. Distributing with makefiles means only people with gcc+make will be able to compile the code. Distributing with C::B project files means [etc etc].
By distributing premake files, you can build with any IDE that premake supports. Which is virtually all of them.
|I've heard of SFML, Allegro, and SDL on this site. I'm just curious which one is most similar to OpenGL.|
None of them are at all similar to OpenGL. They use OpenGL as their framework, but they all abstract it beyond recognition.
Do not expect to gain any insight on OpenGL by using any of those libs. You will not.
|If you have success with that website, meaning you can get the tutorials built, could you give me a hand on the set up part?|
I've had success and don't mind helping you through it. Give me a sec to install C::B again and see if I can't make a step-by-step thing for you.
|
OPCFW_CODE
|
XQA kernel works slower with fp8 kv than with fp16 kv on H100
System Info
Hi!
I'm running a speculative decoding TRT-LLM engine with generation length 4 or 5, and I noticed that fp8 kv cache attention works slower than fp16 kv cache attention. It would be great to improve fp8 kv cache performance.
I run it on 1x H100 SXM, Llama 3.1 8B model, fp8 weights/activations, and either fp16 or fp8 key-value cache, with batch size 64. For fp16 kv cache I observe close enough performance for no drafts/1 draft/3 drafts/4 drafts (~60/60/70/80 microseconds per kernel execution). For fp8 I see a significant drop from no drafts to 1 draft, and then from 3 to 4 drafts (40/60/70/110 microseconds per kernel execution). I.e., you can see that for 4 drafts the fp16 kv cache works faster than the fp8 kv cache (and they are comparable for 1 & 2 drafts).
Thank you.
Who can help?
No response
Information
[ ] The official example scripts
[ ] My own modified scripts
Tasks
[ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
[ ] My own task or dataset (give details below)
Reproduction
Build Llama 3.1 8B model with speculative decoding and different quantizations
Expected behavior
FP8 kv cache is faster than FP16 kv cache
Adding drafts from 3 to 4 doesn't increase attention runtime significantly
actual behavior
FP8 kv cache is slower than FP16
Adding drafts from 3 to 4 increases attention runtime significantly
additional notes
I've tested it with other batch sizes (16 & 32) and it shows similar behavior
Thanks for reporting the issue.
Just to confirm, does the number of drafts (1/3/4) correspond to the speculation length in speculative decoding?
It is expected to see large performance gap between draft 0/1 for fp8 because different CUDA kernels are selected between these two cases in TensorRT-LLM backend. For draft 0, we have a more optimized kernel by utilizing Hopper-specific QGMMA instructions. For draft 1 we don't have such optimizations for now.
Also it is expected to observe higher performance on fp16, because it involves fp8-fp16 conversion in fp8 kernel, while in fp16 kernel there is no such overhead.
@ming-wei thank for the reply!
Just to confirm, does the number of drafts (1/3/4) correspond to the speculation length in speculative decoding?
Yes
It is expected to see large performance gap between draft 0/1 for fp8 because different CUDA kernels are selected between these two cases in TensorRT-LLM backend. For draft 0, we have a more optimized kernel by utilizing Hopper-specific QGMMA instructions. For draft 1 we don't have such optimizations for now.
Got it! I wouldn't expect a big drop between drafts 3 and 4, however.
Also it is expected to observe higher performance on fp16, because it involves fp8-fp16 conversion in fp8 kernel, while in fp16 kernel there is no such overhead.
As far as I understand, attention (especially attention during decode) is memory-bandwidth bound rather than compute bound. It sounds like fp8 attention should be faster than fp16, even after accounting for the conversion. For the non-speculative decoding kernel this is in fact true (the fp8 kernel is ~2x faster than fp16 in our tests).
Thanks again
@ming-wei any updates on the issue? thanks!
|
GITHUB_ARCHIVE
|
In 2006, Tim Berners-Lee came up with the idea of a huge database called "linked data." This idea was the foundation for graph storage, which could show how organisations, people, and objects are linked together, or "interconnected," and how the relationships work. Graph databases store that data and its connections, making it easy to turn network data into actionable information. Graph databases are usually based on NoSQL and can scale very quickly. Because of how they're made, graph databases are good at looking at how things connect. This is why they're becoming more popular for data mining.
Relational (or SQL) databases present several rectangular grids of information and often look a lot like spreadsheets. Each grid has a number of rows and columns that hold different kinds of information. (Relational databases can model relationships between grids, but this gets very complicated and hard to understand very quickly.) Non-relational graph databases, on the other hand, usually show named bubbles (like an organisation, person, or object) with simple arrows that show how they're connected (in many cases with a word above the arrow describing the relationship). Relational databases have been popular for a long time because they are cheap, accurate, and consistent. However, the process of setting up relationships (or joins) in a relational database can take a long time and cost a lot of money.
Here, you can read about graph databases in general. Before 2014, graph databases were thought to be slower, more difficult to work with, and more limited than relational databases; this changed in 2014. They were also thought of as "academic" databases used to build logical analysis systems, not for business. Though graph databases could be useful, in general they were difficult, time-consuming, and not very user-friendly.
In 2014, a lot of new technology helped the development of graph databases. Early open-source graph database Neo4j started getting a lot of attention for certain types of math that used graphs. Many of the problems with performance were solved at the same time that hardware (through cloud computing) was getting faster. Many problems with graph databases were solved in 2013 when a graph query language (called SPARQL) came out with a new version that fixed many of the problems they had before. JSON data stores like CouchDB and MongoDB also led to a big improvement in how joins worked (a core requirement for databases, but an especially important one for graph databases).
Also around 2014, a lot of businesses started playing around with graph databases as a way to solve problems that were becoming a bother at the corporate level (Metadata Management, Master Data Management, knowledge navigation, etc.). It has been more recently that machine learning algorithms have been used to build graph databases.
Although graph databases may look unusual, they're more flexible than traditional relational databases, because simple arrows, or "edges," show how items are linked together. An arrow can represent a friendship, a business relationship, something a person likes, or the direction a business wants to take.
With a relational model, it would take a lot of time and money to show the same thing. Also, the structure of a SQL database would have to be changed to add the new fields. Many graph databases can do this easily; SQL formats don't offer the same flexibility or scalability.
There are several algorithms that can be used with graph databases.
A graph database uses algorithms to make it easier to look through all of the data it holds. An example of this can be found in the Panama Papers scandal, which led to the discovery of thousands of shell companies. These "shells" let movie stars, criminals, and even the former prime minister of Iceland, Sigmundur David Gunnlaugsson, hide money in bank accounts in countries like the United Kingdom and the United States. Research into these "shell companies" was possible because of graph databases and the algorithms they use.
The depth-first search (DFS) and the breadth-first search (BFS) are two of the most common ways to traverse a graph. In the depth-first algorithm, you start at the top of the tree and work your way down to the bottom of a branch, backtracking and trying other branches until you find what you're looking for. Breadth-first search algorithms look at graphs one layer at a time: they start by looking at the nodes one level down from the start node, then move on to the nodes in the second layer, and so on, until the whole graph has been examined. DFS goes to the bottom of a subtree and then backtracks; BFS will find the shortest route.
A good rule of thumb is to use depth-first search if you want to find only one thing, or when you don't know exactly what you're looking for. An uninformed search of this kind will follow a path to its end, then go back to the start node and try another path. Informed searches, on the other hand, try to cut down on how long the search takes by not backtracking, or by using a screening process to choose the paths and nodes for the search. So an informed search will go faster than one that is not informed. (Graph traversals usually do informed searches.) A minimal sketch of both uninformed traversals follows below.
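Both traversals can be written in a few lines of plain Python on an adjacency-list graph (no graph database required; the example data is made up):
from collections import deque

def bfs(graph, start, goal):
    # Breadth-first: explore one layer at a time, so the first path that
    # reaches the goal is a shortest path (by number of edges).
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

def dfs(graph, start, goal, visited=None):
    # Depth-first: follow one branch to the bottom, then backtrack.
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for neighbour in graph.get(start, []):
        if neighbour not in visited:
            rest = dfs(graph, neighbour, goal, visited)
            if rest is not None:
                return [start] + rest
    return None

shells = {"Officer": ["Shell 1"], "Shell 1": ["Shell 2", "Bank"], "Shell 2": ["Bank"]}
print(bfs(shells, "Officer", "Bank"))  # ['Officer', 'Shell 1', 'Bank']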
AI, machine learning, and graph databases intersect in several ways. Graphs can help explain machine learning (ML) and artificial intelligence (AI) models. Graph technology can connect data and show how things are related, and using graph technology to improve AI is a good way to train complex AI and ML applications.
As a bonus, graphs help make AI decisions more transparent. This is called “AI explainability,” and it helps people understand how AI works. These benefits have led to a rise in the use of graph databases for training AI and ML applications.
Data scientists can use machine learning algorithms to find meaning in large amounts of data, and these findings can be shown as relationships between nodes in a graph. This is what Jim Webber, the chief scientist for Neo4j, said. Graph databases make it easier to store and search for information about relationships. In this way, graph data can be both the input and the output of machine learning.
|
OPCFW_CODE
|
This thread concerns autotools builds failing around PKG_CHECK_MODULES - typically ./configure stops with a "syntax error near unexpected token" at the PKG_CHECK_MODULES line. The root cause is that the PKG_CHECK_MODULES macro, defined in pkg.m4 and shipped with pkg-config, is not found when aclocal runs: either pkg-config is not installed, or it is installed somewhere aclocal does not look (common with MacPorts or Homebrew on OS X). Since autoconf cannot expand the macro, the macro name is copied verbatim into the configure script, where the shell then chokes on it.
Fixes reported by the various commenters:
- Install pkg-config (e.g. sudo port install pkgconfig, or your system's pkg-config/pkgconfig package). "pkg-config wasn't installed on my system at all. After installing it configure runs without any errors."
- Add ACLOCAL_AMFLAGS = -I /opt/local/share/aclocal to Makefile.am (the MacPorts macro directory), then re-run autogen.sh. Several users confirmed this resolved the issue; a few reported no change.
- "Executing aclocal --force -I /opt/local/share/aclocal once was enough for me."
- Add the line /usr/local/share/aclocal to the /usr/share/aclocal/dirlist file.
One commenter asked whether PKG_CHECK_MODULES can simply be replaced with AC_CHECK_HEADER and AC_CHECK_LIB; that avoids the pkg.m4 dependency, at the cost of pkg-config's automatic flag discovery. For the Mesos variant of this problem, two outstanding patches from Jie Yu (https://reviews.apache.org/r/25030 and https://reviews.apache.org/r/25031) were reported to fix the issue.
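A typical remediation sequence on OS X with MacPorts, combining the reports above (the aclocal path assumes MacPorts; adjust for Homebrew or Linux):
sudo port install pkgconfig
# ACLOCAL_AMFLAGS can go anywhere in Makefile.am; appending is fine
echo 'ACLOCAL_AMFLAGS = -I /opt/local/share/aclocal' >> Makefile.am
aclocal --force -I /opt/local/share/aclocal
./autogen.sh
./configure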
|
OPCFW_CODE
|
In large organisations https://en.wikipedia.org/wiki/Separation_of_duties kills productivity.
The dogma is:
No single team should have end to end access from code to production.
Dev should not deploy code to production.
Ops should not write code.
Separation of duties (SoD) is a key concept of internal controls, controls performed manually by different teams, e.g. “Dev” & “Ops”. Today, we have tools that can automate these controls and provide protections from fraud and errors.
Rise of DevOps
DevOps is a cultural movement that aims to bridge the gap between development and operations, to increase productivity. Combining software development (Dev) and IT operations (Ops) aims to shorten the systems development life cycle and provide continuous delivery with high software quality.
Though how are the controls enforced?
How do we prevent a rogue DevOps team member from deploying risky code or practices to production?
Automated checks & controls
Changes are made via a Merge Request (MR), which is automatically tested by a CI/CD pipeline. The pipeline is configured to run tests, security scans, and other checks. If the pipeline fails, the MR is not merged. These checks prevent risky code from being accepted.
- Another team member approves the MR, complete with automated reports, else it is not merged - eliminates rogue actor
- Ability to roll back (instant with Serverless) - derisk deployments
- Logs to a central account - accountability / auditability opportunities
- Platform Guardrails, like AWS account controls such as Control Tower, SCPs, and AWS Config (see the policy sketch below)
- Costing - drive resource efficiency
Further automations/checks can be triggered from the logs, such as CloudTrail and CloudWatch.
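As an illustrative sketch of such an automation (my own example, not from any particular toolchain; the deploy-role name is hypothetical), a scheduled job could scan CloudTrail for IAM changes made outside the pipeline's role:

import boto3

cloudtrail = boto3.client("cloudtrail")

# pull recent IAM events from CloudTrail
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "iam.amazonaws.com"}
    ],
    MaxResults=50,
)

for event in events["Events"]:
    user = event.get("Username", "unknown")
    # anything not performed by the pipeline's deploy role deserves human review
    if user != "pipeline-deploy-role":  # hypothetical role name
        print("Review:", event["EventName"], "by", user)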
Avoid these SoD smells (lack of automation)
- Gatekeeping - teams must be empowered; that looks like having their own AWS account and being able to deploy their own application
- Network team needs to open a port for you - a team with its own AWS account should manage its own networking effectively
- “Throwing things over the wall” - team should be able to handle everything themselves given the tools
- Separation of concerns! - this assumes that the team is not trusted and automation is not in place
- Ticket based infrastructure - team is not empowered to do what they need to do, e.g. they lack AdministratorAccess in their AWS account
- Not being allowed to view other repositories / gate keeping - teams need to foster open collaboration
- Auto devops - this assumes the team isn’t capable of managing their build pipeline
- Audit - this assumes that the team is not trusted, checks are not in place and logging exports are not in place
- The Operations team will run it for you - for DevOps to work, the team needs to be trusted and use managed services like “Serverless” to deploy their application without fuss, “you build it, you run it”
- Not being allowed to access production data - ideally shouldn’t happen, but if it does, it’s logged.
Manual SoD, replaced by automation and working closer together
This is my AWS / 2023 interpretation of the CD-friendly SoD procedures for configuration outlined at https://www.slideshare.net/sriramnrn/segregation-of-duties-and-continuous-delivery (from slide 27), by my colleague Ram.
Best practices / tooling will evolve over time, though the idea is to remove manual gatekeeping checks via automation.
- AWS accounts provide controls out of the box, use them!
- CI/CD pipeline providers like Github/Gitlab provide a lot of checking automations to opt in to.
- Communication tools like Slack make it easy for teams to foster open collaboration so that expertise can be shared & stakeholders can be kept in the loop.
|
OPCFW_CODE
|
New issue in Pyodbc 4.0.28: hangs when many connections are done
The issue is new in this release (previously I worked with 4.0.26). Our system uses Tox to test itself. We open a connection to the MS SQL server, use it and close the connection a lot of times (we do not close the connection explicitly, we just exit its scope; we expect that this closes it).
The new version hangs after a few iterations. No error message, no exception - just a halt.
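For context, the usage pattern looks roughly like the sketch below (driver name, server and credentials are placeholders, not from the original report):

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)

for i in range(100):
    conn = pyodbc.connect(CONN_STR)  # fresh connection every iteration
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    cursor.fetchone()
    # no conn.close() here on purpose: the connection is expected to be
    # released when it goes out of scope at the end of the iteration
    print("iteration", i, "done")  # under 4.0.28 this reportedly stalls after a few iterations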
Could you provide more information about your environment, as well as an ODBC trace? A repro script to demonstrate would also be very useful.
FWIW, SQLAlchemy has a pretty intensive test suite and I've just confirmed that running it under tox has no problems with pyodbc 4.0.28:
=== 10052 passed, 1173 skipped in 1809.01s (0:30:09) ===
SQL.zip
this one is a trace of a good run with pyodbc 4.0.24
SQL1.LOG
The first one hangs on one test, the second one continues to run to the next test;
the difference is visible starting at line 32445 (bad) / 32462 (good).
From this point on, 4.0.28 starts an endless loop of SQLDescribeParam/SQLBindParameter.
Closed due to inactivity. Feel free to re-open with current information if necessary.
It was fixed in a later version.
|
GITHUB_ARCHIVE
|
Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here. What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space. After some fighting with Intel documentation I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible. We've created hardware where none existed.
The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.
If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
It's actually more complicated than that - see here for more. IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it. Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough
|
OPCFW_CODE
|
Novel–Let Me Game in Peace–Let Me Game in Peace
Chapter 1373 – Blood Bone Temples
Another Terror-grade!
The instant the blood-red snake appeared, it showed its prowess. It opened its mouth and spewed out blood-like river water that surged toward Zhou Wen and the others like a mountain flood.
Zhou Wen saw that something was amiss with the houses here, even though they had mud bricks and wooden roofs on the outside.
It got up from the ground and shook the ice shards off its body. It opened its beak-like mouth and let out a strange scream, forming an aural ring that quickly spread.
Banana Fairy was clearly a little peeved as she jumped down from the banana leaf and danced in front of Zhou Wen. She fanned the two monsters with the banana fan that had been transformed from the banana leaf with one hand.
There was also something that puzzled Zhou Wen. It was said that no one left Yang City alive. However, he had yet to encounter any danger.
The blood bone temple he had Tyrant Behemoth attack was already a very inconspicuous one. He didn't expect a Terror-grade existence to be hidden inside.
The frozen blood figure slammed into the wall of another blood bone temple, but it did not break the mud-brick walls. Instead, the blood bone temple lit up as a blood figure rushed out.
Zhou Wen was taken aback.
Zhou Wen didn't wait for it to launch another attack; he summoned Banana Fairy.
Its body slammed into the wall of the blood bone temple. This time, it did not shatter the ice around it, and it also failed to destroy the temple.
The blood figure that rushed out looked like a huge snake, but its body was as big as a dragon's. However, it didn't have claws. Instead, it had a horn on its head.
He saw the bones of various beasts in buildings of various sizes. Or rather, these buildings had been built according to the bones of the beasts.
If this Yang City was really built using mutated beast blood mixed with soil, how much blood was needed?
Thanks to Banana Fairy's fanning, the hundred-plus blood bone temples in the frozen area emitted terrifying black sanguine flames. Terrifying black and red blood shadows rushed out of the temples.
Tyrant Behemoth pounced at the small blood bone temple fiercely. The highest point of the building only reached Tyrant Behemoth's stomach. Tyrant Behemoth activated its Absolute Strength and punched down from above, striking the wooden roof of the blood bone temple.
After the blood figure killed Tyrant Behemoth, it turned its gaze to the blood-colored avatar.
If this weren't a coincidence, it would be alarming enough with so many blood bone temples in Yang City—even if only half of them were at the Terror grade.
The more Zhou Wen looked at it, the more he felt that something was amiss. This place shouldn't be called a city, but a temple complex.
However, it was unlikely that there weren't any Calamity-grade beings in such a terrifying place. If he encountered a Calamity-grade creature, Zhou Wen's means of resistance would be very limited without the help of the Perfect Robe.
Those with relatively small bones had smaller buildings. Some of the small houses were only half the height of a person, looking like models. However, the bones inside were real.
The blood figure went berserk again and swept toward Zhou Wen as a bloody ray. Banana Fairy's red lips opened slightly as she exhaled a gust of Supreme Yin Breeze.
Tyrant Behemoth's body was in midair when the blood figure jumped up as if it had vanished. When it appeared again, it had already grabbed Tyrant Behemoth's body and torn it in two.
|
OPCFW_CODE
|
I'm currently working on a program for random play of my MediaPlayList (Simpsons, Futurama, music etc.). Therefore I wrote a subroutine which finds and filters
all media files in a given folder:
This routine takes every file which isn't .jpg or .cmd at the moment. Works, everything fine. (Perhaps you have an idea of how to make this routine smaller and perhaps nonrecursive oO?)
Now I want to filter things explicitly by their file type, and I don't want to do it in the manner done here (with if(path not contains bla)) but with a method of File like f.isMediaFile().
I therefore wrote a MediaFile Class:
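A minimal sketch of what such a class could look like (a reconstruction, assuming it just extends File and adds the check used elsewhere in this thread):

import java.io.File;

public class MediaFile extends File {

    public MediaFile(String pathname) {
        super(pathname);
    }

    public MediaFile(File file) {
        super(file.getPath());
    }

    // a file counts as a media file if it is neither a picture nor a script
    public boolean isMediaFile() {
        String name = getName().toLowerCase();
        return !name.endsWith(".jpg") && !name.endsWith(".cmd");
    }
}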
and changed the type of my get the files routine to
but now I get a ClassCastException (Exception in thread "main" java.lang.ClassCastException: [Ljava.io.File; cannot be cast to [LMediaFile;) on:
direc is of type MediaFile, but listFiles() will return a File array. Where is my error in this concept, and how can I circumvent it? Everything I want is an easy check of whether a File is a media file or not.
Override listFiles (all 3 of them) to convert the File into a MediaFile:
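One of the three, as a sketch (covariant return types make the narrowed return type legal):

@Override
public MediaFile[] listFiles() {
    File[] files = super.listFiles();
    if (files == null) {
        return null; // null on I/O error or when this is not a directory
    }
    MediaFile[] mediaFiles = new MediaFile[files.length];
    for (int i = 0; i < files.length; i++) {
        mediaFiles[i] = new MediaFile(files[i].getPath());
    }
    return mediaFiles;
}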
Since the other listFiles methods will require a lot of the same code you should add a private method for converting a File into a MediaFile.
However, I don't like this idea. Your MediaFile class exists only to make it easier for you to filter out non-media files. listFiles() already has support for that, using FileFilter and FilenameFilter. For instance:
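Something along these lines (a sketch using java.io.FileFilter; the directory path is made up, and the variable name deliberately matches the one discussed below):

File direc = new File("D:/media");
File[] List = direc.listFiles(new FileFilter() {
    public boolean accept(File f) {
        String name = f.getName();
        // keep directories, and files that contain .jpg but not .cmd
        return f.isDirectory() || (name.contains(".jpg") && !name.contains(".cmd"));
    }
});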
Now List (bad name by the way, it should start with a lowercase L) will only contain File objects which are either directories or files that contain .jpg in the name but not .cmd.
A thing in my mind, and with the code posted above I hope it's reasonable to post it here, is: Is it necessary to count all the files first, then create the File array, and after that fill it with all the files? It seems to me rather complicated. Isn't there an easier way?
Thorsten Jaeger wrote: A thing in my mind, and with the code posted above I hope it's reasonable to post it here, is: Is it necessary to count all the files first, then create the File array, and after that fill it with all the files? It seems to me rather complicated. Isn't there an easier way?
The easier way would be to use a List (e.g. ArrayList) instead of an array. Then you don't need to know how big it is in advance - you can just add files to it as necessary.
|
OPCFW_CODE
|
Two ISP bgp topology?
We use a Palo Alto firewall as an internet gateway. We have 16 static IP addresses. One is used for outbound traffic (users browsing the internet). The rest are used for inbound traffic (mail server, webservers, etc.).
For redundancy purposes we subscribed to a second ISP. We bought 16 new static IP addresses from the new ISP. And here comes the hell with configuration. I've been reading for two days about BGP, PI addresses, AS numbers and other stuff, but I don't understand anything. Theory without practice and overall understanding is nothing. I called these ISPs, and both providers say they won't configure any routes and won't sell AS numbers; try to solve it by yourselves. In our small Asian country there is no LISP or any other cloud-based routing solution. I don't know what to do next. Should I request an AS number directly from APNIC? With policy-based rules I can only configure outbound traffic redundancy. Is there any reliable solution to make our small hosting redundant? Maybe it is possible to configure BGP without AS numbers and PI addresses?
Just a suggestion (maybe not practical): put your infrastructure in a data center / co-location facility. These typically have the redundancy you are looking for. Then as far as getting to those resources (which now have single IP addresses) you can use separate ISPs and internal routing protocols / VPNs to achieve site-to-dc redundancy. ...just a thought.
Even if you could still get PI IPv4 addresses in Asia: if your ISPs don't want to route your IP addresses then there is nothing you can do. Tunnels and LISP could solve some of your problems (I use LISP here), but you already stated that this is not available in your region.
BGP is the protocol that is used to route your IP addresses from an AS. You need both to run BGP. Blocks of 16 addresses are too small to be routed with BGP anyway. Technically you could, but nobody will accept your routes.
If you want to have your own IP addresses and route them etc. you'll have to make some investments. Because APNIC ran out of IPv4 addresses for normal distribution you'll have to comply with some very strict rules. If I recall correctly the current rules are that you have to be multihomed already, must be able to justify 25% of the addresses (which would be 25% of 256 = 64) immediately and 50% (=128) within a year. Based on your current numbers that seems unlikely. If you could then you'd need to get an AS number from APNIC and you'll have to find ISPs that want to set up BGP sessions with you. This will probably be more expensive than your current contracts. And on top of that you'd have to study a lot to learn how internet routing and BGP works or you'll have to hire someone else to manage it for you. In addition to buying the equipment needed to do all of this.
In short: it's probably not worth it for your case.
1. Besides BGP, there is no redundant solution for hosting a few servers for external use, am I right?
2. What if we use two external IP addresses for one internal server and just point a public DNS record at those two IP addresses?
3. What if we get the AS number and the providers' routes: would it be enough to use just the Palo Alto firewall, or should we buy routers to use as gateways, one per ISP?
1: BGP is the protocol to route addresses on the internet. 2: putting multiple addresses in DNS will make you dependent on all of them, reducing reliability. 3: using a single firewall would make that device a SPOF. A single ISP is probably more redundant than that, so you'll only make it worse...
You can see how I handled some level of redundancy with a single ISP over diverse circuits at http://networkengineering.stackexchange.com/questions/1745/inbound-bgp-load-balancing-from-same-isp-router.
"2: putting multiple addresses in DNS will make you dependent on all of them, reducing reliability." Are you sure about this? Most applications try the first entry in the provided list then after a timeout period with no response move to the second entry.
That would be nice but those timeouts are long
What are timeouts for browsers? Firefox, IE?
It depends on the OS and the browser and the type of timeout (connecting vs re-establishing a broken connection) and whether ICMP errors are properly propagated and handled. So it's difficult to give a simple answer. 300 seconds is normal. And this applies to every single connection, and loading a single web page usually uses multiple connections...
You can configure a Palo Alto Networks firewall to fail over to the other ISP. You need to set up two sets of NATs -- one for one ISP and one for the other -- or set two DMZs, one for one ISP and one for the other (or overlay two subnets on one interface). It will use both for inbound and will fail over to the second for outbound when one fails.
You can start reading here.
This won't work for inbound traffic for the servers. If one link goes down, how will external clients know how to reach the servers when their external IP address(es) change? DNS records with a low TTL may help for longish outages, but that is hugely unreliable and far from efficient.
It absolutely DOES work. What you do is put an address from both ISPs on each service. You have two dns servers, dns01, dns02. Each has service IPs for one ISP's address space for services. Both are listed as dns servers for your domain. When everything is up, dns queries come to both and both addresses can be used. When one breaks the working one is used.
Example: DNS01.foo.com has a zone file that lists ISP1 ip addresses for services. DNS02.foo.com has a zone file that lists ISP2 ip addresses for services. Both DNS servers are listed for the domain. When ISP1 goes down, DNS01 can no longer be reached. DNS02 serves ISP2 addresses to queries. Yes, it's "janky" but failover for static routed nets is janky.
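Sketched as zone data (addresses are documentation-range examples, not real ones):

; zone served by DNS01.foo.com - ISP1 address space
www.foo.com.   300 IN A 198.51.100.10
mail.foo.com.  300 IN A 198.51.100.25

; zone served by DNS02.foo.com - ISP2 address space
www.foo.com.   300 IN A 203.0.113.10
mail.foo.com.  300 IN A 203.0.113.25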
I see what you're saying. However, as the rest of the world already has cached records pointing to the old IPs, they won't look up the name again until their cache expires, at which time they will use the name of the up name server. Until then, their local pc cache contains an invalid record, as will their upstream DNS servers. You'd need to use a very low TTL, and not everyone honours those. So yeah, "janky", but I suppose it is better than nothing :)
There is another way to do it in a vendor-specific way. In the PAN firewall enable DNS proxy. In the rule set for the interface to ISP1 you have static entries for ISP1 addresses for services. In the rule set for the interface to ISP2, entries for ISP2 addresses. Clients get address according to which interface the DNS query arrived on.
Yes, that is why you put a short TTL on any records that are subject to failing over. Say 5 minutes.
Better/easier hybrid approach. DNS server carries only ISP1 addresses. PAN DNS proxy enabled on only ISP2 interface. Rule set has ISP2 static IP addresses configured. Queries arriving on ISP1 go to DNS server. Queries to ISP2 get static ISP2 proxy rule entry. Only one zone file to maintain. Again, failing over static routed nets is tricky.
I agree with you. No matter which approach is taken, there is guaranteed to be at minimum a decent portion of the userbase who will be in the dark for at least some time. So is the nature of the beast when one is not large enough to play in the BGP world
At least you can list your smtp server with separate IP addresses and matching dns entries for each ISP. Add MX records for both and you have one less service to worry about. Depending on your requirements, buying a traffic manager (intelligent load balancer) service from a nearby IaaS provider might give you some of what you need. That way you always direct clients to the fixed IP addresses of the load balancers/traffic managers, who will determine the availability of your different addresses and services, and finally act as a proxy between the client and your service.
There are some ways to do load balancing without an AS and PI addresses.
For outbound, it is achieved by policy routing.
For failover of inbound traffic, it is good to use dynamic DNS. When the primary ISP changes, the site's DNS name (with a short enough TTL) is changed to the new IP and clients keep access to the site.
Setting DNS to two IPs simultaneously causes round-robin IP selection on clients.
Periodically switching (with a period near the DNS record TTL) between the two IPs can also provide balancing. The same effect comes from using a DNS server that supports giving different IPs to different clients.
|
STACK_EXCHANGE
|
As you may have noticed, a lot of our recent updates and new features with ThreeFold Grid 3.0, like the TF Chain Portal and the new Explorer UI, are about improving the user experience for our community. The farming calculator is no different.
What is the Farming Calculator and what can it be used for?
We created the farming calculator to enable you to simulate potential farming rewards that can be earned for contributing capacity to the ThreeFold Grid (3.x) – our open-source peer-to-peer Internet infrastructure.
By choosing different configurations and parameters, you can easily explore the profitability of becoming a ThreeFold Farmer. Try the farming calculator here and find examples for calculations in our wiki.
Please keep in mind that this new calculator is set for farming rewards 3.0. Therefore, simulations only apply to 3Nodes registered on TF Chain for ThreeFold Grid 3.x. Take a look at Scott’s post on Grid 3 migration news.
How is this calculator an improvement over what we had before?
Previously, we used a spreadsheet for farming rewards calculations. While the spreadsheet did provide proper calculations, it wasn’t very user-friendly. With ThreeFold Grid 3.0, we decided to create an actual tool.
The new calculator is a weblet, basically a web-based widget deployed on top of the ThreeFold Grid, with 100% decentralization and a much nicer user experience. This web-based farming calculator simplifies the process of calculating potential farming rewards.
How to use the Farming Calculator
The new farming calculator is a great starting point for current and future farmers. It enables you to look into how certain parameters and specifications could impact farming rewards in a much simpler and more accessible way.
Start by choosing the configuration of your node. Currently, the options are either a DIY or Titan v2.1 node. After that, you can try different options for hardware parameters like memory (GB), CPU (Cores), SSD (GB) as well as other parameters such as power cost, price of TFT at the point of registration on our blockchain and more, as you can see in the image below.
Based on these specifications, the farming calculator predicts your potential rewards and displays them in the form of graphics and diagrams for a clear overview of your earnings potential. So, the new calculator allows you to see the differences between certain specifications and how changes might affect your rewards at one glance.
Just play around to find out how certain parameters would impact your farming rewards and which specifications would be most beneficial for you – it’s pretty straightforward! If you need a jumping-off point, check out our library for hardware options and recommended setups.
There’s also a switch allowing you to choose between simulating your potential net profit or return on investment (ROI), as you can see in the image above. If you switch from basic to advanced view, you’ll see both the net profit as well as the ROI, as shown in the image below.
If you’re working with certified hardware, don’t forget to tick the “certified” box in the bottom-left corner, so the calculator recognizes your additional 25% rewards.
Become a ThreeFold Farmer
Be a part of the People’s Internet! Connect compute, storage or network capacity to our peer-to-peer Internet infrastructure and earn income in the form of TFT for it. Currently, we’re already available in 58 countries with more than 1,420 3Nodes and counting.
Not sure if this is really for you because you’re lacking technical skills? No worries, you don’t have to do it yourself! I’m not exactly a tech genius myself, so I got a 3Node from one of our certified hardware partners – and it’s been living quietly in my living room ever since. They have easy plug-and-play functionalities, so you don’t need any technical knowledge to join the People’s Internet. Just get one of these plug-and-earn 3Nodes, connect it to network bandwidth and electricity, and you’re good to start farming.
Ready to calculate your rewards?! Share your experience with the farming calculator in the comments below.
Have questions on farming or want to share your experiences as a ThreeFold farmer? Join our official farming chat or visit the farming section on our forum to find more information, ask for help, and connect with other ThreeFold farmers.
- The Farming Calculator
- Farming Rewards 3.0
- How You Can Join the People’s Internet
- Be the Internet
- Find out more about DIY nodes on the forum or wiki.
Please keep in mind that these simulations are not investment advice nor should they be looked at in this way. The scenarios shown are by no means a guarantee and no one can predict the future of yields exactly as they are heavily dependent on factors beyond anyone’s control. The DAO could also decide to change parameters or farming, which could have a different result.
|
OPCFW_CODE
|
Last time I wrote about the “Black Tower,” I had just installed Vista and Kubuntu 7.10 in a dual-boot setup. When version 8.04 of Kubuntu (“Hardy Heron”) hit the Web last week, I wasted no time upgrading to it.
Having been burned numerous times by premature upgrades to half-baked Linux updates, I downloaded the Kubuntu 8.04 Live CD iso and tested it on the Black Tower before launching into an install. All key functions — video, audio, disk access, Internet access, etc — seemed to work as desired, so I proceeded with preparations to build a nest for the Hardy Heron on the Black Tower.
I’ve performed a great many upgrades and switches between Linux distros over the years. To ward off disasters from intentional changes to my OS — as well as from dumb mistakes or system failures during normal operation — I’ve developed a few habits that have often saved me from lost-data-disaster.
I always create separate root (/) and home (/home) partitions. For one thing, that makes it easy to frequently back up the home partition, in order to protect personal settings and data. It also makes it easy to perform periodic “fresh installs” when major OS updates show up (such as Hardy Heron) or on occasions when I’m moved to switch to another distribution.
In this case my plan was to perform a “fresh install” of Kubuntu 8.04, wiping away the entire previous OS (Kubuntu 7.10) while preserving my personal data and preference settings. To prepare for this, I began by performing a full system backup, which backs up my /home partition to the fileserver on my home LAN.
Next, I logged out of KDE (Menu > Log Out) and at KDM’s login prompt used Ctrl-Alt-F2 to open up a console shell. Once there, I logged in as root (most Kubuntu users would do this by typing “sudo bash” to get a root shell, or by prepending “sudo” to subsequent console commands).
Then, I went into /home and renamed my personal home directory from “rick” to “rick-old.” The purpose of this was to keep all my personal settings and data (which were previously located in /home/rick/) in a separate directory so that Kubuntu 8.04 could create a fresh “rick” directory with the new OS version’s default settings, unencumbered by any of my own customizations and without trouncing on any of my precious data — always a good idea with a major new OS release.
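In shell terms, that preparation amounted to something like this (the username is mine; yours will differ):

sudo bash            # get a root shell (or prefix each command with sudo)
cd /home
mv rick rick-old     # keep old settings and data out of the installer's way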
One last — and very important — step of preparation was to carefully record the system’s current hard drive setup (see table below). To gather data, I used both “df” and “fdisk.” In fdisk, I used its menu’s “p” function to display the primary hard drive’s current partitioning information. The drive was partitioned as follows:
Device      Size      Type             Mount point
/dev/sda1   369 GB    NTFS (type 7)    unassigned
/dev/sda2   2 GB      swap (type 82)   swap
/dev/sda3   9.2 GB    ext3 (type 83)   /
/dev/sda4   86 GB     ext3 (type 83)   /home
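For reference, the commands to gather this information were along these lines:

df -h                # show mounted filesystems, sizes and mount points
sudo fdisk /dev/sda  # then type 'p' at the prompt to print the partition table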
Now it was time to proceed with the installation process. While still in the root shell, I inserted the Kubuntu 8.04 Live CD into the Black Tower’s optical drive and rebooted by typing “reboot” on the command line.
Installing Kubuntu 8.04
After a couple of minutes the Live CD finished loading and the system booted up into a nearly empty KDE desktop, as pictured below.
Kubuntu 8.04 Live CD desktop with “install” icon
(Click image to enlarge)
An icon on the desktop invites users to install the OS permanently on their system’s hard drive. I clicked it to begin the process. After responding to a few simple prompts (timezone, language, keyboard), I came to the most critical step: partitioning the system’s primary hard drive.
Here, I selected the “manual” option rather than the “guided” alternative, since my plan was to do a fresh install but preserve my /home partition. Accordingly, I used the “edit” options provided in the subsequent prompts and screens to designate…
- /dev/sda1 — no changes: unallocated; no, don’t format it!
- /dev/sda2 — no changes: swap; mounted at swap; yes, please format it
- /dev/sda3 — edited to: type ext3; mounted at /; yes, please format it
- /dev/sda4 — edited to: type ext3; mounted at /home; no, don’t format it!
Once the above were set, I clicked the button to continue with the installation and, following a request for my user name, login password, and a name for the computer, the system began the disk partitioning and software installation process. This took about 20-30 minutes. When the software installation process was complete, the system prompted me to remove the CD and hit the Enter key to reboot.
On reboot, the Black Tower booted up its fresh, new, Hardy Heron OS without any hitches. Its KDE desktop looked like this — before (upper screenshot), and after (lower screenshot) my post-installation makeover:
The Black Tower’s desktop following the install (upper image), and following my full configuration (lower image)
(Click each image to enlarge)
The lower image above shows the Black Tower’s desktop following the couple of hours I spent setting KDE the way I like it and adding all my favorite software. Kubuntu’s Adept software installation tool (Menu > Add/Remove Programs) provides the easiest way to get most of the software needed.
Here are a few points of interest:
- My favorite apps that aren’t installed by default in Kubuntu 8.04 include: firefox, thunderbird, gaim (pidgin), gimp, kscd, bluefish, dillo, kaudiocreator, xine-ui, msttcorefonts, adobereader-enu, and flashplugin-nonfree.
- Sadly, Automatix2, which I’ve raved about in the past, is no longer available. This didn’t cause much of a problem, as most of my favorite applications are now available from the Ubuntu Hardy repositories. Programs that I had to (or chose to) download and install “manually” included: Skype’s VoIP/IM tool; Sun’s Java plugin; Adobe’s flash browser plugin; Opera’s version 9.50-beta browser (see comment below), and Real’s RealPlayer.
- The latest version of Firefox currently available from the Kubuntu 8.04 repositories is beta version 3.0b5. It has a number of nice enhancements that can be noticed in its preferences screens and dialogs, as well as a few improvements to its overall UI (user interface). One problem I noticed is that when multiple tabs are in use, the currently selected tab’s “x” (used for closing a tab) does not show up in red as it does in Firefox 2. One particularly welcome enhancement is that the first time you use File > Send Link to email a Web page’s URL, Firefox prompts you to tell it what email program you want it to use to send the link; in my case, that’s /usr/bin/thunderbird.
- The latest released version of Opera as of this writing (version 9.27) seems incapable of playing YouTube videos. To remedy this, I instead downloaded and installed a preview of version 9.50 (9.50 Beta 2) from Opera’s download area. That one works great!
Overall, I’ve found Kubuntu 8.04 to be a solid, lean Linux desktop with a polished look and feel, backed by well-stocked, reliable software repositories (around 25,000 packages). All the basics are present, including a KDE 3.5.9 desktop, graphics and multimedia utilities and support, a browser, email and messaging clients, games, and the incomparable OpenOffice office software suite. Additionally, the OS features easy software installation and updates thanks to its Debian apt-get package architecture and graphical Adept package-management front end.
On the other hand, I’m sad to witness the disappearance of Automatix. This free, user-friendly download service provided an optional software installation tool that I think the Ubuntu community — and Linux newcomers in general — could really benefit from. Automatix, as its name implies, automated the downloading and installation of a cleverly-selected assortment of highly useful software packages (multimedia codecs, browser plugins, VoIP messaging, etc.), taking care of various illusive and often frustrating post-download system configuration steps.
Throughout my eight years of experience with Linux, I’ve consistently found the final 10 percent of the installation process — bits like getting fonts, browser plugins, and multimedia applications installed and configured as desired — to account for 90 percent of the headaches. What I like most about Ubuntu is that it has eliminated most of those post-install headaches. Prior to its demise, Automatix helped fill in the gaps naturally left by Ubuntu; hopefully something similar will come along to pick up the pieces from Automatix, and perhaps take the process even further.
In summary, the Ubuntu-family distributions — including my favorite, Kubuntu — have already become the most popular Linuxes among desktop PC users who are inclined to run Linux. With Hardy Heron’s evolutionary enhancements and polish, the continuing march of the Penguins onto the world’s desktops will be unstoppable!
[Note: Some of the screenshots above are courtesy of thecodingstudio.com. A full Kubuntu 8.04 (Hardy Heron) Live CD screenshot tour is available here.]
|
OPCFW_CODE
|
How to allow Ad hoc updates in SQL Server system catalogs
We were tasked with enabling SQL Server Database Mail on a few SQL Server instances. When we ran RECONFIGURE along with the "Database Mail XPs" option through sp_configure, the RECONFIGURE step failed to apply the change. We were getting this error message: "Msg 5808, Level 16, State 1, Line 1 Ad hoc update to system catalogs is not supported." This tip is intended to shed some light on how to avoid and rectify this issue.
Option 1 - sp_configure with the reconfigure with override option
We were running the below command to enable the "Database Mail XPs" feature:

sp_configure 'Database Mail XPs', 1
GO
RECONFIGURE
GO

Unfortunately the "reconfigure" command was not running successfully, throwing the "Msg 5808, Level 16, State 1, Line 1 Ad hoc update to system catalogs is not supported." error. See the screen shot below as a point of reference.
Before moving on, let's discuss what RECONFIGURE is and why we use it after updating/configuring any value in sp_configure. As per Books Online, "RECONFIGURE updates the currently configured value of a configuration option changed with the sp_configure system stored procedure. Because some configuration options require a server stop and restart to update the currently running value."
To resolve this issue we used the "with override" option of the RECONFIGURE command and this time it was successful.
EXEC sp_configure 'database mail XPs', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
Option 2 - sp_configure 'allow updates', 0
There is one more option to fix this issue ("Msg 5808, Level 16, State 1, Line 1 Ad hoc update to system catalogs is not supported.") and that is by changing the config_value of the "allow updates" configuration option to 0 in sp_configure. "Allow updates" was used in SQL Server 2000 to allow direct ad-hoc updates to system catalogs and tables. There is no use for this setting in SQL Server 2005 and beyond, because direct updates to the system tables are no longer supported. The default value of this setting is 0 and should not be changed; otherwise the allow updates option will cause the RECONFIGURE statement to fail.
Note: Per Books Online "This feature will be removed in a future version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible." so the best option is to use solution number 1 to rectify this issue.
Steps to rectify the RECONFIGURE statement issue
Step 1: First check the config_value of the "allow updates" configuration option. If it's set to 1, change it to 0, which is the default value.
EXEC sp_configure 'allow updates'
GO
Step 2: As you can see, SQL Server is not running with the default value. Now you can run the RECONFIGURE command to check the value and reproduce the error ("Msg 5808, Level 16, State 1, Line 1 Ad hoc update to system catalogs is not supported.") caused by someone having changed the value of allow updates.
Step 3: Now you can see the above error and it's because of not setting the default value. Now go ahead and change it to the default i.e. 0. You can see in the below screenshot that it executed successfully with the reconfigure command.
sp_configure 'allow updates', 0
GO
RECONFIGURE
GO
- Be careful when changing any value with sp_configure. A wrong change may impact your SQL Server instance unexpectedly.
- Read more tips on SQL Server.
About the author
View all my tips
|
OPCFW_CODE
|
Azure Cost Management – Reserved Instance(RI)
The Azure cloud provides a lot of features and advantages for consuming cloud resources compared with other cloud providers. Important features of Azure are the Azure Hybrid Use Benefit (AHUB) and Reserved Instances (RI); consumed in combination on Azure VMs, they bring large cost savings (82% discount with AHUB + RI, 72% discount with RI alone).
Azure has made a GA announcement for the RI feature, which brings a lot of benefits for investment and proper cost planning.
An Azure Reserved Instance provides major cost savings on Azure VMs. An RI purchase moves you from standard rates to discounted rates with 1-year and 3-year term benefits, and the rates are applied to VM compute, NOT at the OS level. This helps with upfront investment and consumption planning, and makes it easy to measure expenses on Azure resources.
Below, the Reserved Instance (RI) feature's enablement and consumption is explained in detail at the subscription-level and account-level scope.
Azure Reserved Instance Benefits
- Reserved instance benefits have two primary purchase terms – 1 year and 3 years.
- A Reserved Instance is not associated with either Windows or Linux; the discount applies at the compute-consumption level.
- Reserved Instances can be combined with AHUB, which gives major benefits when running Windows VMs with a 1-year or 3-year term.
- Reserved Instances are applicable to D-series and above instance sizes, except A- and G-series instances.
- Reserved Instances have two purchase flavors for subscriptions: 1. shared across all subscriptions associated with an account, 2. single-subscription purchase.
- Reserved Instances have limitations on quota and on the availability of different instance series per region, similar to the quota availability of standard instances.
- Reserved Instance cancellation charges of 12% apply to the VM purchase payments made.
- Reserved Instances support two enrollment modes: 1. Enterprise Agreement, 2. Pay-as-you-go.
- Reserved Instances support moving from a single-subscription to a shared-subscription purchase scope and vice versa.
- A Reserved Instance is not allocated to particular VMs; it works on the number of instances of a series purchased at the subscription level.
How to enable the Azure Reserved Instance on subscriptions.
- Your subscription must be registered with the Microsoft.Capacity resource provider so that you can purchase RIs (for reserving the capacity of the VMs requested on your subscriptions).
(Azure Portal->Subscriptions->Resource Provider-> Microsoft.Capacity->Register.)
- Your subscription should be prefunded for the required Reserved Instance purchase cost. This applies to both EA and Pay-as-you-go.
- Whoever purchases the Reserved Instance needs owner privileges (Pay-as-you-go) or EA admin rights (Enterprise Agreement) on the associated subscriptions.
As mentioned, at the subscription level Reserved Instance enablement has two scope options.
- Shared – apply the Reserved Instance to any subscription within the same billing account as this purchase (EA or Pay-as-you-go purchased subscriptions).
- Subscription level – restrict the Reserved Instance benefit to only the purchasing subscription.
Process to purchase the Azure Reserved Instance
- Link to purchase the Reserved instance for EA and Pay as you go Subscriptions:
Subscription: Subscription Name (Either subscription from EA/Pay-as-you-go payment account)
Scope: Shared or Single Subscription (Based on your server’s placement either in single or Multiple subscriptions)
Location: East Asia or Other regions
VM Size: Supported VM size for Reserved Instances (D, DS, DS_v2, DS_v3, E, F, H series, etc.)
Term: 1 or 3 year
Quantity: Depend on requirement Ex: 10.
- Selecting "Calculate cost" will enable the purchase option and show the savings details for the instance, so you can commit the purchase.
- We can check the purchased Reserved Instance from the Azure portal.
Azure Reserved Instance pricing details can vary based on the enrollment plan.
Ex: 1. Pay-as-you-go. 2. Enterprise Account
- Pay-as-you-go pricing calculator:
The pricing calculator for Azure Pay-as-you-go has common pricing for all instances, and the following purchase combinations provide different discount savings:
- VM Cost for windows or Linux
- VM Cost with AHUB licensing for windows
- VM Cost with 1year and 3year reserved instance purchase with AHUB Licensing for windows
- VM Cost with 1year and 3year reserved instance purchase for linux
Note: Combining AHUB licensing for Windows instances with a Reserved Instance provides the maximum benefit when running on Azure.
How to enable and download the Azure Reserved Instance on EA Level.
- Log in to the Azure EA portal, go to Manage > Settings, and set "Add Reserved Instances" to "Enabled" from the dashboard.
- Click Notification on the left side of the EA dashboard and download the Azure Reserved Instance pricing list.
Reserved Instance FAQ: https://azure.microsoft.com/en-us/pricing/reserved-vm-instances/
|
OPCFW_CODE
|
From an Software Engineer to a Technical Product Manager
Below is my story of such a move, role comparison and reflection on what went well and what didn’t.
The most popular way of becoming a Technical Product Manager is to upgrade from a Software Engineer role and naturally utilise a strong technical background, and this is exactly what I did 5 years ago. I wish it had been perfect from day one, but instead it was quite a bumpy road — I had been learning a completely new craft, making mistakes and slowly growing my understanding of the role.
And since one can only understand challenges with the right context, I explain the differences (and commonalities) of the two roles as they appeared during my career path. If you are interested only in a bare comparison, feel free to just skim through the highlighted sections. Enjoy!
Very quickly after finishing my PhD in Computer Science (it was about ocean-atmosphere modelling on supercomputers) I realised I am not a scientist at heart. Science work requires one to go immensely deep into a single topic and, after a few years, come up with a micro improvement within a very specific field. No doubt, exactly these micro steps drive human evolution, but I always felt better developing multiple projects at once, connecting them into areas, explaining them to colleagues in simple language, in other words — with breadth, not depth.
Funny enough, that period taught me that one doesn't have to love a job to do it properly (which is a bit scary, to be honest), but at some point I decided to at least change fields and become a developer at an IT company whose product I liked or would like to contribute to. So I sent a CV to Google, Booking.com, SpaceX, Uber and Yandex, and long story short: Google turned me down, SpaceX doesn't accept non-US citizens, Uber was hiring Seniors only, so I ended up choosing between the biggest Russian IT giant Yandex and Booking.com, and the latter won.
Looking back at that period, I realise my role was not canonical scientific development, as I had actually been creating my own agenda (what part of the system to improve, what functionality to build for which client) and heavily invested in the visualisation and documentation aspects of my "product". The desire not only to build what my scientific director advised, but to actually come up with new "features" and go the extra mile presenting and ensuring various parties understood what I was doing, were the first signs of PM-ing.
When a developer tries to be a PM
So I started as a Software Engineer at the big IT company with zero industry experience. After the onboarding period and a year of some growth in the role, I realised that even though I really like the company product (travel), I still have little opportunity to influence it.
As a developer I was naturally focusing on the in-depth details of a few very concrete backend features, without much possibility to change things on the wider scene — just because I had neither time (I was coding) nor the overall picture (it was a PM's job). Surely the next engineering career levels (senior and principal) had much more breadth in their work, but I wanted not only to solve business problems — I also wanted to define them.
So I started to seek opportunities to step into PM shoes somewhere and see if it fit me. Thanks to fortune, a few months later I was assigned to a cross-department Machine Learning project with multiple parties involved: business teams were providing feature collection logic, infra folks were building a learn-predict backend, and my contribution was to connect them with a real-time prediction adapter.
We were 4 developers in the room when I realised that each one of us had his own disjoint scope and no one oversaw the full picture and the steps to take towards the launch. So I felt someone had to try, started drawing high-level steps on a whiteboard and aligning who does what, and after a few meetings became a developer with a bit of (clumsy) product addition. My official PM was happy, because he was not technical and all of this system connection didn't sound interesting to him anyway.
This small project left me confused: from one perspective I was enjoying driving it, from another — not to the extent of giving up all my tech knowledge and becoming a classical pure-business PM. This is how I started to think about a mixed role which didn't exist in the company back then — the role of a Technical Product Manager.
Without waiting for new gifts of fortune, I started to look around, and after around 20 coffee chats with various product leaders and an interview I was invited to an experimental position of Infrastructure PM.
PM is expected to lead
I still remember that during my first day in the role one of the developers in my team said: "So Vladimir, you probably know what we should do next?". It was just teasing, but I suddenly realised that, jokes aside, now I was expected to know where the team should aim. I actually had no idea, as so far I had always expected someone (usually the PM) to have a plan.
Up until then there had always been a long list of stories in the backlog, and it looked as if our PM had built a very clear highway into a bright future of our product, and all we needed as a tech team was just to work hard to progress along this path. My first days of being a PM made me understand that now I was expected to build such a "road" and deal with uncertainty and a real possibility of making a wrong turn.
So I was confused again: how would I tell my team where we were going? Luckily I had a great manager. To start with, he recommended finding the clients of my product and understanding what they need from us (to this day it is still one of the simplest and most powerful pieces of advice I have received during my career). With time I built a solid understanding of what my clients need, which, after some analysis and crunching, forms all the horizons of the PM work: from sprint plannings to the long-term vision.
The Tech PM role is all about breadth: who are your 20–30 clients, what do they already want from you or what are their plans so you can "advise" what they "should" want, who are your dependencies, what are the ideas of your own team and of your leadership team, what are the company objectives, and then also the "Tech PM" addition: how is your product's tech health doing, i.e. reliability, monitoring, security, testability and other non-functional requirements. As you have probably noticed, it is easily a full-time job.
Please note that here I am talking about single-product breadth; for even wider cross-product, cross-department breadth there is a Technical Program Manager role (I explain the difference in this article).
Conflict of “what/why” and “how”
Another lesson I learned as a “recent” developer was about crossing the line between What+Why and How.
In well-functioning teams the PM is in charge of what the team does and why it is important (because they keep in their head business directions, client requirements, restrictions of dependencies, context, etc.). Of course, great developers can (and should) contribute to this part (by proposing features or even shaping the vision), but at the very end the PM is accountable for the results. In contrast, when it comes to the "how-s" of implementation (architecture, framework choice, data flows, stability, etc.), the development team should shine.
During the first months in my new role we had a few subsequent outages, and I felt I should do something about it and drew on the whiteboard a "solution" I thought was a smart way to go. It didn't come across very well and gave me a great lesson — a PM can (and should) prioritise particular work (in this case — reliability) and set clear goals (e.g. fallback mechanisms to guarantee SLOs), but how exactly it will be achieved (e.g. by having database replication, traffic analysis and a corresponding routing mechanism) is totally up to the development team.
With time I noticed that (surprisingly) it is tempting to immediately jump to proposing solutions rather than to take a step back, define the problem and goals, and let the team decide the rest. Even experienced PMs sometimes make this mistake, so recently-a-developer PMs should be especially cautious in similar situations.
Even though the PM should not define the solution, they can still challenge the team's propositions, because in the end one more brain just makes any idea more solid. But how deep should the PM dive? This brings me to the last point.
Luckily, the transition from an Engineer to the PM role is not only about obstacles, there is actually one “free” addition — an understanding of tech.
Since everything is a service these days, technicality allows a PM to feel the product much more deeply, notice and nurture technical insights, and challenge the solution at more advanced levels than expected from a classical business person. Obviously it should not come at the cost of the PM craft, but a combination of the high-level picture with a bit of tech understanding can go a long way towards product success.
For example, as a tech-y PM one will be able to see that three completely different UI products can be powered by the same service (which saves a lot of work) or that exposing an API to the external world involves security and reliability work (so this work should be planned). Moreover, on a team scale a Tech PM is much more effective in any development conversation as they don't need a "translator" in between. You can find some real industry tech product examples and what a Tech PM can do with them in the Udemy course.
As usual, there is a limit to the PM's ability to go into details though, as no one can properly combine both depth and breadth, so this level should fluctuate to balance the business and tech parts depending on the product scope, for example:
- It is just nice to know a bit about API terms for a PM owning the user experience for a video streaming platform, as they probably have to interact with internal systems a lot
- It will be important to know API specifics for a PM whose product is exposed via API (e.g. Facebook messaging functionality)
- And finally, a Tech PM must be an API guru if the product is an API gateway itself (e.g. Uber's)
In general, technical skills make PMs much more useful for less “intuitive” products (e.g. Google search, Spotify music streaming, EasyJet booking engine, Amazon service monitoring, Stripe API framework and so on), that’s why there is a whole Technical Product Manager role rapidly gaining popularity.
I personally have worked in three different areas (Infrastructure services, Business services and FinTech), and in each I was able to find a product where applying both technical and business skills was most effective for the project's success and, at the same time, for my own satisfaction.
That was my story of the transition from a purely technical Engineering role to the Tech Product Management space. It started with a shy attempt at PM-ing one part of one project and with time traded depth for breadth, clarity for uncertainty and "how" for "what/why". Needless to say, so far I am enjoying it a lot and now have the pleasure of leading a mixed team of PMs and Tech PMs.
I hope it was useful. In the next articles I would like to cover Tech PM hiring and walk you through a detailed example of a Tech Product use case. See you soon.
|
OPCFW_CODE
|
Found on an older blog and posted here for historical purposes.
they looked up together. the sky seemed bigger now somehow, especially the clouds. especially compared to the hill under them.
he had his arm around her and she felt good against him, but he was looking too far up, and starting to lose his balance.
the sky was full of clouds, the fluffy kind, not the wispy dream ones. the kind of cloud that looks like whatever you want it to. the kind of cloud that you could dance on if you could only jump high enough.
there was one especially, a dark brooding one, that he couldn’t help looking at. it reminded him of a person, it was a grumpy cloud.
eventually it got cold and they walked home, his goose was all aflesh and so was hers. he offered her his coat but it was thin, and didn't really protect anything from the world.
when they got home he tried to write her a poem. he wanted to tell her how beautiful it had been on the hill. he wanted to tell her about what it was like to feel her prickled flesh in his hand. he wanted to tell her that she was more beautiful than he ever could have imagined anything to be.
but his poem was all about the cloud. the big one. the grumpy one.
This is ancient, probably last updated around 2010. Please do not consider this emblematic of my work. Thank you for humoring me if you actually go through this list. At this point it is mostly just a memory-lane for me to walk down when I’m feeling nostalgic.
Good web design is a thing to be enjoyed, accessed and used. Bad web design is a crime unto humanity. I like to think that I do my part. Though many of these sites fall short (I am self-taught), I try to keep all of my work standards-compliant and as accessible and usable as possible (see the World Wide Web Consortium for more details). I believe in use over dazzle and beautiful simplicity over artistic fog.
Anyone motivated to pay me for design services should feel free to contact me as soon as possible; I am flexible concerning fees and excited to be involved in a variety of projects.
(most of these are screenshots)
Voices without Votes
projects and oddities
Alternativefreedom.org: Website for a documentary film about copyright law.
398t: Information Design, a rebuild of our atrocious class website (link) using web standards, usability and consistency as guides. (Manually coded)
Concordia Philosophy Students Association Website
Zombierotica: The Sensuality of Undeath.
ungrateful biped/simian uprising
Ungrateful Biped v1.0
Simian Uprising v2.0
Simian Uprising v4.0 (Built on Movable Type)
sites for friends
Princess Camp and Poison Frogs – Louisa (Movable Type)
BrianC: a space for my younger brother. (Blogger)
[Note: All of these are loosely protected by a Creative Commons License, so you are technically allowed to borrow and reuse any aspect you wish. However, grabbing a template that I or one of my friends is currently using is just rude. Please keep the internet fresh and make your own, it’s easy. ]
jer [at] simianuprising.com >> My current and likely future email address. The most consistent way to reach me.
simianuprising [at] gmail.com >> the @simianuprising.com account gets forwarded to Gmail because Google is the best thing that ever happened to the internet. Any mail from me will list this as the sender, but use jer [at] simianuprising.com for mailto. This is complicated.
I have a cellphone and skype. If you feel you should have either number feel free to email me.
|
OPCFW_CODE
|
Lead Data Engineer – Python / SQL Expert - Web3/Infrastructure
As the Lead Data Engineer for this infrastructure project, you will play a pivotal role in developing and maintaining distributed systems, data pipelines, and APIs for Rated. You will primarily focus on building the indexing, database, and processing systems that will enable the project to expand to multiple blockchain networks. This role is critical in shaping the project's technology infrastructure. The ideal candidate will have experience in crypto data indexing, be mission-driven and intellectually curious, thrive in early-stage environments, and be an avid open-source contributor.
The project includes an explorer hub that is a valuable resource for the blockchain community, with a specific focus on networks using the Proof of Stake (PoS) algorithm. Participants such as validator operators, relay operators, developers, researchers, and wallet providers rely on the explorer's insights to navigate current challenges and anticipate future possibilities. The documentation section provides in-depth background information on its various components, along with clear definitions of the variables and the methodologies used to derive them.
As a Lead Data Engineer your responsibilities will include:
As a Lead Data Engineer, you should bring:
· Expertise in SQL and Python: Demonstrate mastery in SQL for efficient data querying and manipulation, as well as in Python for developing robust data processing solutions and automation scripts.
· Proficiency in Real-Time Data Processing: Possess strong skills in building real-time data pipelines and indexers using technologies such as Flink, Bytewax, Apache Beam, Spark Streaming, or similar frameworks. Experience in handling streaming data efficiently is essential for this role.
· In-Depth Knowledge of Database Technologies: Have a deep understanding of columnar and/or time series databases such as TimescaleDB, ClickHouse, StarRocks, or similar solutions. Proficiency in designing, optimizing, and querying these databases is necessary to ensure the efficient storage and retrieval of blockchain data.
· Comprehensive Understanding of Software Development Lifecycles: Exhibit a thorough understanding of software development lifecycles, from conducting code reviews to implementing continuous integration and continuous deployment (CI/CD) pipelines. Experience in ensuring code quality, scalability, and maintainability throughout the development process is crucial.
· Familiarity with Infrastructure as Code (IaC) Systems: Possess some experience with Infrastructure as Code (IaC) systems such as Ansible, Terraform, or Pulumi. Understanding how to automate infrastructure provisioning and management using code-based approaches is beneficial for maintaining and scaling the project's infrastructure efficiently.
We are looking for a candidate who is passionate about the blockchain industry and eager to contribute to the growth of this infrastructure project. If you are ready to take on this exciting challenge, please reach out to Poppy Colbourne.
$160,000 / Annually
|
OPCFW_CODE
|
Process Monitor Network Error
This post tells you how to trace "Access Denied" events for file and registry activities occurring in the system, using Process Monitor. (I already have a how-to article on using Process Monitor.) One column worth calling out is Company Name: its main use is that you can quickly exclude all Microsoft events and narrow your monitoring down to everything else. Want to understand which registry keys your favorite application is actually storing its settings in? Want to figure out what files a service is touching and how often? Process Monitor will tell you.
There are five standard event classes, of which the first four are enabled by default: Registry, File, Network, Process & Threads, and Profiling. Process Monitor captures a ton of data, but it doesn't capture every single thing that happens on your PC.
How To Use Process Monitor for Troubleshooting
No problem, this is where the filtering comes in handy. A worked example: a Steam installer failed with an error message that referred to a file named SteamInstall.msi, so I searched the log file for that string. The Process Monitor trace revealed that the installer was reading the original download location from the registry, so by pointing the registry at the installer's new download location I could trick it into finding the file. (Another workaround is to run the install pointing at the MSI over HTTP instead of from the cache.) The same pattern works for permission failures: once we've identified the necessary user account from the trace, it's a simple matter of granting it NTFS write rights to the directory in question.
One commonly misread result deserves a note: what the BUFFER OVERFLOW message in the Windows API, and specifically in Process Monitor, actually means is that the client application requested data but didn't supply a large enough buffer to hold it. The application will typically retry with a larger buffer, so this is rarely the real problem.
As for the default columns: Time is fairly self-explanatory, showing the exact time that an event occurred. Once you've opened an event's properties window, switch to the Process tab for details about the process that generated it.
Process Monitor Filter Registry Changes
As an example, imagine that a process was constantly trying to query or access a file that doesn't exist, but you weren't sure why. Access denied errors are likewise not uncommon when deploying new websites, and the same tracing approach applies. From the original forum report that prompted this post: "So I believe that this 'Network error' is what triggers the crash. A few other applications on the share show the same error while others work fine."
You can also configure Process Monitor to log activity very early in the boot process, during the initialization of boot-start device drivers.
Be prepared for volume: in the amount of time it took to reproduce the error, Process Monitor had logged hundreds of thousands of events. Because I had originally launched the installer via IE directly from the Valve web site, just like I was doing now, the download location was in IE's download cache.
In this case, I can see that there's a "PATH NOT FOUND" error logged in Process Monitor. Make a note of the process name, the operation it tried to perform, and the file/directory or registry path it tried to touch. The Network event class will show the source and destination of TCP/UDP traffic, but sadly it doesn't show the payload, making it a bit less useful.
You can very quickly filter by any column using the context menu and the Include or Exclude features; if you Include an item, the list will only contain matching events. Keep in mind that some access-denied events are perfectly normal. When running under IIS, for instance, you may be impersonating a user profile, running the application pool under a non-standard account (that is, not NETWORK SERVICE), or explicitly writing to a protected location, and the trace will show exactly which identity was denied.
Switching over to the Process tab gives you lots of great information about the process that generated the event. Filtering pays off quickly here: Figure 2 illustrates the filter I used to reduce the events to just those generated by the DebugDiag process. Note that capturing with Process Monitor requires membership in the Administrators group.
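For unattended captures, Process Monitor can also be driven from the command line. Here is a minimal sketch using its documented switches; the trace file name and path are illustrative:

:: Start logging silently to a backing file (run from an elevated prompt)
procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\Temp\trace.pml

:: ...reproduce the problem, then stop the capture from a second prompt
procmon.exe /Terminate

:: Reopen the saved trace later for filtering and analysis
procmon.exe /OpenLog C:\Temp\trace.pml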
|
OPCFW_CODE
|
I'm Paul Chung 👋
I am a Computer Science undergraduate at the University of Wisconsin - Madison and a strong enthusiast for security and privacy. I am honored to be working with Professors Rahul Chatterjee and Kassem Fawaz to deliver safe and secure systems.
I am actively seeking CS Ph.D. positions starting Fall 2024.
You can reach out to me (at) pywc.dev.
Education
University of Wisconsin - Madison
- B.S. in Computer Science · 2020 - Present
- STEM High School Degree · 2017 - 2020
- UW-Madison MadS&P · Fall 2021 - Present
- UW-Madison WI-PI · Spring 2022 - Present
- Carnegie Mellon CyLab · Summer 2022
- Cybersecurity UW Club · Fall 2020 - Present
- UW-Madison CSOC · Fall 2020 - Present
- Shawshank Intel: A Heuristic-based Analysis of Censorship Mechanisms. Formulated a pipeline to analyze censorship tactics worldwide.
- Automatic Selection and Analysis of Google Data Safety Cards. Mapped and trained Privacy Policies to Data Safety Cards with DistilBERT.
- Mitigating CVE-2023-2033 at a Programming Language Level. Simulated Type Confusion to compare C++ and Rust in terms of security.
- Engineering Privacy in iOS App Groups. Implemented the app groups threat model with Xcode.
- picoCTF: Introducing Adversarial Machine Learning to CTFs. Developed 10 Regression and CNN-based challenges.
- CookieEnforcer: Automated Cookie Notice Analysis and Enforcement. Designed the front-end UX based on the user study results.
- Araña: Characterizing Password Guessing Attacks in Practice. Analyzed real-world credential stuffing attacks and the attack tools.
- Exploiting CVE-2019-0708 on Embedded Systems. Presented a threat model for compromising traditional ATM machines.
Privacy nutrition labels provide a way to understand an app's key data practices without reading long and hard-to-read privacy policies. Recently, the app distribution platforms for iOS (Apple) and Android (Google) have implemented mandates requiring app developers to fill in privacy nutrition labels highlighting practices such as data collection, data sharing, and security. These labels contain fine-grained information about an app's data practices, such as the data types collected and the purposes associated with each data type. This provides us with a unique vantage point from which we can understand apps' data practices at scale.
Remote password guessing attacks remain one of the largest sources of account compromise. Understanding and characterizing attacker strategies is critical to improving security but doing so has been challenging thus far due to the sensitivity of login services and the lack of ground truth labels for benign and malicious login requests. We perform an in-depth measurement study of guessing attacks targeting two large universities. Using a rich dataset of more than 34 million login requests to the two universities as well as thousands of compromise reports, we were able to develop a new analysis pipeline to identify 29 attack clusters—many of which involved compromises not previously known to security engineers. Our analysis provides the richest investigation to date of password guessing attacks as seen from login services. We believe our tooling will be useful in future efforts to develop real-time detection of attack campaigns, and our characterization of attack campaigns can help more broadly guide mitigation design.
This study examines the ARP and RDP BlueKeep vulnerabilities on embedded systems and identifies their possible implications by performing penetration testing on virtualized embedded machines. Furthermore, it shows that administrative privileges can easily be taken over through the RDP BlueKeep vulnerability, and that packets containing the communication information of various protocols can be severely leaked via ARP spoofing. The study concludes by presenting solutions for these vulnerabilities.
|
OPCFW_CODE
|
I'm a beginner with TestStand. I'm trying to run the entire sequence through to the very end with some dialog windows left open, without waiting for a user to click OK/QUIT on a dialog window before moving on to the next step of the test. I want the test to end without any user interaction.
I've tried a dynamically called VI, but it pauses the test (it waits until the user closes the window) and, what is more, it does not pass the parameters to the dialog box (perhaps I did something wrong with the dynamic call).
Is there any way to open the dialog VI and proceed with the test with no user interaction?
PS The dialog VI only receives the arguments and displays them on a graph.
Solved! Go to Solution.
What you are looking for is a modification of the UI to display this information.
Use custom User Interface Messages (UIMsg) for this and adapt your custom UI to handle them.
Unfortunately I don't understand what you mean, do you have any manual for that? Does that mean I don't need the VI anymore?
You can find information about creating custom UIs for TestStand applications in the TestStand documentation (installed with TestStand).
Also, TestStand 1 and 2 classes from NI cover these topics, for UI specifically the TestStand 2: Customization class.
A third source of documentation is the internet; recommended ni.com pages include the TestStand Advanced Architecture Series.
hope this helps,
Ah, I think I know what you mean. I have already built a UI (one of the ready-made ones). While creating a sequence file (*.seq), in one of the steps I want to call a VI, draw a graph, leave the window open and proceed with the following steps in the sequence (not wait for the VI to close). It ought to work independently of the UI it is used in.
Is it possible?
What you're describing is very possible. You mentioned that TestStand waits for the VI to close at the end of execution--that's expected, but there is a function you can use in LabVIEW called the Termination Monitor to monitor TestStand's status and exit the VI when TestStand is trying to end the execution. Here's a Help page on the Termination Monitor: http://zone.ni.com/reference/en-XX/help/370588D-01/lvteststand/teststand_-_get_termination_monitor_s...
I also have an example of the termination monitor in use, which I will attach to this email. The sequence file is in TestStand 2013 and the VI is in LabVIEW 2010. The seq file in this case just runs the VI so if you aren't able to open it due to your TestStand version, you can just call the VI with an Action step in a sequence to see it work.
Let us know if you have any more questions!
The easiest thing to do would be to put your VI in a subsequence and call it using the "New Thread" option of a sequence call. If you do so, you should use the Termination Monitor and Thread.ExternallySuspended APIs in your VI so that your other TestStand threads can be debugged and the execution can be terminated.
Hope this helps,
I have used the VI you attached and called it in my own TestStand sequence. However, I did not get the desired result: the window popped up, but the test waited until I pressed the stop button; the sequence execution did not proceed.
OK, I set it as a new thread and it works! One more thing though: is it possible to generate the report too? Now the entire test runs to the very end, but the report is only generated after that new thread is closed. Is there anything to be done about that?
Do you have On-The-Fly reporting turned on in your report options? I set up the sample VI to run in a new thread and set a breakpoint later in the sequence. At this point, in the directory with my sequence file I had a report_Tmp.xml file which contained the part of the report up to that point.
|
OPCFW_CODE
|
Your memory is an invaluable resource that you can work on improving every day. To keep your memory sharp, you will need to do exercises that promote memory building. This article will give you some wonderful tips that will improve your memory if used on a daily basis.
A great way for you to improve your overall memory is to make sure that you’re always focusing your attentions on whatever you’re studying at the time. The goal here is knowledge retention. A failure to focus fully on the subject at hand means the information may not be retained properly.
Try teaching the subject you’re trying to learn to another person. Research suggests that by teaching something to another person, you’ll have a much better chance of remembering what you’re teaching. So the next time you’re struggling to remember a new concept, try teaching it to a sibling or friend.
Avoid smoking cigarettes to keep your memory from being negatively affected. Studies have shown that the memory of smokers suffers more than compared to non-smokers. You probably didn’t need yet another reason to quit, but maybe this will be the one that lets you finally put down that pack.
Try to stay away from pills that promise to help improve your memory. Most of the time, these pills are not effective and could cause you physical problems. Instead, you may want to look into supplements like Niacin, Thiamine, and Vitamin B-6. They all help to improve the part of the brain that deals with memory.
Use mnemonic devices to improve your memory. A mnemonic device is any rhyme, joke, song, or phrase that triggers memory of another fact, such as the abbreviation Roy G Biv, which tells you the colors of the spectrum. The best mnemonic devices are those which use humor or positive imagery, as you will have an easier time remembering them.
Try to memorize things in sets of 7. According to studies, the human capacity for Short Term Memory (STM) is 7, plus or minus 2. This is why humans memorize things best in groups of 7, and also why, for example, local phone numbers are seven digits.
Exercise for the mind has been shown to help memory, just like exercise for the body will help muscles! If you enjoy crossword or word search puzzles, do them more often or play a trivia game with friends. Such activity will keep your brain functioning sharper and consequently improve your memory!
The next time your memory fails to help you remember where you placed something, be sure to jog your memory. Try to remember where you last placed something and how long ago it was. From now on, try to keep your items in the same place so you do not forget where they are.
If you have a list of words that you need to remember, try putting them in alphabetical order. Our society has already categorized many common items into alphabetical lists, so it is a pattern that your brain is familiar with. As a result, when you alphabetize a list of words, your brain recognizes the well-known familiar pattern and has an easier time recalling them at a later date.
Sleep is vital to maintaining mental clarity and memory. By avoiding sleep, you make your senses and mind foggier, hurting your ability to focus and piece together information. In addition, during sleep, your brain forges pathways that lead to memory. Getting good sleep (and a good amount of it) will improve your memory.
Do not cram information before an exam or a test. You will remember better if you study regularly. You can improve your memory by making it work on a regular basis, and you will remember something more easily if you go over it everyday instead of focusing on it for a few hours only.
When you need to remember new information, relate it to what you already know. If you use proper memorization techniques, you should have what you already know memorized under a certain structure. Add the new information within the same structure if you can, or add new categories to your organization.
If you are studying complicated information that you know nothing about, try to link it to a topic that you are very familiar with. You will be able to recall the unfamiliar material much better if you are able to associate it with something that is easy for you to understand.
Try learning a new language. Learning a new language can really help to keep your mind and memory in shape. It has also been proven to delay brain deterioration and dementia. Just immersing yourself in the language will do. There is no need to become a fluent speaker of it.
When trying to improve your memory, brain stimulation and using your mind is important. Schedule a weekly game night with your friends or family and make your brain exercise fun. The mental workout received from games such as chess, or Scrabble are very effective tools in boosting the power of your brain.
When learning something new, involve as many of the senses as you can. There are several different learning styles, and each uses a different sense to optimize their learning experience. Touch an object, associate it with a smell, look at it, and even have a taste that reminds you of what you want to learn. You will more effectively retain the information. Recalling the information will come easier as well.
We remember funny things. So, if something amuses you or you think it is funny, you are more likely to remember it. If you are trying to memorize something, create an amusing or absurd image out of whatever it is you are trying to memorize and it will be more likely to stay in your head.
You have been given some wonderful ways to improve your memory. Use the tips that you have learned to keep your memory sharp and working properly. Your memory is something that if you don’t actively work on, it will not be there when you really need it. Stay on your toes and follow the advice you have been given.
|
OPCFW_CODE
|
COUNTIF in Google Sheets can do more than count exact values: with the right criteria it will also count cells that match only part of a text string, including entries with typos or spelling variations, and the criterion itself can come from a cell reference instead of being hard-coded into the formula. This post walks through partial (wildcard) matching with COUNTIF and how to combine it with a cell reference.
Wildcards in COUNTIF
COUNTIF supports wildcards, which is what makes it work for cells containing only part of a word: the asterisk (*) stands for any sequence of characters and the question mark (?) for any single character, so a criterion can describe a pattern rather than an exact value.
Partial matches beyond COUNTIF
Sorting the data first is not required: COUNTIF scans the whole range you give it. If you need the position of a substring inside a text string rather than a count, the FIND and SEARCH functions return a number representing where the substring starts. The same wildcard syntax also works in VLOOKUP, so partial-match lookups are possible there too. Finally, a COUNTIF-style criterion can be used in a conditional formatting rule, so matching cells are highlighted as well as counted.
Building your own example
The building blocks are simple: pick the range to count over, decide which part of the text you care about, and put that fragment in its own cell so the formula can reference it. If a single condition is not enough, conditions can be combined; see, for instance, how to use an advanced filter with an OR condition in Google Sheets.
The COUNTIF function
The COUNTIF function in Google Sheets is used to count the number of times a value is found in a selected data range that meets the specified criteria.
Note that plain COUNTIF takes exactly one criterion; when several conditions must hold at once, COUNTIFS is the multi-criteria variant. Also remember that VLOOKUP only looks RIGHT of the lookup column, so you may need to arrange your data first before using it, and that a partial-match approach can be used to flag near-duplicate entries that an exact match would miss.
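To make this concrete, here is a sketch with assumed ranges (the data in A2:A100 and the search text in C1); the partial match is built by concatenating wildcards around the cell reference:

=COUNTIF(A2:A100, "*"&C1&"*")    counts cells containing the text in C1
=COUNTIF(A2:A100, C1&"*")    counts cells starting with the text in C1
=COUNTIFS(A2:A100, "*"&C1&"*", B2:B100, ">0")    adds a second condition via COUNTIFS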
Summarizing results across sheets
To summarize across sheet tabs, point the range at the other tab (for example 'Data'!A:A) and keep the criterion cell on the summary tab. If a lookup should return a value rather than a count, remember to arrange your data so the lookup column comes first before using VLOOKUP. The same cell-reference trick applies to charts and pivot-style summaries: drive the filter criteria from cells so that one edit updates every view.
COUNTIF with a cell reference
The criterion does not have to be typed into the formula: it can reference another cell, which makes the sheet much easier to maintain. COUNTIF with a cell reference works with other criteria too, such as dates before a given day or numbers above a threshold. If you need to count how many times each unique name appears in a column, point COUNTIF at the whole column and reference the name cell on each row. To sum rather than count, the same pattern carries over: to sum if cells contain specific text held in another cell, you can use the SUMIF function with a wildcard and concatenation.
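A minimal sketch of that SUMIF pattern, with the same assumed layout (text in A2:A100, amounts in B2:B100, search text in C1):

=SUMIF(A2:A100, "*"&C1&"*", B2:B100)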
Regular expressions and duplicates
When specific words must be matched by a regular expression rather than a simple wildcard, Google Sheets offers REGEXMATCH, which can be wrapped in SUMPRODUCT or ARRAYFORMULA to produce a count. And if you have a list of names in column A and want to count how many times each unique name appears, a COUNTIF per row against the whole column does exactly that, while conditional formatting with the same criterion can highlight the duplicates.
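A sketch of the regular-expression variant, assuming A2:A100 holds text and C1 holds a regular expression rather than plain text:

=SUMPRODUCT(--REGEXMATCH(A2:A100, C1))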
Keeping it maintainable
The MATCH function is another option when you need a position rather than a count; its text argument is simply the cell where you have something to look for. Whichever function you choose, keep the criterion in its own cell so every formula reads from one place and a single edit changes what is counted everywhere.
To count this data, you need to use a counting formula such as COUNT, COUNTIF, COUNTIFS, COUNTA etc.
Criteria may include wildcards. An array formula can also search a cell range for a text string and return the corresponding value from the same row as many times as the string is found. A conditional format rule can likewise be set to trigger when the input contains a given value.
The same result can often be achieved by subtracting one COUNTIF formula from another.
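For example, counting values in an assumed range A2:A100 that fall between 1 and 10 inclusive:

=COUNTIF(A2:A100, ">=1") - COUNTIF(A2:A100, ">10")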
That covers partial matching with a cell reference in COUNTIF; with the wildcard-and-concatenation pattern in place, the same approach extends to SUMIF, COUNTIFS and conditional formatting alike.
|
OPCFW_CODE
|
ERROR & crash after linking data dir .dogecoin to external device dir
Reproduce the issue:
When starting dogecoin-qt v1.10.0 on Ubuntu 18.04, whether you choose to store the data in another dir OR choose the /home/user/.dogecoin dir, if you previously linked it to an external device with: ln -s /media/sf_DB/Dogecoin/ .dogecoin
dogecoin-qt will open up and throw an error in either case.
Error is:
Error opening block database
Do you want to rebuild the block database now?
In the debug.log is the following
2018-10-24 18:31:50 Dogecoin version v<IP_ADDRESS>-208dc1b (2015-11-01 18:31:30 +0000)
2018-10-24 18:31:50 Using OpenSSL version OpenSSL 1.0.1l 15 Jan 2015
2018-10-24 18:31:50 Using BerkeleyDB version Berkeley DB 5.1.29: (October 25, 2011)
2018-10-24 18:31:50 Default data directory /home/user/.dogecoin
2018-10-24 18:31:50 Using data directory /home/user/.dogecoin
2018-10-24 18:31:50 Using config file /home/user/.dogecoin/dogecoin.conf
2018-10-24 18:31:50 Using at most 125 connections (1024 file descriptors available)
2018-10-24 18:31:50 Using 3 threads for script verification
2018-10-24 18:31:50 Using wallet wallet.dat
2018-10-24 18:31:50 init message: Verifying wallet...
2018-10-24 18:31:50 scheduler thread start
2018-10-24 18:31:50 CDBEnv::Open: LogDir=/home/user/.dogecoin/database ErrorFile=/home/user/.dogecoin/db.log
2018-10-24 18:31:50 Cache configuration:
2018-10-24 18:31:50 * Using 2.0MiB for block index database
2018-10-24 18:31:50 * Using 32.5MiB for chain state database
2018-10-24 18:31:50 * Using 65.5MiB for in-memory UTXO set
2018-10-24 18:31:50 init message: Loading block index...
2018-10-24 18:31:50 Opening LevelDB in /home/user/.dogecoin/blocks/index
2018-10-24 18:31:50 IO error: /home/user/.dogecoin/blocks/index: Invalid argument
2018-10-24 18:31:57 init message: Loading block index...
2018-10-24 18:31:57 Wiping LevelDB in /home/user/.dogecoin/blocks/index
2018-10-24 18:31:57 Opening LevelDB in /home/user/.dogecoin/blocks/index
2018-10-24 18:31:57 IO error: /home/user/.dogecoin/blocks/index: Invalid argument
2018-10-24 18:32:03 scheduler thread interrupt
2018-10-24 18:32:03 Shutdown: In progress...
2018-10-24 18:32:03 StopNode()
2018-10-24 18:32:04 Shutdown: done
I tested Bitcoin/Litecoin and they run perfectly whether they are links to external devices for the data dir or whether you set the data dir from the beginning to external devices.
The only data dir it doesn't crash with is one like /home/user/Documents or some other dir in $HOME.
Why is this happening? Does this need to be fixed for the next release?
This is generally a sign that something has damaged the files on disk. The usual solution (although yes, it takes a while) is to run Dogecoin with the -reindex option, which will cause it to rebuild the damaged database.
@rnicoll yes, I agree with you, but in this case it's not that. I tried -reindex on the default folder and it finished, but when I linked to it afterwards it still gives that error. So something is up with it trying to read through a link, maybe?
Maybe I should also mention that I tried this with a shared VM folder from VirtualBox, but that shouldn't make a difference, as the link is exactly as I posted it above.
I can't reproduce this on 1.14. Even on 1.10 I have hardlinked (on Windows) my datadir and it works fine. On 1.14 I have mounted my datadir into a VM and just pass -datadir=/mnt/ngfs/doge-data, and that reads the shared datadir just fine. Mind you, both times I made sure my linked directory was empty. Maybe that has something to do with it? I'll close here for now because this was inactive for a while; feel free to post again if this is still an issue.
@langerhans this problem still persists.
I tried both of your methods, and I get the same error. This is from running Dogecoin from scratch with an empty dir /media/sf_DB/Dogecoin linked to .dogecoin:
ln -s /media/sf_DB/Dogecoin/ .dogecoin
Then I ran
./dogecoin-1.10.0/bin/dogecoin-qt
then the popup appeared:
Error opening block database
Do you want to rebuild the block database now?
I tried putting strDataDir=/home/user/.dogecoin inside the config file and started it up with the same error. The error it logs before crashing is:
2018-10-24 18:31:57 Opening LevelDB in /home/user/.dogecoin/blocks/index
2018-10-24 18:31:57 IO error: /home/user/.dogecoin/blocks/index: Invalid argument
For some reason it is not seeing the data in the linked dir properly.
The funny thing is that this has happened (and I've tested it just out of curiosity) with the Navcoin client as well.
When I test the Litecoin, Vertcoin and Bitcoin clients, all work fine from linked dirs in the VM.
The VM guest OS is the same as the host OS: Ubuntu 18.04.
Please could you try and explain this strange error phenomenon.
I'm still running the same setup as I outlined in my previous comment and haven't encountered any issue. You have also not really tried what I said. strDataDir is not a valid param, it doesn't go into the config file either, and the dir you put there is the default anyway. Try this: make an empty folder on your share, then start Dogecoin Core with ./dogecoin-qt -datadir=/media/sf_DB/Dogecoin or whatever you call the directory. The symlink isn't really required; it was just convenient for me on Windows. On VMware I just use the datadir param pointing to the shared folder that is linked directly to the host's datadir.
@langerhans I tried what you said. Please check the small vid I made on a new VM.
|
GITHUB_ARCHIVE
|
Here is a topic I have been saying “I’ll get to it” for a while now…
We’ve talked a lot about UAC here, and I have really stressed the point that standard users shouldn’t be able to affect other users or the machine itself, and if you want to violate that rule then you need to do so explicitly.
The one area that I’ve received some questions on is what to do about shared user data. You should be using c:programdata (not hard coded, of course!) to put your shared user data into, and then explicitly setting the ACL. You’ll need elevated permissions to set that ACL, so you should be doing so at install time.
Now, here’s the part that makes people nuts (and rightly so!) – we then never bother to tell you how you can set that at install time! At best, we’ll give you some hints. Want to know something interesting? You’d probably be surprised at how many people don’t know how to do this themselves, but nonetheless will happily tell you that it’s what you ought to be doing.
I think that’s kind of rude, so I figured I’d actually spend some time poking around so that when I tell you to do it, I could then answer the follow-up question of, “OK then, how?”.
Of course, installers could be anything, and I don’t know all of the tools (not by a long shot). I’ve never been a packager. I had to pick something, though, so I picked what I thought was best – an MSI. If you’re writing arbitrary code (or a custom action) you can just use the Windows APIs directly to set up the security descriptor. But you actually get OK (note I didn’t say “great”, or even “good”) support from the Windows Installer framework.
But how should I build the MSI? I prefer WIX. One comment talks about using the Visual Studio Setup and Deployment Project. I recommend you do not pass go and do not collect $200 until you install WIX instead. It’s not quite as simple, but it actually exposes the power of the platform instead of simplifying it by not letting you actually use the whole thing.
So, here’s the XML I wrote for WIX to create a folder (which I have to do explicitly since I made an empty one) and set the ACL to allow the Everyone group full control of this folder:
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="1cf0f45f-3a04-4878-becc-6f6b4331bfb6" Name="InstallerDirectoryPermissions" Language="1033" Version="22.214.171.124" Manufacturer="InstallerDirectoryPermissions" UpgradeCode="f9a6c7b0-6ed9-4b46-9db1-653eeb568236">
    <Package InstallerVersion="200" Compressed="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="MySharedFolderId" Name="MySharedFolder">
        <Component Id="SharedFolderComponent" Guid="84A264EF-2BC5-41e3-8124-2CA10C2805DB">
          <!-- CreateFolder emits the empty folder; Permission becomes a LockPermissions row -->
          <CreateFolder>
            <Permission User="Everyone" GenericAll="yes" />
          </CreateFolder>
        </Component>
      </Directory>
    </Directory>
    <Feature Id="FolderPermissions" Title="InstallerDirectoryPermissions" Level="1">
      <ComponentRef Id="SharedFolderComponent" />
    </Feature>
  </Product>
</Wix>
If you compile this to create an MSI, and then edit it with Orca, you’ll see the entries in the Directory, CreateFolder, and LockPermissions tables that make all of this magic happen.
Now, remember how I said that the support was just OK? Well, have a look at what we put into the Permission entry (which ends up in the LockPermissions table) – it's just plain English, and you're the one responsible for localizing it. From the docs:
“User - The column that identifies the localized name of the user for which permissions are to be set.”
Why did I choose the Everyone group? Because it’s special cased: “The common user names ‘Everyone’ and ‘Administrators’ may be entered in English and are mapped to well-known SIDs.” (Please note: I don’t speak any other languages, so I don’t have any localized versions of Windows installed – feel free to correct me if you do and I have misinterpreted this!)
But if you just wanted to target users, or domain users, or some other group, and you support multiple languages, you’ll want to do that work inside of a custom action (“A custom action is required to enter the localized name of any other user or group.”). Unless, of course, you already have that value in a property, such as the LogonUser property.
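To give a feel for what such a custom action might do, here is a minimal sketch in C++ (not the article's code, just an illustration) that grants the built-in Users group full control of a folder by its well-known SID, which sidesteps the localized-name problem entirely. The path handling and error handling are illustrative only:

#include <windows.h>
#include <aclapi.h>

// Grants BUILTIN\Users full control of 'path' using the group's well-known
// SID, so it works regardless of the OS display language.
DWORD GrantUsersFullControl(LPWSTR path)
{
    BYTE sidBuffer[SECURITY_MAX_SID_SIZE];
    DWORD sidSize = sizeof(sidBuffer);
    if (!CreateWellKnownSid(WinBuiltinUsersSid, NULL, sidBuffer, &sidSize))
        return GetLastError();

    EXPLICIT_ACCESSW ea = {};
    ea.grfAccessPermissions = GENERIC_ALL;
    ea.grfAccessMode = GRANT_ACCESS;
    ea.grfInheritance = SUB_CONTAINERS_AND_OBJECTS_INHERIT;
    ea.Trustee.TrusteeForm = TRUSTEE_IS_SID;
    ea.Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP;
    ea.Trustee.ptstrName = (LPWSTR)sidBuffer;

    // Read the existing DACL and merge the new ACE into it.
    PACL oldDacl = NULL, newDacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;
    DWORD err = GetNamedSecurityInfoW(path, SE_FILE_OBJECT,
        DACL_SECURITY_INFORMATION, NULL, NULL, &oldDacl, NULL, &sd);
    if (err == ERROR_SUCCESS)
        err = SetEntriesInAclW(1, &ea, oldDacl, &newDacl);
    if (err == ERROR_SUCCESS)
        err = SetNamedSecurityInfoW(path, SE_FILE_OBJECT,
            DACL_SECURITY_INFORMATION, NULL, NULL, newDacl, NULL);

    if (newDacl) LocalFree(newDacl);
    if (sd) LocalFree(sd);
    return err;
}

You would call this from your custom action with the folder the installer created (a hypothetical path, e.g. GrantUsersFullControl(L"C:\\ProgramData\\MyApp")); because the trustee is a SID rather than a name, no localization work is needed.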
Hopefully this helps you sort out how to do it, instead of us just telling you to “go look it up.” Because you probably have enough to do already.
|
OPCFW_CODE
|
Projects: form idea to execution
Weconomics supports innovation projects that contribute to the organization of sustainable prosperity. An important element is understanding real Digital Transformation and the smart use of organizational technology (such as blockchain), so that we can become more productive, get out of those offices and use the surplus time for sustainability. Blockchain provides more democracy and transparency, less power for existing institutions, and secure data traffic. We can set up a community (a shared information and transaction network) for your industry or domain, based on the Weconomics organization model, infrastructure and transition program.
Do it yourself, outsourcing or a hybrid form?
Weconomics has three variants for projects:
- Do it yourself (internal label): you can choose to carry out a project yourself (under your own label), hiring the knowledge and experience of, and giving assignments to, a Weconomics partner (project leader and/or fellow).
- Hybrid (mixed label): you can also choose to start a project together from the beginning under the Weconomics mixed label (you are the client and you appoint a project leader who works together with a Weconomics partner trained as a fellow).
- Outsourcing (external label): you can also work entirely under the Weconomics label and follow the phasing mentioned below (you are the client; a Weconomics partner delivers both the project leader and the fellow).
With both options 2 and 3 you use a proven methodology, which reduces lead time and costs. In all cases you make your own arrangements about, for example, time and rates with the executor (the Weconomics partner: project leader and fellow). Take into account that costs are associated with the deployment of Weconomics partners; they are independent entrepreneurs.
Goal of Weconomics projects
The goal of Weconomics projects is to develop a community with you: a shared transaction network with many providers and many customers (an n:n community). Within this network you can profile, select, buy and pay almost completely digitally and at minimal transaction cost. Transactions are stored in a shared, distributed ledger, which makes the network transparent and allows transactions to be carried out with fewer intermediaries. Privacy and data management can also be better guaranteed, and society becomes less dependent on tech companies and central governments. The underlying infrastructure for Weconomics communities is basically the same for every community; the differences are mainly in the focus (target and target-group-oriented communication: branding). The agreements are organized in a Data Common, the differences in their own application (for example a DAPP, a decentralized application).
Do you have an idea for a project?
Are you a pioneer with an idea to organize supply and demand smarter within your industry, for example by applying Blockchain Organizing? Weconomics helps you set up and guide your project to realize that idea. With Weconomics you get access to our knowledge center, an extensive network and the right tools to expertly turn your idea into a successful community. Our partners are experienced in guiding enterprising people from idea to execution.
Weconomics has worked from the beginning on the basis of an ecosystem of stakeholders that can preferably also finance parts of the investment (in hours or money). Often we start with a project exploration to find out what the best plan to start is. In general, we distinguish the following project phases (in brackets, an estimate of the budget you should take into account):
- Start: exploration, choose project leader, fellow, quick scan, prepare kick-off (1-5 K)
- Awareness: what is our perspective: from new thinking to a pilot idea (5-10K)
- Proof of Concept: prepare use case, design sprints, technically working demo (10-20K)
- Feasibility: what problems do we solve with blockchain organizing? (20-50K)
- Open innovation: innovate, MVP, Design Thinking with stakeholders (50-100k)
- Design&architecture: Blockchain by Design, choose technology/platform (100-250K)
- Development: develop a working and user friendly application (100-500K)
- Implementation: implement a working application
- Exploitation: shared information & transaction network (Blockchain As A Service)
- End: official closing of project and transfer to community manager
Our project support usually starts with an introduction in Weconomics and Blockchain Organizing.
Overall, we distinguish the following steps in Weconomics projects:
- You have an idea and would like to hear the opinion of an expert.
- You fill in this form: for Weconomics project general (Dutch) or blockchain project (Dutch) or blockchain project (English) ideas and send it to us.
- One of our consultants evaluates your answers and will contact you.
- If we cannot add any value to your idea/project we will tell you, or advise other experts. You don’t have to pay anything.
- If we proceed and work together on your project and you want to continue, you pay 390 euro* for an analysis, advice and presentation by one of our consultants.
* If you are not a Weconomics partner yet, you become one first (€ 100, once).
- Access to knowledge center and network of professionals and potential customers
- A fundamentally new organizational model and technology for communities
- Apply Blockchain Organizing in practice
- Organizational technology for the improvement of privacy and productivity
- An open source infrastructure to share knowledge and network
- A validated transition program with appropriate tools
- Network of blockchain experts, consultants, trainers and developers
- Bring Your Own Data
- Personal Data Service on a blockchain
- Government services on a blockchain
- Expat Services on a blockchain
- Shared information- and transaction network HR industry
- Blockchain for Supply Chain Management (SCM)
- Social coins on a blockchain
If you are interested in our services, please contact us.
|
OPCFW_CODE
|
mount NAS drive at a specified mount point & make it persistent?
I've looked at a few of the entries in the "Similar Questions" field when starting this question, but they didn't help.
I need to mount a network drive to a mount point I have created in /Volumes/Synology/backup. I need it mounted at boot time - not at login. The NAS drive is a cifs filesystem on my Synology NAS. I understand from this Q&A that it's possible to mount a drive by creating an entry in /etc/fstab. I'm familiar with this on my Linux systems, and expected that some of the fstab parameters might be different in macOS.
I consulted man fstab, opened /etc/fstab using sudo vifs, and created the following entry:
//SynologyNAS-1/backups /Volumes/Synology/backup msdos rw 0 2
Note that the mount point ends in backup, and the NAS share name ends in backups; i.e. not an error.
In other words:
fs_spec = //SynologyNAS-1/backups
fs_file = /Volumes/Synology/backup
fs_vfstype = msdos (closest to cifs in man fstab)
fs_mntops = rw
fs_freq = 0
fs_passno = 2
The attempted mount was a disaster:
% sudo mount -a
mount_apfs: volume could not be mounted: Operation not permitted
mount: / failed with 77
mount_msdos: /SynologyNAS-1/backups: No such file or directory
mount: /Volumes/Synology/backup failed with 71
Does this even work in macOS? The other Q&A says, "Don't worry if it tells you it can't mount a volume." ha ha - seriously? I took that as a joke. Has Apple actually left a dysfunctional mount command in their distribution?
In any event: My actual question is, as stated in the title/subject:
How can I mount an NAS drive at a specified mount point & make it persistent?
I am by no means an expert, but here is what I did to mount my Synology shares as persistent mount points on my Mac (in my case it is using NFS, so if you are using SMB or AFP it will be a little different - but a little googling might help there):
First, edit /etc/auto_master.
It should look something like this:
#
# Automounter master map
#
+auto_master # Use directory service
#/net -hosts -nobrowse,hidefromfinder,nosuid
/home auto_home -nobrowse,hidefromfinder
/Network/Servers -fstab
/- -static
Add a line at the end that looks like this:
/- auto_nfs -nosuid
Note how I did not include nobrowse as an option. I found that if I did that I could not browse the mounted shares via the Finder. But, again, since I am not an expert I don't know if this also has other ramifications. It just seems to be working for me.
Next, create a file in /etc called auto_nfs (again, this is for an NFS mount; SMB and AFP will be similar. Just create auto_smb or auto_afp files instead, and make sure that they are referenced in your auto_master file instead of auto_nfs).
This file should look something like this:
/System/Volumes/Data/show -fstype=nfs,noowners,nolockd,resvport,hard,bg,intr,rw,tcp,nfc,rsize=8192,wsize=8192 nfs://<IP_ADDRESS>:/volume1/show
/System/Volumes/Data/assets -fstype=nfs,noowners,nolockd,resvport,hard,bg,intr,rw,tcp,nfc,rsize=8192,wsize=8192 nfs://<IP_ADDRESS>:/volume1/assets
Obviously you should add as many lines as you have shares, and you will need to create the directories yourself using the sudo command (by 'directories' I mean the entries like /System/Volumes/Data/show and /System/Volumes/Data/assets)
Now the shares (/volume1/show and /volume1/assets) will auto mount to these locations (/System/Volumes/Data/show and /System/Volumes/Data/assets). Again, this is using NFS and assumes that you have set your shares up as NFS shares on your Synology. SMB and AFP will be similar, but the syntax of the lines will differ a fair bit. I'm afraid I am not familiar with their settings, but a little bit of searching should get you to where you need to go.
Finally, it may be annoying that you cannot just mount directly to the root of your Mac's filesystem. That portion of the filesystem is read only (or protected in some other way that you cannot override). For this reason, if you want to mount something at the root of your Mac's filesystem you will need to edit another file called /etc/synthetic.conf. This file lets you specify symlinks at the root of the filesystem that point to whatever location you want them to. In my case, my synthetic.conf looks like this:
show System/Volumes/Data/show
assets System/Volumes/Data/assets
At boot time, the contents of this file are read and the symlinks are created. So in my case, I have two symlinks at / called show and assets, and they are set to point to the mounted directories I specified earlier.
Example:
-> cd /
-> ls -l
lrwxr-xr-x 1 root wheel 26 Oct 3 23:32 assets -> System/Volumes/Data/assets
lrwxr-xr-x 1 root wheel 24 Oct 3 23:32 show -> System/Volumes/Data/show
Make sure you change permissions on each of the files you created (auto_master, auto_nfs, synthetic.conf) using chmod 644.
Also, Apple has a tendency to overwrite auto_master every time they update your system because of course they do. So it makes sense to create a copy and store it alongside the auto_master file for when you inevitably cannot access your data at a critical moment because Apple decided to "fix" your system.
Hope this helps.
Thanks for this! I had given up on finding an answer, and finally worked out a solution myself - very similar to yours I think, as it uses Apple's undocumented automount feature. I've been meaning to post my answer for several weeks now, but got lazy. I'm going to accept your answer, and will eventually get around to posting mine. I'd appreciate any feedback you care to give.
Here's the answer to overwriting the auto_master file after every update.
For a full explanation see Automounting NFS share in OS X into /Volumes.
Issue: Lines added to the auto_master file are deleted after every macOS update. The entry is overwritten by macOS security updates.
Environment: macOS 11.6 and newer.
Causes: File overwritten by macOS security updates.
Solution: To prevent the operating system from overwriting the /etc/auto_master configuration file in the future, make the file immutable: sudo chflags schg /etc/auto_master
Note: To revert changes to allow editing of the file, perform the following: sudo chflags noschg /etc/auto_master
Finally getting around to posting the solution that I worked out:
As stated in my question, I needed a persistent mount for a share on my Synology NAS (model DS1621+, DSM 7.1.1-42962). One thing I did not mention in my question is that I need a solution that works for systems that employ the read-only file system (Catalina & beyond), and for those that don't (Mojave & prior).
As it turns out, this is a fairly straightforward configuration once you understand what is going on in Apple's AutoFS... but that's made far more difficult by Apple's discontinuation of the documentation! Not to get too far off on a tangent, but I simply don't understand why Apple has removed the documentation for AutoFS, and why they seem to have abandoned development of it. If anyone has any background on this, I'd love to hear from you. In the meantime, I've managed to locate a copy of the AutoFS documentation that may be accessed here.
I. autofs for read-only file systems (Catalina & later)
Without further ado, here are the required changes for my Catalina system. Please note that the following operations require root privileges:
1. Modify the file /etc/auto_master to add one line as shown below:
#
# Automounter master map
#
+auto_master # Use directory service
#/net -hosts -nobrowse,hidefromfinder,nosuid
/home auto_home -nobrowse,hidefromfinder
/Network/Servers -fstab
/- -static
# above is default; add this one line:
/System/Volumes/Data/mnt/synology auto_synology
You may choose an alternative name for synology, and auto_synology is a file containing details for the auto mount.
2. Create the file /etc/auto_synology with the following content:
syn_backup -fstype=smbfs ://username:password@SynologyNAS-1/backups
syn_music -fstype=smbfs ://username:password@SynologyNAS-1/music
syn_pictures -fstype=smbfs ://username:password@SynologyNAS-1/pictures
Note the pattern: one line for each share you wish to automount; I used 3 shares in this example.
The first column is the share's name under the mount point (i.e. /System/Volumes/Data/mnt/synology from the /etc/auto_master entry)
The 2nd column specifies the network file system format as defined for the share on the Synology server; in this case I used SMB
The 3rd column gives the userid & password defined for a valid user account on the Synology NAS, followed by the network name (SynologyNAS-1), and the proper share name as defined on the server.
3. Run the "magic command" to immediately apply all changes :)
% sudo automount -vc
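Once applied, simply accessing one of the mapped paths triggers the mount, which makes for a quick sanity check (path from the example above):
% ls /System/Volumes/Data/mnt/synology/syn_backup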
II. autofs for read-write file systems (Mojave & earlier)
The only change required is in the /etc/auto_master file. The single added line should reflect the more straightforward file system hierarchy:
# to the default auto_master file, add this one line:
/Volumes/mnt/synology auto_synology
The /etc/auto_synology file is identical, and the same "magic command" immediately applies all changes.
Other Ideas:
Nothing exceptional here; I only wanted to make the point that creating symbolic links to the mount points can come in handy. As I use the AutoFS feature mostly to simplify routine access to network shares, I've found it useful to create symlinks that are convenient in scripts and when working from the command line. For example, I have created a symlink to the directory where my rsync backups are stored. The mount point is /System/Volumes/Data/mnt/synology/syn_bkup and the directory is rsync-myMac. To easily access that location, I've created the following symlink:
% ln -s /System/Volumes/Data/mnt/synology/syn_bkup/rsync-myMac ~/rsyn_bkup
|
STACK_EXCHANGE
|
After installing the current Foreman version on a CentOS-based linux machine, I would like to install an Ubuntu host. There is no pre-defined OS for Ubuntu. Installation media is there. But how do I actually install an Ubuntu OS on a client?
Foreman and Proxy versions:
Foreman 2.5.2, Katello 4.1.2
Other relevant data:
“Ubuntu mirror” exists in Installation Media. So I created a new operating system. Ubuntu with Major version 20. Minor version 04. Family Debian. I picked a partition table and added “ubuntu mirror” for the install media. The templates tab only shows “Host initial configuration template”.
When I try to create a host, I set the operating system to Ubuntu 20.04, using ubuntu mirror, the Preseed default LVM partition table (though Kickstart default is also an option). Grub2 UEFI as the PXE loader.
When I try to save the host, I get a “No PXEGrub2 templates were found for this host, make sure you define at least one in your Ubuntu 20.04 settings or change PXE loader”. I’m not sure what that means. In the Ubuntu operating system page, I don’t see any pxe template options in the templates tab. There are lots of other options in CentOS, am I doing something wrong that they don’t show up for this new Operating System?
Have a look at the deploying an internal application guide from the orcharhino documentation. It shows how to deploy hosts running Ubuntu 20.04 with Prosody (an XMPP server as an example application). You’ll see which templates are necessary and where to find them on your Foreman instance.
Also, you probably want to/have to set the major version to 20.04 and omit the minor version for the OS entry.
Kickstart templates only work for EL systems (CentOS, Oracle Linux, RHEL and more); Ubuntu and Debian use Preseed templates.
Thank you. That did help. The solution was to go to the preseed provision templates and assign them to the OS. Then go back to the OS (this was not covered in the link you provided) and select those templates. I am now able to pxe boot and install a basic ubuntu OS. Now I just need to learn about preseed to make any necessary modifications.
We have already been using Foreman for a while to discover/provision hosts with CentOS 7 and Rocky.
Now I'm also busy applying the necessary configurations to provision hosts with Ubuntu 20.04.
I followed the deploying an internal application guide linked by @maximilian.
Note: The legacy images are deprecated by Ubuntu, but as the live image does not have the necessary netboot files, I used the legacy image for version 20.04.1 as described in the documentation.
But when I provision a discovered host with Ubuntu 20.04 it goes well until it starts the "Install the system" step, where it fails with:
In the syslog file I can see the following error:
Even when I try it again selecting "Type of installation" => "normal", it keeps complaining about the live-installer in the syslog file.
Here is the mirror config in the preseed provisioning template:
d-i mirror/country string manual
d-i mirror/http/hostname string foreman-server:80
d-i mirror/http/directory string /pub/installation_media/ubuntu_20_04/
d-i mirror/http/proxy string
d-i mirror/codename string focal
d-i mirror/suite string focal
d-i mirror/udeb/suite string focal
I have no idea what is wrong in our case, so help would be welcome.
Thanks in advance
|
OPCFW_CODE
|
is there a way to have mysql update records at certain times automatically within phpmyadmin
I am not sure how to word this but here is my question:
is there a way to have mysql update records at certain times automatically within phpmyadmin?
Use the MySQL event scheduler to do the update.
Your problem can be solved by a cron job.
See this link:
http://stackoverflow.com/questions/14805742/cronjob-or-mysql-event
The MySQL Event Scheduler is a very good option if you want to do it all in MySQL using SQL.
There is a good tutorial on SitePoint for this:
http://www.sitepoint.com/how-to-create-mysql-events/
If you want to insert, update, or delete some data at a particular timestamp, or on a schedule, you can easily use this.
Wow! Thanks for the help everyone! The sitepoint link is especially helpful.
I have another question, if anyone could answer it.
Using the tutorial from the SitePoint link that developerCK posted, where do I plop this? I tried it in the appropriate database's SQL area, but no process becomes set up.
DELIMITER $$
CREATE
EVENT dude
ON SCHEDULE EVERY 1 HOUR STARTS '2013-10-26 04:37:00'
DO BEGIN
UPDATE whoa SET blah=1;
END;
DELIMITER ;
Check whether you have the event scheduler variable turned on; it is covered in the article.
phpMyAdmin is just a client for MySQL, but there are several solutions you could use.
Use Unix crontab (also known as a “cron job”) or the Windows Task Scheduler
Example for cron
0 */2 * * * mysql -h localhost -u user -ppassword -P 3306 -e "UPDATE \`table\` SET \`field\`='value' WHERE \`id\`=100500;"
Use MySQL scheduler.
Example
SET GLOBAL event_scheduler = 1;
CREATE EVENT newEvent
ON SCHEDULE EVERY 2 HOUR
DO
UPDATE `table` SET `field`='value' WHERE `id`=100500 ;
Never heard about the MySQL scheduler... Thanks! :)
Hmmm... I am clearly doing something wrong, because when I do this one it says there is a syntax error: SET GLOBAL event_scheduler = 1
CREATE EVENT myeventname
ON SCHEDULE EVERY 1 HOUR
DO
UPDATE `yatta` SET `blah`='1';
I even added the delimiter and end. It executed, but no process is added: DELIMITER $$
SET GLOBAL event_scheduler = 1
CREATE EVENT chibi_shops
ON SCHEDULE EVERY 1 HOUR
DO
UPDATE shops SET shopStockCurrent='shopStockMax';
END;
DELIMITER ;
@micker, add semicolon after first statement.
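For reference, a corrected sketch of the event above (table and column names taken from the posts; dropping the quotes around shopStockMax, so it is read as a column rather than a string, is an assumption about the intent):
SET GLOBAL event_scheduler = 1;   -- note the terminating semicolon
DELIMITER $$
CREATE EVENT chibi_shops
ON SCHEDULE EVERY 1 HOUR
DO BEGIN
  -- assumes the goal is to reset current stock to the maximum
  UPDATE shops SET shopStockCurrent = shopStockMax;
END$$
DELIMITER ;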
|
STACK_EXCHANGE
|
Decide on data type for tags: hstore, json, jsonb
hstore vs. json vs. jsonb
I'm currently looking for an improvement for variable data structures (key-value-pairs).
Research:
https://dba.stackexchange.com/questions/115825/jsonb-with-indexing-vs-hstore
https://wiki.postgresql.org/images/b/b4/Pg-as-nosql-pgday-fosdem-2013.pdf
https://www.ongres.com/blog/a_generalized_unstructured_data_type_for_postgres/
https://www.postgresql.org/docs/9.3/functions-json.html
https://www.2ndquadrant.com/en/blog/postgresql-anti-patterns-unnecessary-jsonhstore-dynamic-columns/
https://dzone.com/articles/using-jsonb-in-postgresql-how-to-effectively-store
https://scalegrid.io/blog/using-jsonb-in-postgresql-how-to-effectively-store-index-json-data-in-postgresql/
https://medium.com/hackernoon/how-to-query-jsonb-beginner-sheet-cheat-4da3aa5082a3
http://www.silota.com/docs/recipes/sql-postgres-json-data-types.html
https://dba.stackexchange.com/questions/3492/optimizing-query-using-view-on-eav-structure
https://dba.stackexchange.com/questions/105533/postgres-9-4-jsonb-instead-of-eav
https://dba.stackexchange.com/questions/95758/postgresql-update-and-delete-property-from-jsonb-column
Criterial:
Performance (Speed)
Performance (Storage size)
Usability (Syntax)
Examples:
OSM uses hstore.
I use this issue as documentation. Feel free to comment and improve!
We chose JSON because it seems to be the best option for our use case and offers great support. I already updated the data packet files.
That doesn't really answer my question about json vs. jsonb.
see: http://www.silota.com/docs/recipes/sql-postgres-json-data-types.html
Difference between JSON and JSONB
The JSON data type is basically a blob that stores JSON data in raw format, preserving even insignificant things such as whitespace, the order of keys in objects, or even duplicate keys in objects. It offers limited querying capabilities, and it's slow because it needs to load and parse the entire JSON blob each time.
JSONB on the other hand stores JSON data in a custom format that is optimized for querying and will not reparse the JSON blob each time.
If you know beforehand that you will not be performing JSON querying operations, then use the JSON data type. For all other cases, use JSONB.
The following example demonstrates the difference:
select '{"user_id":1, "paying":true}'::json, '{"user_id":1, "paying":true}'::jsonb;
json | jsonb
--------------------------------+--------------------------------
{"user_id":1, "paying":true} | {"paying": true, "user_id": 1}
(1 row)
(the whitespace and the order of the keys are preserved in the JSOB column.)
I see one typo: (the whitespace and the order of the keys are preserved in the JSOB column.) Should read (the whitespace and the order of the keys are preserved in the JSON column.). JSONB does not retain whitespace and order of keys as it stored parsed json.
Thanks for the info @Ludee!
I think for our use cases, querying the JSONs is not crucial (perhaps this never occurs).
Therefore, I tend to JSON.
Additionally, I don't know if OEDialect supports JSONB. If not, maybe this should be added as it could be useful for metadata querying!
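Should querying become relevant after all, here is a minimal sketch of jsonb querying and indexing in plain PostgreSQL (the table and column names are hypothetical, not from our schema):
-- GIN index to accelerate containment queries on a jsonb column
CREATE INDEX idx_packets_meta ON data_packets USING GIN (meta);
-- find rows whose metadata contains the given key/value pair
SELECT * FROM data_packets WHERE meta @> '{"paying": true}';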
|
GITHUB_ARCHIVE
|
A few handy Git Tips
Because I use git on a daily basis, I’ve found that over the course of several years, I’ve cobbled together a pretty unique group of settings.
I’d like to highlight some of the interesting git configuration tricks that I’ve found over the years.
As everyone does, I frequently fat-finger a few git commands.
Git has a neat feature built in that it will autocorrect your typos!
To set it up, run:
git config --global help.autocorrect 10
This tells Git to wait a brief period before running the 'best guess' for your command; the value is in tenths of a second, so 10 gives you a one-second window to cancel it, just in case.
As many a developer will attest, the only thing worse than a file full of mixed tabs and spaces is a file full of trailing whitespace.
Git has a really handy feature to let you know if you’re about to commit the unforgivable act of committing whitespace.
git config --global apply.whitespace warn
Running this command will ensure that any of your repositories will tell you if you have trailing whitespace, so that you can fix it before sharing it with the world.
Beautiful Git Log
One of the frequent steps in my workflow is to inspect the git log to see how the project is progressing, as well as identify any possible issues with merging branches.
If you just try to run
git log by itself, you get a LOT of information.
Sometimes it’s too much.
Because of this, I have two
git log commands that I frequently use.
There is one for projects that I maintain by myself, and one for ‘shared’ projects; and I’ll explain the difference in just a moment.
The first git log command is actually an alias for the command:
git log --graph --decorate --pretty=oneline --abbrev-commit
If you run this command, you’ll notice that it cuts out much of the info, and only shows the ‘necessary’ info.
The second is almost exactly the same as the first, but with a few tidbits of extra info.
git log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit
Each line of the log graph will now include a date of the commit, relative to today, as well as the name of the person who made the commit. This makes it super easy to scan through a long log to see who did what and when.
Obviously, you don’t want to type this all the time, so you can run the command:
git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
Then you simply type
git lg to get the condensed information.
Another cool trick is to use auto-rebase.
Normally, if you have a branch with some commits that aren’t pushed, and someone else also commits and pushes on that branch, when you pull, git will create a commit merging your commits to the commits upstream.
I think that sentence makes sense.
Anyway, because this 'merge' commit is meaningless, I prefer to rebase my recent changes on top of theirs by setting up 'auto rebase on pull' through the config setting:
git config --global branch.autosetuprebase always
With this setting, git will configure each newly created branch to reapply your commits on top of the current version of the upstream branch every time you pull.
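One caveat worth knowing: branch.autosetuprebase only takes effect for branches created after you set it. For branches that already exist, the equivalent settings are (the branch name 'main' here is just an example):
# enable rebase-on-pull for one existing branch
git config branch.main.rebase true
# or make every pull rebase by default, everywhere
git config --global pull.rebase true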
|
OPCFW_CODE
|
Convert Date rows to columns using Redshift
I have a SQL table that contains EMPID, date, revenue1, revenue2 and revenue3. It looks like the below table:
EMPID | Date       | Revenue1 | Revenue2 | Revenue3
USA   | 2015-08-01 | 1000     | 1000     | 2000
USA   | 2015-09-01 | 2000     | 1000     | 1000
USA   | 2015-09-01 | 2000     | 1000     | 1000
USA   | 2015-07-01 | 3000     | 1000     | 1000
CHINA | 2015-07-01 | 4000     | 1000     | 3000
INDIA | 2015-07-01 | 5000     | 1000     | 1000
INDIA | 2015-08-01 | 5000     | 1000     | 2000
CHINA | 2015-09-01 | 1000     | 1000     | 1000
I want the below results :
EMPID | CATEGORY | 2015-09-01 | 2015-08-01 | 2015-07-01
USA   | REVENUE1 | 4000       | 1000       | 3000
USA   | REVENUE2 | 2000       | 1000       | 1000
USA   | REVENUE3 | 2000       | 2000       | 1000
INDIA | REVENUE1 | ...        | ...        | ...
INDIA | REVENUE2 | ...        | ...        | ...
INDIA | REVENUE3 | ...        | ...        | ...
CHINA | REVENUE1 | ...        | ...        | ...
the date mentioned is the first day of the month.
Also, I don't mind hardcoding the dates. Can someone help me with the solution? I don't have any code for now.
I was under the assumption that the pivot function cannot be used in Redshift since my previous code hadn't worked. Below is the code I tried: I unpivot first using a CTE, and later within the CTE I tried to pivot, which is not working. Can anyone please look at the code and suggest if something is missing?
select cte as (select * from (select empid, revenue1, revenue2,revenue3, date from table) UNPIVOT (totalrevenue for Revenues IN (revenue1, revenue2, revenue3)))
select * from (select empid, totalrevenue, revenues , date from cte) PIVOT (sum(totalrevenue) FOR date in ('2022-06-01', '2022-07-01)'))
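For reference, a hedged sketch that avoids PIVOT entirely is conditional aggregation, which runs on any Redshift version (the table name revenue_table is an assumption; extend the UNION ALL pattern for REVENUE3 and substitute your real dates):
SELECT empid,
       'REVENUE1' AS category,
       SUM(CASE WHEN date = '2015-09-01' THEN revenue1 END) AS "2015-09-01",
       SUM(CASE WHEN date = '2015-08-01' THEN revenue1 END) AS "2015-08-01",
       SUM(CASE WHEN date = '2015-07-01' THEN revenue1 END) AS "2015-07-01"
FROM revenue_table
GROUP BY empid
UNION ALL
SELECT empid,
       'REVENUE2' AS category,
       SUM(CASE WHEN date = '2015-09-01' THEN revenue2 END),
       SUM(CASE WHEN date = '2015-08-01' THEN revenue2 END),
       SUM(CASE WHEN date = '2015-07-01' THEN revenue2 END)
FROM revenue_table
GROUP BY empid;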
Hi @Anushika, why can't you use PIVOT? Are the dates always the first day of every month?
@pfigueredo, DataGrip does not support PIVOT.
@Anushika: DataGrip doesn't support anything in SQL - it's the database server you connect to that does this.
I have connected to redshift. @a_horse_with_no_name
I am currently connected to redshift and the code is not working since it does not recognize cross join lateral @a_horse_with_no_name
@Anushika: well you tagged your question with PostgreSQL which is something different than Amazon Redshift - my answer was for Postgres, not for Redshift
sorry for the confusion @a_horse_with_no_name
|
STACK_EXCHANGE
|
Discussion in 'Amateur Radio Software' started by KC9SWV, Jun 3, 2016.
Anyone get HRD to work with RS-BA1 at the same time over Ethernet?
I know this is a very old thread but, although I've had the software for quite some time, I am just now attempting to remote my 7300 and I am getting exactly the same error. I am running the latest Win10 Pro OS version on both my base station PC/server and my remote PC. I expect I am misinterpreting something and there is a simple configuration that I have wrong.
I have the radio connected via a USB cable to the base station PC, and RS-BA1 is installed on both PCs. I thought I had everything configured as directed: the base station PC connects to the radio, I have no errors indicated, and I can control the radio via the base station PC. However, when I attempt to control the radio from the remote PC (on the same Wi-Fi home network), I get that "Cannot open COM Port" error.
When I run the Remote Utility on the remote PC, and attempt to connect, it connects but, when I run the Remote Control on that remote PC, the Cannot open COM port error occurs.
Any suggestions would be much appreciated. Thank you
On the Remote Control window, go to 'Option', and 'Connect Settings' and verify your com port, baud rate, and CI-V address.
Mine are set like this...
Com Port = 4 (yours may vary)
Baud rate = 115200 (again, yours may vary)
CI-V Address (radio) = 94 (Most 7300's should have this setting)
Note: you should be able to see these first three on the Remote Utility window
CI-V Address (RS-BA1) = E0
I think one of the problems with this software is that while almost everyone eventually gets it working, nobody ever documents all the tinkering they did when they were finally able to make it work, so nobody is ever sure exactly what they did that resolved it.
One other thing that took me a while was to make sure that the radio drivers are loaded on the server pc before a connection attempt. This normally means that you have to have power applied to the radio (says captain obvious), but also the radio had to be turned on at least once before the connection attempt. At least, that is how mine acts.
73 & GL. You'll get it.
John, thanks very much for the reply. My settings are similar to yours. I wholeheartedly agree with your comment about lack of documenting all the tinkering required to get this to work. I am trying to document whatever I am doing but I still have no solution. To wit:
1) My initial problem was that I was always getting a Cannot Open COM Port error when I launched the Remote Control and clicked the Connect (Power On) button icon.
2) I resolved that when I realized that the Remote Utility on either the server PC or the remote PC can connect to the radio, but not both at the same time. After that I was able to remotely control the radio from either the server PC or the remote PC, but I had no audio on either PC (N.B., if I adjust the audio volume through either PC, the audio output on the radio does change accordingly).
3) Just before receiving your reply, during my scrambling to find an answer, I noted that on the remote PC, under the Windows Settings Installed Apps list, there was a Windows USB driver from Silicon Labs - but that did not show up in the server PC, even though I could see all the appropriate 'devices' under Device Manager (Audio I/O, Multiport Serial Adapters, Ports, Sound et al Controllers) .
4) Thinking that perhaps the Silicon Labs driver was improperly installed on the server PC, because it was not showing on the installed apps list, under Device Manager/Ports I uninstalled the driver, then unplugged the USB cable and reinstalled the driver.
5) Now the driver is showing in the installed apps list i.e., Windows Driver Package - Silicon Laboratories Inc. (silabser) Ports...
6) It did however change the virtual COM Port number from 4 to 3; so, on the remote PC, I changed the virtual COM Port to 3 to match the one on the server PC.
Good, I thought, until I launched the Remote Control app, first on the server PC, and noted that I still had control but, as before, there was no audio. If I cannot get audio when controlling via the server PC, then I'm not going to get audio on the remote PC - and that remains the case.
As it appears to be strictly an audio issue, may I ask what your audio device settings are on the server and remote PCs?
I appreciate your efforts to assist.
On the remote utility on both the server pc and the remote pc, I have 'Default Device' listed for both speakers and mic. I believe the audio is passed between the two using the virtual audio ICOM_VAUDIO-1 (all at 48Khz). So, if I connect from the server pc, or if I connect from the remote, either way I am hearing audio on the default audio device (and microphone). On the remote pc, for the network settings in the properties, I clicked on 'Recommended' and selected LAN as both machines are on the same lan. When I go remote, I reset that to 'Internet' and vpn into my home network.
At one point, I did have to go into the sound control panel and adjust levels on the ICOM_VAUDIO, and I also had to adjust audio settings in the 7300 for acc/usb af output level, and also the usb mod level.
Again, I don't have it all documented
Edit: fyi, I am using Windows 7 for both server pc and remote (laptop)
Thanks for the follow-up John.
I am running the same version of Win10 Pro on both computers. As you can see in the JPG file, I believe I have those settings configured the same as yours, and I have cranked the PC audio output all the way to 100%, for now at least.
However, I do notice an anomaly, namely that the audio output devices bear different 'names' even though I have selected the same audio devices within the Remote Utility app on both PCs. That tells me something may be different between the installation on the server PC and on the remote PC, but which (if either) is correct?
Could it be that a different version of the driver was somehow installed on each PC? I have no idea how that could happen, but...
This is truly baffling!
Thanks again... Paul
One more thing I notice, which may or may not mean anything, but do you understand the ICOM_VAUDIO-1 (I=2 O=2) on the server PC and ICOM_VAUDIO-1 (I=0 O=0) on the remote PC? I am presuming the I and O refer to Input and Output but, even if I am correct, why the difference (2 versus 0)?
Perhaps incorrectly, but I keep focussing on the fact that even the server PC has no audio.
So just for kicks yesterday, I had my 7300 being remotely controlled by my laptop for most of the day. At the end of the day, I shut down the software, but left the two computers running overnight. Today, I could connect from the server pc, but not from the remote, I was getting the virtual serial port error. I shut down the software on both, rebooted the server pc, and then it came right up and worked. I made no config changes. Like I said, both are Windows 7. But I think that shows that there is something flaky in the remote utility on the server pc, at least on Windows 7.
Edit: As to the I and O settings on icom_vaudio, I am not sure, but I think they relate to local resources. On my server pc, right now it says I=3, O=4, and on my remote pc is says I=0, O=1.
It is probably documented somewhere, but off the top of my head, I don't know.
FYI: Page 7-6 of the manual has a whole page on received audio.
That's not all that unusual in the Windows world! I believe PCs need to be rebooted from time to time to clean up left-over clutter that will sooner or later impact apps.
In my case, however, because I have yet to get it working fully and, therefore, I am not actually using it, my server PC gets shut down in between each of my scrambling efforts to resolve the problem - i.e., a number of times per day.
Regarding the I and O settings, you have introduced another possible issue. If they relate to local resources, it's notable that your I and O 'resources' differ on each PC. I wonder if my problem could be that I and O are both the same on each PC (both are 2 on the server, and both are 0 on the remote). Would it not make sense that I and O should differ?
I cannot tell you how much I appreciate your comments, as I have been struggling with this for weeks and I live in a rural area where there are a limited number of other 'hams', and only 1 other has the 7300 and he has no interest in remoting it.
Thanks again... Paul
I am not sure where I found this, but maybe it will help....
|
OPCFW_CODE
|
The ports collection has some serious issues
vlad-fbsd at acheronmedia.com
Thu Dec 8 10:06:02 UTC 2016
On 2016-12-08 06:16, Daniil Berendeev wrote:
> * Why pkg is still nice?
> It is able to update packages with broken ABI, it's fast and easy to
> use. Some packages/ports don't have options and can be used via pkg by
> ports user.
Yes, and I'll echo what Matt said previously, and suggest Poudriere.
I've been using it exclusively for over a year and I've observed it
cleanly rebuild ports others have had a hell of a time with using
portmaster, e.g. Perl upgrades.
Working with pkgs you pre-built yourself is the most atomic and flexible
way to have and use the ports ecosystem.
> 2) pkg and ports are not in sync.
> pkg appeals to build ports that are from 2xxxQx branches. The promoted
> tool for syncing ports (portsnap) always fetches from head. And there
> no way to choose. That gives us the next problem:
There is a way to choose. You can change your pkg repo to "latest" via
/etc/pkg/FreeBSD.conf, or even better, override it in
/usr/local/etc/pkg/repos/FreeBSD.conf (you will need to create the last
two dirs). See pkg.conf(5) for more info.
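A minimal sketch of that override file (the URL pattern follows the documented repository naming; double-check it against pkg.conf(5) for your release):
# /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}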
And that's if you use the official FreeBSD pkg builds. Another good
reason to use Poudriere and build pkgs yourself.
> 6) broken ports are pushed to head
> Why do we have such a situation, when head contains a handful of broken
> ports? Why commit a port that won't build? It's sick.
Well, that's normal? A little fact often neglected is that HEAD is the
equivalent of the ultra-unstable, we-just-committed-here,
use-at-your-own-risk repos of some Linux distros.
I mean, they are the FIRST landing point of a change. And the only QA we
ask for that change is a confirmation that poudriere and portlint have
been run, the rest is at liberty of committers how far they'll go with
own testing before they commit. For many, only builds against -CURRENT
or latest -RELEASE are done because it's very time consuming to test
against all supported FreeBSD versions, and not just versions but
various permutations like different pythons etc... When it comes to some
defaults like OpenSSL (or any kind of dependency on it), all of those
tests are required.
The problem is, FreeBSD doesn't have a STABLE repo that would receive
gradual updates from HEAD as they prove themselves stable. QUARTERLY !=
STABLE, it's just a snapshot of whatever state HEAD is in, with a loose
promise the ports in it will receive "security and bugfixes only" but
that's a separate set of issues.
There are some solutions and we don't have to NIH or reinvent the wheel.
Just looking at what other open source projects do with, say, GitHub and
continuous integration testing, every pull request gets an automated
test. Why don't we do that? Is it difficult to implement it?
I am also convinced that such testing can be automated and a true
"STABLE" repo can be made instead of manual QUARTERLY that breaks
> 8) ports with vulnerabilities.
> They exist in the tree and on build attempt they shout that they won't
> build without DISABLE_VULNERABILITIES=yes. The catch is that there is
> always a bunch of ports with vulnerabilities. So if you are doing a
That's just the nature of it, and the consequence of VuXML being a
separate port that often gets updated first, as it's better to announce
the vuln before it is fixed. And fixing is bound to maintainer
timeouts, poor issue tracking via Bugzilla, etc...
timeouts, poor issue tracking via Bugzilla, etc...
> I hope that my mail will produce a productive discussion that will lead
> to some good decisions for fixing these problems.
Probably not. I've already posted about issues with head/quarterly,
hoping for a discussion; it never happened. Others have complained about
the same problem, but no constructive discussion ensued. Is my
frustration coming through yet? :)
|
OPCFW_CODE
|
The group of logical operators NOT, AND, OR, and XOR is required for expressions of the type Boolean on the one hand, and for numbers of the type Short, Integer, or Long (Long Integer) on the other. The logical operators form the basis for efficient comparisons and bit manipulation functions. Special cases apply to character strings and objects in logical negation with the NOT operator. Setting parentheses affects the ranking of operations if several logical operators are used.
|NOT expression0||Calculates the logical NOT of the expression (from True to False and vice versa) or a number based on the binary representation of this number.|
|Expression1 AND Expression2||Calculates the logical AND of two Boolean expressions or the numeric AND of two integer numbers based on the binary representation of these numbers.|
|Expression1 OR Expression2||Calculates the logical OR of two Boolean type expressions or the numerical OR of two integer numbers based on the binary representation of these numbers.|
|Expression1 XOR Expression2||Calculates the logical XOR of two Boolean expressions or the numerical XOR of two integer numbers based on the binary representation of these numbers.|
Table 8.4.1: Logical operators for Boolean or numeric expressions
The numbers are of the type Short, Integer, or Long (Long Integer). The calculation with the logical operator used is based on the bitwise logical operation on the binary representation of the numbers, following the so-called truth tables.
The NOT operator inverts each bit in the binary representation of the number, as shown in the following table and examples:
|Bit1|Bit2|NOT Bit1|Bit1 AND Bit2|Bit1 OR Bit2|Bit1 XOR Bit2|
|0|0|1|0|0|0|
|0|1|1|0|1|1|
|1|0|0|0|1|1|
|1|1|0|1|1|0|
Table 8.4.2: Logical operators and truth table
Print 13;; Bin(13, 8);; Bin(NOT 13, 8);; NOT 13
  13  00001101  11110010  -14
Print 5;; 12;; Bin(5, 8);; Bin(12, 8);; Bin(5 AND 12, 8);; 5 AND 12
  5  12  00000101  00001100  00000100  4
Print 5;; 12;; Bin(5, 8);; Bin(12, 8);; Bin(5 OR 12, 8);; 5 OR 12
  5  12  00000101  00001100  00001101  13
Print 5;; 12;; Bin(5, 8);; Bin(12, 8);; Bin(5 XOR 12, 8);; 5 XOR 12
  5  12  00000101  00001100  00001001  9
Chapter 9.9 Bit manipulation introduces functions that consistently work with the presented logical operators.
If the operand after the unary logical operator NOT is a string or an object, the following applies to NOT expression: True is returned if the expression is Null, and False if the expression is not Null.
Sub Picture_Write(pPicture As Picture)
  If NOT pPicture Then pPicture = HorizontalFader.DefaultPicture
  MyOriginalPicture = pPicture
  If MyOriginalPicture Then GetPictures
  Me.Draw
End
|
OPCFW_CODE
|
Start/Stop VM’s during off-hours. [Preview] Azure Automation.
>>Making life easier with Azure Automation!!⌗
>>What is it?⌗
The Start/Stop VMs during off-hours solution starts and stops your Azure virtual machines on user-defined schedules, provides insights through Azure Log Analytics, and sends optional emails by using SendGrid. It supports both Azure Resource Manager and classic VMs for most scenarios.
Currently I have a Dev environment that is costing money when it's not being used. There is an easy way to "Auto-Shutdown" the VM at a specific time using the Auto-Shutdown operation within your VM on Azure. The issue is that when I come back into the office in the morning, I need to log in to the Azure Portal and manually turn that VM back on. Not anymore!
>> The Solution⌗
Microsoft have released a solution (in preview) called "Start/Stop VMs during off-hours"; this essentially lets you schedule a time that you want your Azure VMs to power down and when to power back on. Before this you could have done it manually by creating runbooks, which essentially are scripts that can be automated to run under a particular condition. With the release of "Start/Stop VMs during off-hours" this process is massively simplified into completing one deployment, which I'll take you through now.
>>Doing it yourself.⌗
From your Azure deployment choose "Create a resource" and look for "Start/Stop VMs during off-hours". Once you've found this you will be prompted to choose an existing or create a new OMS workspace; this is an Operations Management Suite workspace where you can log analytics about your tenancy or about specific VMs and applications. Next you will need an Automation Account to link to this service. This is an account that will complete your automation tasks for you, known as an "Azure Run As Account". One thing to make sure of is that you have the required permissions on your tenancy to create these accounts; if not, you'll have to get your internal IT team to step in and create them for you. How to do this can be found in this post: https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-account
Once this is done, the last thing to do is choose the Resource Group that contains the VMs you want to shut down. This is taken as a string, so make sure you're typing your resource group correctly. There is also a VM Exclude field, where you can type the names of the VMs inside this resource group that you don't want to follow this shut off/on policy; again, this is a string. Note: if you leave the * (wildcard) in the ResourceGroup field then ALL VMs INSIDE THE SUBSCRIPTION WILL BE TARGETED; you've been warned.
You can now configure the schedule of when you want the VM’s to shut off and on. At this moment this is set as a daily task.
One option you have is to configure email notifications by using the SendGrid service. There is a free tier which allows you:
Up to 25,000 emails per month. Includes all Bronze Plan features and access to Web, SMTP, Event and Parse APIs
Then press “OK” then press “Create” to deploy your solution.
>>Customizing for yourself⌗
Once your deployment has been fully completed it will auto pin a Solution Tile and Automation Account VM to your Dashboard to easily access the information you need.
Each of the Start/Stop Processes use Runbooks that are stored inside of your Automation Account created earlier. If you open your corresponding Automation Account you can find each Runbook inside of there and you can edit them. Make sure that when you’re editing a Runbook you use the Parent and not a Child version.
The way it is currently configured will start and stop the server at the specified time each day, you can however change this by editing the values inside the Runbook. Once inside choose Schedules from the Resource list on the left and inside you will see both StartVM and Stop VM.
From here you can choose more specifically when this will run. For example I have set mine to run only Monday - Friday. I did this because my Dev environment will only need to be auto start/stop during the week and stay off during the weekends.
I think that this preview item that is now in the Azure dashboard is a nice little feature that can auto start/stop your VMs when needed. I think it could do with some more features, such as a webhook that would be called once the Runbook is complete rather than an email being sent. For more information about this you can follow this link: https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
|
OPCFW_CODE
|
import VehicleService from 'App/Services/VehicleService'
import DriverCalendar from 'App/Models/DriverCalendar'
export default class DriverCalendarService {
/**
* Create Driver Calendar
* @param id - vehicle id used to look up the vehicle
* @returns Driver Calendar
*/
public static async store(id: any) {
const vehicle = await VehicleService.getVehicle(id)
const data = { vehicleId: vehicle.id }
return await DriverCalendar.create(data)
}
/**
* Create Driver Calendar With Reservation
* @param vehicleId
* @param reservationId
* @param pickDate
* @returns Driver Calendar
*/
public static async storeWithReservation(vehicleId: any, reservationId: any, pickDate: any) {
// reuse an existing open slot (a record with no pick_date) for this vehicle if one exists
let driver = await DriverCalendar.query()
.where('vehicleId', vehicleId)
.whereNull('pick_date')
.first()
if (driver) {
driver.reservationId = reservationId
driver.pickDate = pickDate
await driver.save()
} else {
const data = { vehicleId, reservationId, pickDate }
driver = await DriverCalendar.create(data)
}
return driver
}
/**
* Delete Driver Calendar
* @param vehicleId
* @param reservationId
* @returns Driver Calendar|null
*/
public static async delete(vehicleId: any, reservationId: any) {
// check whether this vehicle still has an open slot (a record with no pick_date)
const driver = await DriverCalendar.query()
.where('vehicleId', vehicleId)
.whereNull('pick_date')
.first()
let deleteDriver = await DriverCalendar.query()
.where('vehicleId', vehicleId)
.where('reservationId', reservationId)
.firstOrFail()
// if an open slot remains, remove this reservation's record entirely; otherwise just clear its reservation fields
if (driver) {
await deleteDriver.softDelete()
} else {
const data = { reservationId: undefined, pickDate: undefined }
await deleteDriver.merge(data).save()
}
return deleteDriver
}
}
|
STACK_EDU
|
Fetch: Your best friend for file transfer.
The toolbar at the top of transfer windows provides easy access to frequently used commands.
Fetch ships with a default set of buttons in the toolbar, but you can add other buttons, remove any of the default ones, or return the toolbar to its original state. See below for a complete list of the buttons available and a short description of what they do.
To customize the toolbar:
Choose View > Customize Toolbar, or Control-click in the toolbar and choose Customize Toolbar from the contextual menu.
You can also hide the toolbar, or change it to show smaller icons or only icons or only text.
To hide the toolbar:
Choose View > Hide Toolbar. To show it again, choose View > Show Toolbar. You can also click the toolbar button in the upper-right corner of the transfer window.
To change the appearance of the toolbar:
Control-click in the toolbar and choose one of the following options:
- Icon & Text - Show both icons and names in the toolbar
- Icon Only - Show toolbar buttons as icons without names
- Text Only - Show toolbar buttons as only names
- Use Small Size - Display the icons and names at a smaller size
The default buttons in the toolbar are listed below. Select a button title for more information.
- Go back to the remote folder you were viewing before the current one.
- Click this button to display a list of all the folders that contain the current one. Choose a folder to go to it.
- Click this button to display a list of folders you've recently visited on the current server. Choose a folder to go to it.
- Download one or more files or folders from a server to your Macintosh.
- Upload one or more files or folders from your Macintosh to a server.
- Quick Look
- Display files on a server without leaving Fetch. Many different kinds of files can be displayed, including text files, images, sounds, movies, PDFs, and Microsoft Word and Excel files.
- Edit files in an editor application and save changes back to the server automatically.
- Get Info
- Display a window containing detailed information about the selected remote files or folders, and lets you rename or set permissions on the items.
- Display the currently selected items in your web browser (or set up WebView for the current server, if necessary). The WebView button's icon will always be the icon of your default web browser, so the icon may be different if you have set a browser other than Safari as your default web browser.
- New Folder
- Make a new folder on the server.
- Delete files or folders on a server.
The optional items that you can add to the toolbar are listed below. Select a button title for more information (if available).
- Choose which format to use when putting or uploading files to the server.
- Go to Folder
- Change to another folder on the server by typing its name or path.
- Take you to your home folder on the server.
- Show the Mirror window, where you can copy all the new or changed files from a Macintosh folder to a server folder, or vice versa.
- Choose what mode (automatic, text, or binary) to use when getting or downloading files from a server.
- New Shortcut
- Create a new shortcut to access remote files or folders quickly.
- Change the current folder to be the parent folder of the folder you're currently viewing, that is, go up one level. You can drag items to the Parent button to move them to the parent folder.
- Resume Download
- Resume partially completed downloads.
- Send FTP Commands
- Send arbitrary FTP commands to an FTP server, with special support for searching (SITE INDEX) and setting upload permissions.
- View as Text
- View files on a server as plain text without leaving Fetch.
- Insert a vertical line in the toolbar so you can group items together.
- Flexible Space
- Insert blank space in the toolbar between items.
- Display the Customize Toolbar dialog.
|
OPCFW_CODE
|
import { Arr, Fun } from '@ephox/katamari';
import { Generators } from '../api/Generators';
import * as Structs from '../api/Structs';
import { Warehouse } from '../api/Warehouse';
import { CompElm, RowCell, RowElement } from '../util/TableTypes';
import * as TableGrid from './TableGrid';
const toDetails = <R extends RowElement>(grid: Structs.RowCells<R>[], comparator: CompElm): Structs.RowDetailNew<Structs.DetailNew<RowCell<R>>, R>[] => {
const seen: boolean[][] = Arr.map(grid, (row) =>
Arr.map(row.cells, Fun.never)
);
const updateSeen = (rowIndex: number, columnIndex: number, rowspan: number, colspan: number) => {
for (let row = rowIndex; row < rowIndex + rowspan; row++) {
for (let column = columnIndex; column < columnIndex + colspan; column++) {
seen[row][column] = true;
}
}
};
return Arr.map(grid, (row, rowIndex) => {
const details = Arr.bind(row.cells, (cell, columnIndex) => {
// if we have seen this one, then skip it.
if (seen[rowIndex][columnIndex] === false) {
const result = TableGrid.subgrid(grid, rowIndex, columnIndex, comparator);
updateSeen(rowIndex, columnIndex, result.rowspan, result.colspan);
return [ Structs.detailnew(cell.element, result.rowspan, result.colspan, cell.isNew) ];
} else {
return [] as Structs.DetailNew<RowCell<R>>[];
}
});
return Structs.rowdetailnew(row.element, details, row.section, row.isNew);
});
};
const toGrid = (warehouse: Warehouse, generators: Generators, isNew: boolean): Structs.RowCells[] => {
const grid: Structs.RowCells[] = [];
Arr.each(warehouse.colgroups, (colgroup) => {
const colgroupCols: Structs.ElementNew<HTMLTableColElement>[] = [];
// This will add missing cols as well as clamp the number of cols to the max number of actual columns
// Note: Spans on cols are unsupported so clamping cols may result in a span on a col element being incorrect
for (let columnIndex = 0; columnIndex < warehouse.grid.columns; columnIndex++) {
const element = Warehouse.getColumnAt(warehouse, columnIndex)
.map((column) => Structs.elementnew(column.element, isNew, false))
.getOrThunk(() => Structs.elementnew(generators.colGap(), true, false));
colgroupCols.push(element);
}
grid.push(Structs.rowcells(colgroup.element, colgroupCols, 'colgroup', isNew));
});
for (let rowIndex = 0; rowIndex < warehouse.grid.rows; rowIndex++) {
const rowCells: Structs.ElementNew<HTMLTableCellElement>[] = [];
for (let columnIndex = 0; columnIndex < warehouse.grid.columns; columnIndex++) {
// The element is going to be the element at that position, or a newly generated gap.
const element = Warehouse.getAt(warehouse, rowIndex, columnIndex).map((item) =>
Structs.elementnew(item.element, isNew, item.isLocked)
).getOrThunk(() =>
Structs.elementnew(generators.gap(), true, false)
);
rowCells.push(element);
}
const rowDetail = warehouse.all[rowIndex];
const row = Structs.rowcells(rowDetail.element, rowCells, rowDetail.section, isNew);
grid.push(row);
}
return grid;
};
export {
toDetails,
toGrid
};
|
STACK_EDU
|
I have FileMaker Server 12 Advanced and want to have documents on an external harddrive for remote users to browse and attach documents to a container field. Any suggestions?
What kind of documents? Image files? PDF's or ???
I suggest that you import or insert (use references to keep your file size down) each of these files into a table with one container field per record. Users can then browse a layout displaying the documents in container fields and use a mouse click to pick a document from this layout, linking that record to the desired record in your main table.
All documents are pdf. Problem is that I don't know how to set the directory path for remote users. When a remote user logs in they can't browse to the wanted document directory.
Additionally, there are thousands of pdf files involved.
Server OS and version?
Client computers' OS and version? All clients on the same router (subnet)? Gigabit network (router and clients) or slower?
Is the external drive shared? Can you mount it, and add, change, delete files? Are users and groups setup in server software?
Is the containing folder buried deep or at the root level of the drive? Folder named conservatively or badly named?
An SSD would be a good idea for speed considerations.
You may have it the magic answer about drive being shared. Never thought about that...too busy looking elsewhere. After mapping/sharing drive I will report results and answers to all the other questions.
Mapping the drive so that all users have identical file paths is one option. On a FileMaker 10 system I administer, I use that method and have some batch files that remount/map the shared directory so that users can just click a button to execute the batch file if a system misadventure disconnects the client machine from the shared directory.
But with FileMaker 12, you might also use containers with external storage and then all your users are supposed to be able to access the images without needing access to a shared directory.
No luck. Remote users have no problem accessing the database but cannot view the document directory. My server is a Windows 7 Professional machine that exceeds minimum standards.
Containers sound like that is what I have been trying. Can you point me where to look/start? My setup has a shared external harddrive with all documents and the database is on the server's drive.
Why do they need to access the document directory? What I am suggesting would make that unecessary as they can browse the same list from a FileMaker layout based on a table where you have one record for each document.
Currently, each record has several fields that will allow a user to attach a document (pdf). Right clicking and selecting ‘Insert File’ is where the user will browse my server’s external hard drive. This is what I think I want, but if there is a better way I am listening. At present I have all the documents on my drive and all remote users will be loading their pdf files on my server.
I'm trying but just don't understand how to do what you are suggesting.
If you set up a table (PDFs) with one container field in each record, you can load this table with each of the files from your shared hard drive. This table can include an auto-entered serial number field to serve as the primary key for that file. Another field or fields can include the file name and other descriptive data about that file should such be useful.
Start with these relationships:
MainTable::__pkMainID = SelectedPDF::_fkMainID
PDFs::__pkPDFID = SelectedPDF::_fkPDFID
For an explanation of the notation that I am using, see the first post of: Common Forum Relationship and Field Notations Explained
You can place a portal to SelectedPDF on the MainTable layout to list and select a PDFs record for each given MainTable record. Fields from PDFs can be included in the Portal to show additional info about each selected PDFs record and the _fkPDFID field can be set up with a value list for selecting PDFs records by their ID field.
And there are a number of options you can use for selecting a PDF to link to a given record in Main. A value list of ID's and file names and/or description fields is one option. A portal of available documents is another and a script could even take the user to a list view layout where they can scroll through the PDFs records viewing the PDF's in interactive container fields and clicking a record to select it to attach it to the current record in MainTable. Note that any number of PDF's can be so attached to a single such record.
Still trying. Question: Does it make any difference that all my documents are in folders with the names of the clients, and within each client folder the pdf files all have the same naming convention as all other client folders? i.e. smithbob/intake form.pdf. Each client folder has files named: intake form.pdf, medical.pdf, etc.
One more thing. Attached is a screen shot showing fields to add documents. My desire is to right click and select 'Insert file' by browsing the external hard drive. I can do this within the office; however, no remote users can browse my hard drive.
And I continue to suggest that there is a better way that does not require the user to browse the hard drive. Having the user browse the hard drive will create issues where it is very easy to select the wrong file, especially since you can't "point" FileMaker at a specific folder such that when the user opens the dialog to select a file, the system automatically opens to the correct folder from which they should select a document for insertion.
If, instead, you insert or import the files for the user into a table, they can then use a standard FIleMaker interface to see a list of documents from which they can choose one to link to their current record. Your scripts can limit the records used in this list so that the user only selects from the list of documents appropriate to their context.
Another approach would be to acquire a plug-in that can list the files in a specific folder. This would allow you to set up the same kind of "browse for a file" interface, but through a FileMaker layout where the user sees a list of files from a particular folder. Just FileMaker, without a plug-in, can also pull in such a list, but only through use of a system script specific to the OS on which FileMaker is running. In Windows, for example, a batch file can generate a text file listing all the files in a specific folder, and FileMaker can then import that text file to get at the list of files and their names.
|
OPCFW_CODE
|
The New Wild: Why Invasive Species Will Be Nature’s Salvation
For something that sets off alarm bells—zebra mussels! kudzu! Asian carp!—invasive species are a strangely natural phenomenon, as old as life itself. All species start somewhere, and move on from there. Conservationists willing to go to war with the invaders, argues Pearce in his lively polemic against what he has come to view as a moral panic, are highly selective when it comes to identifying the enemy. Australia’s 100 million sheep—that’s a 1 followed by eight zeros—have done more damage to the native ecosystem than rabbits, but no one considers them aliens; many Australians, on the other hand, think of dingoes as non-native, 5,000 years after their arrival.
And we always excuse ourselves, the ultimate ecology-warping invasive species, from the discussion, even though a case could be made that humans should be eradicated from every continent but Africa.
Yes, there are horror stories, Pearce agrees, especially on small islands where human-introduced species have wreaked havoc—none worse than the two million mutant mice (now 25 cm long and carnivorous) who daily eat a tonne of bird flesh per acre on Gough Island in the South Atlantic. But, in most cases, aliens simply replicate life as the planet has known it: They arrive, upset the apple cart (sometimes drastically), then settle down as ordinary eco-citizens. They often, in fact, provide crucial benefits for earlier residents: A third of California’s native butterflies now depend on exotics for food. If the naturalness of the process isn’t enough, writes Pearce, consider the record: For the most part, in ways large and small, our war with the invaders has been as futile as the war on drugs.
Take the Florida Everglades. Since the 1990s, and despite organized hunting efforts, Burmese pythons have flourished there. (Pearce points the finger not at frightened pet owners but at the destruction caused by 1992’s hurricane Andrew, which included knocking out walls and smashing cages in a warehouse holding more than 900 pythons.) The estimated 30,000 Burmese snakes have devastated local rabbit, raccoon and opossum populations, perhaps by 90 per cent. That means the python numbers, too, will soon crash in the absence of prey, and they may well become just another of the many species of plants and animals—a quarter of the Everglades’ total—that have arrived in recent times.
The kicker, though, is that there wasn’t an Everglades at all until 5,000 years ago; it has never settled into a fixed form, but alters profoundly with changes in sea levels. If those levels rise as high as some climate scientists fear, the entire question will be academic very soon.
|
OPCFW_CODE
|
cellxgene-schema must validate that --add-labels will not result in a write failure during ingestion
There have been multiple instances where a validated dataset has been uploaded to CELLxGENE Discover and then fails during --add-labels with an exception like:
ERROR: Writing h5ad was unsuccessful, got exception 'Can't implicitly convert non-string objects to string'. Above error raised while writing key 'categories' of <class 'h5py._hl.group.Group'> to /'."
See #single-cell-data-wrangling.
See single-cell-data-representation.
@jychien has shared a mitigation with other curators:
I used cellxgene-schema validate --add-labels output_file input_file to troubleshoot the dataset.
UPDATE:
Note to assignee from Nayib:
Our own testing/QA would catch these errors if they were being introduced by the add-labels step, as those changes are entirely deterministic. Most likely, these scenarios occur when the dataset is already malformed when initially submitted. The errors are triggered by the add-labels step because it is the only time we attempt to write to a file, which exposes an existing defect in the dataset that our validation does not check for. Consider that, when trying to create a test file to validate this during QA, Jenny (see comment below) was not able to, because the call to 'anndata.write' throws the aforementioned error and didn't allow her to create the test file.
I think something like introducing a dry-run write to a 'dummy' temp file could catch and return these errors during the validate step; however, if we go this route, we should alert users about this write+delete of a temp file while validating locally, and consider the memory/performance implications.
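A minimal sketch of that dry-run idea (the helper name is hypothetical; it assumes anndata's write_h5ad and simply lets any serialization exception propagate):

import os
import tempfile

import anndata


def dry_run_write(adata: anndata.AnnData) -> None:
    """Attempt a throwaway write so h5py serialization errors surface at validate time."""
    with tempfile.TemporaryDirectory() as tmpdir:
        # Any exception raised here is the same one --add-labels would hit later.
        adata.write_h5ad(os.path.join(tmpdir, "dry_run.h5ad"))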
I am not able to create and write to file another AnnData for testing: with obs having a categorical column containing boolean objects, the same error is raised when I write the AnnData to file. I have the original h5ad file that the original curator (Batu) came across and can test with this file. However, this h5ad file is formatted to schema 3.0, and therefore we will need to stage the testing for this ticket differently. Thanks!
The error mentioned above might be fixed in anndata 0.10.0: https://github.com/scverse/anndata/issues/726#issuecomment-1711388396
I tried the naive approach of trying to write the dataset to a temp directory during validation. It worked ok for small datasets, but for our larger datasets it exploded in memory usage as well as time.
It's difficult to detect where this ('Can't implicitly convert non-string objects to string') will occur, since it can happen in any of the matrices. I'm going to try installing https://github.com/scverse/anndata and see if the head branch has a fix. It sounds like it will be released soon.
@brianraymor I verified that in anndata 0.10.0 this problem has been fixed, using the original h5ad mentioned. This won't catch these types of errors earlier, but it would prevent them altogether. Migrating to anndata 0.10.0 is likely a heavier lift than we want to take on at the moment, but it is an option.
The plan for now is to add a flag to the CLI that allows curators to run a more thorough validation which will attempt to write the file out. This requires reading the file into memory and will require disk space to write the temporary file. This will not be the default behavior, and a warning about the time and resources required will appear both in the description and when the command is executed.
I appreciate the research and update @Bento007.
anndata 0.10.0 is a release candidate too, so that's another reason we would not adopt it at this time. Generally, we update anndata versions when the curators start receiving newer versions of anndata datasets from submitters.
For the specific case of the dataset above:
In:
adata.obs["is_doublet"]
Out:
Name: is_doublet, Length: 191230, dtype: category
Categories (2, object): [False, True]
anndata.write_h5ad does not like that the categories are boolean.
During the add-labels step we could go through all of the columns and make sure that any Category columns have string categories.
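For instance, a coercion pass over obs could look roughly like this (a sketch under the assumption that adata is the in-memory AnnData; this is not the shipped implementation):

import pandas as pd

# Force every categorical column in obs to use string category labels,
# e.g. categories [False, True] become ["False", "True"].
for col in adata.obs.columns:
    if isinstance(adata.obs[col].dtype, pd.CategoricalDtype):
        adata.obs[col] = adata.obs[col].astype(str).astype("category")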
I've identified two cases where we could fail writing the h5ad. One was derived from the anndata GitHub issue mentioned above. The other was derived from the dataset mentioned above. Here are the two causes:
- Having mixed types in a column
- Having non-string values for categories
Checks for these cases will be added to the validator. This may not be all of the causes, but some of these have been fixed in anndata 0.10.0.
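A sketch of what the second check could look like (hypothetical helper; the message format mirrors the error reported during QA below):

import pandas as pd


def check_string_categories(df: pd.DataFrame, df_name: str) -> list:
    """Flag categorical columns whose category labels are not all strings."""
    errors = []
    for col in df.columns:
        if isinstance(df[col].dtype, pd.CategoricalDtype):
            bad = {type(c) for c in df[col].cat.categories if not isinstance(c, str)}
            if bad:
                errors.append(
                    f"Column '{col}' in dataframe '{df_name}' must only "
                    f"contain string categories. Found {bad}."
                )
    return errors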
@jahilton @corismall ready for QA
We only know of 1 case that should newly fail because of this update, and we can't change it because AnnData fails to write it.
This does now produce a new error based on this update, in addition to other errors from the file prepared for a previous schema version:
ERROR: Column 'is_doublet' in dataframe 'obs' must only contain string categories. Found {<class 'bool'>}.
And we trust that, had the other issues (uns.schema_version & obs.tissue_type) been correct, the is_doublet error alone would have failed validation.
So Good-2-Go
QA notebook
|
GITHUB_ARCHIVE
|
For More Information
- Ph.D. in Computer Science and Engineering, University of Michigan, Ann Arbor, 2017
- M.S. in Computer Science, University of Michigan, Ann Arbor, 2013
- B.S. in Electrical Engineering, Seoul National University, 2009
Yongjoo Park is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. At UIUC, he is part of Data and Information Systems (DAIS) Research Lab. Also, Yongjoo is a co-founder and Chief Scientist of Keebo, Inc., a start-up company he co-founded based on his Ph.D. research. Yongjoo's research interest is in building intelligent data-intensive systems using statistical and Artificial Intelligence techniques. Yongjoo obtained a Ph.D. in Computer Science and Engineering from the University of Michigan, Ann Arbor in 2017. His dissertation received the 2018 SIGMOD Jim Gray Dissertation runner-up award.
- Assistant Professor, Department of Computer Science, University of Illinois at Urbana–Champaign, Jan. 2021 - Present
Other Professional Employment
- Chief Scientist, Keebo, Inc., Sep. 2022 - Present
- Co-founder and CTO, Keebo, Inc., Aug. 2019 - Aug. 2022
- Systems for analytics and machine learning
- A.I. Data-intensive Systems
Articles in Conference Proceedings
- Zhaoheng Li, Xinyu Pi, Yongjoo Park. S/C: Speeding Up Data Materialization with Bounded Memory. ICDE 2023 (research): 39th International Conference on Data Engineering, Anaheim, CA, USA, 2023.
- Nikhil Sheoran, Supawit Chockchowwat, Arav Chheda, Suwen Wang, Riya Verma, Yongjoo Park. A Step Toward Deep Online Aggregation. SIGMOD 2023 (research): ACM SIGMOD/PODS International Conference on Management of Data, Seattle, WA, USA, 2023.
- Supawit Chockchowwat, Wenjie Liu, Yongjoo Park. Automatically Finding Optimal Index Structure. AIDB Workshop at VLDB 2022 (research): 4th International Workshop on Applied AI for Database Systems and Applications, Sydney, Australia, 2022.
- Sophia Yang, Yongjoo Park, and Abdussalam Alawini. The Effects of Teaching Modality on Collaborative Learning: A Controlled Study. FIE 2022 (research): The Frontiers in Education, Uppsala, Sweden, 2022.
- Supawit Chockchowwat, Chaitanya Sood, Yongjoo Park. Airphant: Cloud-oriented Document Indexing. ICDE 2022 (research): 38th International Conference on Data Engineering, Kuala Lumpur, Malaysia, 2022.
- Johes Bater, Yongjoo Park, Xi He, Xiao Wang, Jennie Rogers. SAQE: Practical Privacy-Preserving Approximate Query Processing for Data Federations. PVLDB 2020 (research): 46th International Conference on Very Large Data Bases. Tokyo, Japan (Online due to COVID-19), 2020.
- Yongjoo Park, Shucheng Zhang, Barzan Mozafari. QuickSel: Quick Selectivity Learning with Mixture Models. SIGMOD’20 (research): ACM SIGMOD/PODS International Conference on Management of Data. Portland, OR, USA, 2020.
- Yongjoo Park, Jingyi Qing, Xiaoyang Shen, Barzan Mozafari. BlinkML: Efficient Maximum Likelihood Estimation with Probabilistic Guarantees. SIGMOD’19 (research): ACM SIGMOD/PODS International Conference on Management of Data. Amsterdam, The Netherlands, 2019.
- Yongjoo Park, Barzan Mozafari, Joseph Sorenson, Junhao Wang. VerdictDB: Universalizing Approximate Query Processing. SIGMOD’18 (research): ACM SIGMOD/PODS International Conference on Management of Data. Houston, TX, USA, 2018.
- Wen He, Yongjoo Park, Idris Hanafi, Jacob Yatvitskiy, Barzan Mozafari. Demonstration of VerdictDB, the Platform-Independent AQP System. SIGMOD’18 (demo): ACM SIGMOD/PODS International Conference on Management of Data. Houston, TX, USA, 2018.
- Yongjoo Park, Ahmad Shahab Tajik, Michael Cafarella, Barzan Mozafari. Database Learning: Toward a Database System that Becomes Smarter Over Time. SIGMOD’17 (research): ACM SIGMOD/PODS International Conference on Management of Data. Chicago, IL, USA, 2017. SIGMOD Travel Award.
- Yongjoo Park. Active Database Learning. CIDR’17 (abstract): The biennial Conference on Innovative Data Systems Research. Chaminade, CA, USA, 2017.
- Yongjoo Park, Michael Cafarella, Barzan Mozafari. Visualization-Aware Sampling for Very Large Databases. ICDE’16 (research): IEEE 32nd International Conference on Data Engineering. Helsinki, Finland, 2016.
- Yongjoo Park, Michael Cafarella, Barzan Mozafari. Neighbor-Sensitive Hashing. PVLDB’15 (research) for VLDB’16: 42nd International Conference on Very Large Data Bases. New Delhi, India, 2016.
- Michael Anderson, Dolan Antenucci, Victor Bittorf, Matthew Burgess, Michael Cafarella, Arun Kumar, Feng Niu, Yongjoo Park, Christopher Ré, Ce Zhang. Brainwash: A Data System for Feature Engineering. CIDR’13 (vision): The biennial Conference on Innovative Data Systems Research. Asilomar, CA, USA, 2013.
- Alekh Jindal, Barzan Mozafari, Yongjoo Park, Brian Westphal, Shi Qiao, Matthew Larson, Advait Abhay Dixit. Platform Agnostic Query Acceleration (United States Patent 11567936)
Conferences Organized or Chaired
- Publicity Chair, 39th IEEE International Conference on Data Engineering (ICDE 2023)
- Co-chair, SIGMOD 2022 Student Research Competition
- Co-chair, SIGMOD 2021 Student Research Competition
- Publicity Chair, ACAIA workshop 2017 (http://dbgroup.eecs.umich.edu/acaia/)
- Teaching Excellence, Fall 2022 (2023)
- 2018 ACM SIGMOD Jim Gray Dissertation Award Runner-up (Jun. 2018)
- 2021 Engineering Council Outstanding Advising Award (February 2021)
Recent Courses Taught
- CS 411 - Database Systems
- CS 511 - Advanced Data Management
- CS 598 YP - ML and Data Systems
|
OPCFW_CODE
|
Hello Arvid, thanks for joining the thread! First, did you take into consideration that I would like to dynamically add queries on the same source? That means first defining one query, later in the day adding another one, then another one, and so on. A week later, killing one of those, starting yet another one, etc. There will be hundreds of these queries running at once, but the set of queries changes several times a day. They will consume the same high-volume source(s), therefore I want to optimize for that by consuming the messages in Flink only once.
Regarding the temporary tables: AFAIK they are only the metadata (let's say the Kafka topic details) stored in the scope of a SQL session. Therefore multiple queries against that temp table will behave the same way as querying a normal table, that is, they will read the datasource multiple times. It looks like the feature I want, or could use, is defined by FLIP-36 about Interactive Programming, more precisely caching the stream table. While I wouldn't like to limit the discussion to that not-yet-existing feature, maybe there are other ways of achieving this dynamic querying capability.
Kind Regards,
Krzysztof
https://cwiki.apache.org/confluence/display/FLINK/FLIP-36%3A+Support+Interactive+Programming+in+Flink#FLIP-36:SupportInteractiveProgramminginFlink-Cacheastreamtable
> * You want to use primarily the Table API, as that allows you to programmatically
> introduce structural variance (changing rules).
> * You start by registering the source as a temporary table.
> * Then you add your rules as SQL through `TableEnvironment#sqlQuery`.
> * Lastly you unionAll the results.
>
> Then I'd perform some experiments to check if indeed the optimizer figured out that
> it needs to read the source only once. The resulting code would be minimal
> and easy to maintain. If the performance is not satisfying, you can always
> make it more complicated.
>
> Best,
>
> Arvid
>
> On Mon, Mar 23, 2020 at 7:02 PM Krzysztof Zarzycki <k.zarzy...@gmail.com>
> wrote:
>
>> Dear Flink community!
>>
>> In our company we have implemented a system that realizes the dynamic
>> business rules pattern. We spoke about it during Flink Forward 2019:
>> https://www.youtube.com/watch?v=CyrQ5B0exqU
>> The system is a great success and we would like to improve it. Let me
>> shortly mention what the system does:
>> * We have a Flink job with an engine that applies business rules on
>> multiple data streams. These rules find patterns in data and produce complex
>> events on these patterns.
>> * The engine is built on top of CoProcessFunction; the rules are
>> preimplemented using state and timers.
>> * The engine accepts control messages that deliver the configuration of the
>> rules and start the instances of the rules. There might be many rule
>> instances with different configurations running in parallel.
>> * Data streams are routed to those rules, to all instances.
>>
>> The *advantages* of this design are:
>> * *The performance is superb.* The key to it is that we read data from
>> the Kafka topic once, deserialize once, shuffle it once (thankfully we have
>> one partitioning key) and then apply over 100 rule instances needing the
>> same data.
>> * We are able to deploy multiple rule instances dynamically without
>> starting/stopping the job.
>>
>> Especially the performance is crucial: we have up to 500K events/s
>> processed by 100 rules on fewer than 100 cores. I can't imagine having
>> 100 Flink SQL queries each consuming these streams from Kafka on such a
>> cluster.
>>
>> The main *painpoints* of the design are:
>> * To deploy a new business rule kind, we need to predevelop the rule
>> template with use of our SDK. *We can't use the great Flink CEP and Flink
>> SQL libraries*, which are getting stronger every day. Flink SQL with
>> MATCH_RECOGNIZE would fit perfectly for our cases.
>> * The isolation of the rules is weak. There are many rules running per
>> job; if one fails, the whole job fails.
>> * There is one set of Kafka offsets, one watermark, one checkpoint for
>> all the rules.
>> * We have just one distribution key, although that can be overcome.
>>
>> I would like to focus on solving the *first point*. We can live with the
>> rest.
>>
>> *Question to the community*: Do you have ideas how to make it possible
>> to develop with use of Flink SQL with MATCH_RECOGNIZE?
>>
>> My current ideas are:
>> 1. *A possibility to dynamically modify the job topology.*
>> Then I imagine dynamically attaching Flink SQL jobs to the same Kafka
>> sources.
>> 2. *A possibility to save data streams internally to Flink,
>> predistributed.* Then Flink SQL queries should be able to read these
>> streams.
>>
>> The ideal imaginary solution would look this simple in use:
>> CREATE TABLE my_stream(...) WITH (<kafka properties>, cached = 'true')
>> PARTITIONED BY my_partition_key
>>
>> (The cached table can also be the result of CREATE TABLE and INSERT INTO
>> my_stream_cached SELECT ... FROM my_stream.)
>>
>> Then I can run multiple parallel Flink SQL queries reading from that
>> cached table in Flink.
>>
>> Technical implementation: Ideally, I imagine saving events in Flink state
>> before they are consumed, and then implementing a Flink source that can read the
>> Flink state of the state-filling job. It's a different job, I know! Of
>> course it needs to run on the same Flink cluster.
>> A lot of options are possible: building on top of Flink, modifying Flink
>> (even keeping our own fork for the time being), or using an external component.
>>
>> In my opinion the keys to maximized performance are:
>> * avoid pulling data through the network from Kafka
>> * avoid deserialization of messages for each of the queries/processors.
>>
>> Comments, ideas - any feedback is welcome!
>> Thank you!
>> Krzysztof
>>
>> P.S. I'm writing to both dev and users groups because I suspect I would
>> need to modify Flink to achieve what I wrote above.
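For illustration, Arvid's register-once/union-all suggestion could be sketched as follows in PyFlink (the thread discusses the Java Table API; the datagen connector and the two rule queries below are placeholders for the real Kafka source and business rules):

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register the shared source once as a temporary table (datagen stands in
# for the real Kafka topic).
t_env.execute_sql("""
    CREATE TEMPORARY TABLE my_stream (
        user_id STRING,
        amount  DOUBLE
    ) WITH ('connector' = 'datagen', 'rows-per-second' = '10')
""")

# Each "rule" is just another SQL query over the same table...
rule_1 = t_env.sql_query("SELECT user_id, amount FROM my_stream WHERE amount > 0.9")
rule_2 = t_env.sql_query("SELECT user_id, amount FROM my_stream WHERE amount < 0.1")

# ...and unioning the results lets the optimizer share a single source scan.
combined = rule_1.union_all(rule_2)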
|
OPCFW_CODE
|
A quick retrospective into last year
2017 was an amazing year for SuperTuxKart: we had tremendous support from our community to Greenlight us in five days, and our lead artist released the 0.9.3 version live at the Blender conference. You can watch the event here. 0.9.3 also brought SuperTuxKart to Android.
Behind the doors we are busy and ready to take the game to the next level. On the rendering side we have a couple of exciting things to share with you. We are going to take a journey into the wonderful world of light and shadow.
Improved Rendering Engine
Some solutions that worked in the past are slowly becoming a burden going forward. Over the past few months, a lot of work has been ongoing on Antarctica, SuperTuxKart's graphical engine, in particular thanks to our developer Benau.
This work will be released in a future version of the game, though we do not yet know which version or when the release will occur.
Better performance
Let's put it straight: performance is important to us. And while it is hard for our small team to rival the powerful engines used by AAA games, we are working hard to make things as good as we can with the resources at our disposal. The new engine is more optimized, and most scenarios will see slightly improved performance.
Physically Based Rendering
One of the highlights of the new version is an improved materials system. This new system should make it easier for artists to create good-looking tracks by allowing them to easily use content creation software and tweak material properties such as roughness, glow, or metallic look.
The version of Antarctica that will power the upcoming version of SuperTuxKart will use a PBR renderer. It will become very easy to setup complex materials. Instead of learning a brand new system you just have to answer these questions:
- What is the color of the surface (without any shadows, just the pure color)?
- Is the surface polished?
- Is the surface a metal?
- Is the surface glowing?
Custom shaders
While we expect the majority of content creators to use the default shaders that are provided with the game, we also want to empower advanced creators by allowing them to create their own custom shaders. It will be possible to easily add shaders; each object can have up to 6 textures in slots and "unlimited shared textures".
|An example of a road shader with procedural texture blending|
New file format
While this was already introduced in the last version of SuperTuxKart, we extended the format to include high-quality bitangents, which removed a couple of bugs regarding normal map shading. We now use the same technique that is used by modern engines (Blender, Unreal, Unity, etc.), so the shading will look perfect.
Everything is shiny, what does it mean?
Now let's go in-depth and see how rendering works and how we got here.
A brief history of light
Our epic journey starts 8 minutes ago, in the core of a giant plasma ball tirelessly emitting light particles.
|Source: NASA Solar Dynamics Observatory|
When the photons are bouncing around, they will react differently depending on the surface, sometimes they are absorbed, sometimes they are scattered around or they can even go through a material like glass.
Those slight differences cause the whole diversity of colors and materials in our world. If something appears red, it's only because the surface of the object absorbs most of the incoming light except red.
Why does it matter for SuperTuxKart?
Well, in video games we are trying to simulate this phenomenon to provide a believable environment for the player. When you drive in SuperTuxKart and explore our tracks, the computer has to somehow emulate this to be able to show you a picture.
The first approach would be to simply cast billions of light rays just like in reality.
While this approach works (it's called ray tracing) and can provide amazing results, it has a huge drawback: it's costly in terms of performance. It's mostly used in movies, where one frame can take up to 5 hours to compute, which is not really compatible with an interactive game. People already complain about performance in our game; imagine if you had to wait 2 hours after pressing the start button just to see the first picture.
In our video game, if we want to reproduce the world, we need to think outside the box.
In 1975, Bui Tuong Phong revolutionized the world by publishing "Illumination for Computer Generated Pictures", a paper describing how it was possible to compute an approximation of lighting fast enough to be carried out in real time. It became known as Phong shading.
While the algorithm wasn't physically accurate, it could approximate with a reasonable degree of accuracy how light behaves in the real world.
This model (or usually a variant of it called Blinn-Phong) is what 99.9% of 3D games use to render pictures. While it has several advantages, it also introduces some inconveniences. This was the technique used by SuperTuxKart up until version 0.8.2.
- Phong works only on local pixels (objects won't cast shadows)
- It doesn't take into account advanced things like the influence of other objects in the scene.
- Ability for objects to cast shadow on themselves and others
- The sky will light the scene and influence the overall hue
- Light will scatter with fog producing halos
However, there were still some issues. This model separates materials into two categories: the ones that reflect light, like plastic and metal, and the ones that are matte, like fabric. In nature, however, this distinction doesn't exist.
|Previous light model in Antarctica|
This previous implementation is still powering the last version released to date. However aware of the limitation we went back to the drawing board and started to plan improving our rendering technology.
Going PBR
In the new version of Antarctica, the artist can easily emulate the whole range of materials by simply defining how the surface looks. Just like in reality, if you polish a surface enough it will eventually become a mirror and start to reflect the surrounding environment.
Even bricks are a bit shiny: yes, they have a rough surface, but the model will nonetheless accurately emulate the physical properties of any material.
Metals are a special case, since their color will influence the reflection color, hence the metal map. Parts that are metallic will use the color of the object as one of the components of their reflection.
The emit map is the same as before (if the surface is emitting light). There are no changes.
|Light model in the new Antarctica|
Empowering the artist beyond textures and geometry
We now have a model which is accurate and reflects how the real physical world behaves. One last remaining part was allowing an artist to create specialized shaders. While generic shaders are fine in 90% of cases, there can be specific situations where you want a custom shader, like a lava flow.
Currently it's still a bit difficult to write shaders but we plan to offer a bunch of predefined components to allow you to quickly set up your own shaders.
Why is it worth it?
Since the PBR model is more or less a standard, you can create an asset in Blender, paint it in Substance Painter, preview it in Blender's Eevee, and then load it in SuperTuxKart, and it will have the same look. You can follow a tutorial about texturing for PBR shaders in Unity and apply the same concepts in SuperTuxKart.
Basically, the whole library of tutorials and online courses on real-time PBR can be applied to SuperTuxKart. That sounds much better than learning a custom system that is only used in one specific game.
We want to allow people from other backgrounds to quickly contribute and create art for our beloved game.
We will show you more in the coming months and we hope you are as excited as we are for 0.9.4!
|
OPCFW_CODE
|
Involvement of S3 storage with JDBC queries
In our company we are working with the Snowflake JDBC driver. Playing around a little bit, we noticed that not all of the results came back when executing a query with a large result set. Only after adding another firewall rule to allow access to S3 storage were we able to retrieve all the records of a query.
I then looked into the source code of the JDBC driver, and there I indeed found code that seems to communicate with S3, but it seemed to me that this feature has to be used explicitly and is not something that is done behind the scenes.
So my question: Does the Snowflake-JDBC-Driver work with S3 buckets when dealing with large queries?
Snowflake caches all large result sets on the internal stage of the Snowflake Account. For AWS this would be an S3 bucket indeed.
All drivers (this includes JDBC as well) retrieve large result sets from the internal stage automatically; this is not something that needs to be configured explicitly. In fact, you cannot configure the behavior.
More information also available here.
Thank you for your response. The issue we have is regarding the additional firewall rule we have to set up for S3. As I understand from the link you provided, this feature can be deactivated by setting 'USE_CACHED_RESULT' to false? And could you point to the place in the Snowflake JDBC driver where the call to S3 is made?
It cannot be deactivated using USE_CACHED_RESULT=false. That parameter only controls whether a rerun of the same query reuses the cached result; the results will be stored on the internal stage anyway if they are over a certain size. Like I mentioned, this is done automatically and cannot be deactivated (it is done for performance reasons).
OK. But does the driver in that case retrieve the result from S3 directly, or will it come from the database endpoint? I'm asking because in one case we only need to open the firewall for accessing the Snowflake database, and in the other scenario we would need to open the firewall to the Snowflake database AND S3 storage.
You should have access opened to both the Snowflake endpoint and the S3 storage endpoint at all times. You can verify which URLs need to be whitelisted at the firewall using SnowCD or the output of the function SYSTEM$ALLOWLIST.
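For example, the allowlist can be inspected from any client; a sketch with the Snowflake Python connector (account and credentials are placeholders):

import json

import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="me", password="***"
)
row = conn.cursor().execute("SELECT SYSTEM$ALLOWLIST()").fetchone()
for entry in json.loads(row[0]):
    # Each entry names an endpoint type, host, and port to open at the firewall.
    print(entry["type"], entry["host"], entry["port"])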
Thank you for helping me out. So that means the driver is directly communicating with S3 storage in that case. I know about SYSTEM$ALLOWLIST; I just couldn't find the point in the code, nor in the logs, where the connection to S3 is established.
Yes, that is correct. For JDBC look for the SnowflakeS3Client class.
I was finally able to debug the code: it's not SnowflakeS3Client that is involved in this scenario, but SnowflakeChunkDownloader. Depending on the URLs, this downloader downloads the chunks from an S3 bucket.
|
STACK_EXCHANGE
|
Firebase Device group messaging to thousands of users
We built Firebase device group functionality into our app's back-end. This permits us to message users on all of their devices. According to the documentation, to send a message to a device group, we have to make a request to Firebase with the notification key. So to send to multiple devices, we loop through the list of notification keys for each of the 4000 users and send a message to each user with their notification key.
This approach is obviously very slow, since thousands of web requests are made to the Firebase API.
So, while implementing this push notification functionality on our API using the Firebase Cloud Admin SDK, we didn't find any form of "topic" messaging for device groups.
My question is: does someone know a more efficient way of sending push notifications to device groups other than looping through these notification keys and sending multiple web requests?
This is how I send a message to a device group.
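// Classify the FCM device-group response by its failure/success counts.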
public async Task<SendDeviceMessageResponse> SendNotificationToDevices(DeviceMessage message)
{
var resultStr = await Post(JsonConvert.SerializeObject(message), FirebaseSendMessageURL);
var result = JsonConvert.DeserializeObject<SendDeviceMessageResponse>(resultStr);
if (result.FailureCount == 0 && result.SuccessCount == 0)
{
result.ResponseType = ResponseType.EmptyDeviceGroup;
}
else if (result.FailureCount > 0 && result.SuccessCount > 0)
{
result.ResponseType = ResponseType.PartialSuccess;
}
else if (result.FailureCount == 0 && result.SuccessCount > 0)
{
result.ResponseType = ResponseType.TotalSuccess;
}
else
{
result.ResponseType = ResponseType.TotalFailure;
}
return result;
}
private async Task<string> PostToFirebaseDeviceGroup(DeviceGroupRequestModel requestModel)
{
string body = JsonConvert.SerializeObject(requestModel);
var respStr = await Post(body, BaseFCMNotificationsURL);
return respStr;
}
The complete implementation of the above is found in this repository.
When I have several device groups I have to send a message to, I loop through the device groups and call the above "SendNotificationToDevices" for each device group. That is what makes it slow. Whereas with "topic" messaging, I just need to make a single request to the API and that is all. Is there some form of topic messaging for device groups, or a more efficient way of sending thousands of messages?
"This approach obviously is very slow" Keep in mind that everyone is using this exact same API to send notifications to their (Android) users. So while your implementation may be obviously slow to you, the API isn't inherently slow and is actually incredibly scalable. There's a good chance we can help better if you show the minimal code that reproduces the slowness you experience in sending messages, and the timings that you get from that code.
All I do is send a POST web request exactly as shown here: https://firebase.google.com/docs/cloud-messaging/android/device-group#device-group-http-post-request The only difference is that I do it for about 3000 users. The messages are sent, but it takes about 6 minutes.
Hello @FrankvanPuffelen I updated the question with code, and a link to the complete source code I use on github.
When you call SendNotificationToDevices, are you awaiting each individual response? I'd recommend doing a reasonable number of calls in parallel, as the FCM backend is set up to handle massive parallel loads anyway. The other way to speed things up is to send batched messages, as those can pipeline over HTTP/2 and amortize the cost of establishing the connection over multiple calls. But I'm not sure what API you're calling, and whether device groups are even supported in the versioned API that supports such batch calls.
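To illustrate the parallel-calls idea (sketched in Python with aiohttp rather than the poster's C#; the server key and payload shape are placeholders, and the endpoint is the one named in this thread):

import asyncio

import aiohttp

FCM_URL = "https://fcm.googleapis.com/fcm/send"
SERVER_KEY = "YOUR_SERVER_KEY"  # placeholder credential


async def send_all(notification_keys, payload, concurrency=50):
    """POST one request per device group, capping the number in flight."""
    sem = asyncio.Semaphore(concurrency)
    headers = {"Authorization": f"key={SERVER_KEY}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        async def send_one(key):
            async with sem:
                async with session.post(FCM_URL, json={"to": key, "data": payload}) as resp:
                    return await resp.json()
        return await asyncio.gather(*(send_one(k) for k in notification_keys))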
This is what I'm calling : https://fcm.googleapis.com/fcm/send
|
STACK_EXCHANGE
|
using System;
using UnityEngine;
using UnityEngine.EventSystems;
namespace FarrokhGames.Inventory
{
public interface IInventoryController
{
Action<IInventoryItem> onItemHovered { get; set; }
Action<IInventoryItem> onItemPickedUp { get; set; }
Action<IInventoryItem> onItemAdded { get; set; }
Action<IInventoryItem> onItemSwapped { get; set; }
Action<IInventoryItem> onItemReturned { get; set; }
Action<IInventoryItem> onItemDropped { get; set; }
}
/// <summary>
/// Enables human interaction with an inventory renderer using Unity's event systems
/// </summary>
[RequireComponent(typeof(InventoryRenderer))]
public class InventoryController : MonoBehaviour,
IPointerDownHandler, IBeginDragHandler, IDragHandler,
IEndDragHandler, IPointerExitHandler, IPointerEnterHandler,
IInventoryController
{
// The dragged item is static and shared by all controllers
// This way items can be moved between controllers easily
private static InventoryDraggedItem _draggedItem;
/// <inheritdoc />
public Action<IInventoryItem> onItemHovered { get; set; }
/// <inheritdoc />
public Action<IInventoryItem> onItemPickedUp { get; set; }
/// <inheritdoc />
public Action<IInventoryItem> onItemAdded { get; set; }
/// <inheritdoc />
public Action<IInventoryItem> onItemSwapped { get; set; }
/// <inheritdoc />
public Action<IInventoryItem> onItemReturned { get; set; }
/// <inheritdoc />
public Action<IInventoryItem> onItemDropped { get; set; }
public Action<IInventoryItem> onItemRemovedAndRearranged { get; set; }
private Canvas _canvas;
internal InventoryRenderer inventoryRenderer;
internal InventoryManager inventory => (InventoryManager) inventoryRenderer.inventory;
private IInventoryItem _itemToDrag;
private PointerEventData _currentEventData;
private IInventoryItem _lastHoveredItem;
/*
* Setup
*/
void Awake()
{
inventoryRenderer = GetComponent<InventoryRenderer>();
if (inventoryRenderer == null) { throw new NullReferenceException("Could not find a renderer. This is not allowed!"); }
// Find the canvas
var canvases = GetComponentsInParent<Canvas>();
if (canvases.Length == 0) { throw new NullReferenceException("Could not find a canvas."); }
_canvas = canvases[canvases.Length - 1];
}
/*
* Grid was clicked (IPointerDownHandler)
*/
public void OnPointerDown(PointerEventData eventData)
{
if (_draggedItem != null) return;
// Get which item to drag (item will be null if none were found)
var grid = ScreenToGrid(eventData.position);
_itemToDrag = inventory.GetAtPoint(grid);
}
/*
* Dragging started (IBeginDragHandler)
*/
public void OnBeginDrag(PointerEventData eventData)
{
inventoryRenderer.ClearSelection();
if (_itemToDrag == null || _draggedItem != null) return;
var localPosition = ScreenToLocalPositionInRenderer(eventData.position);
var itemOffset = inventoryRenderer.GetItemOffset(_itemToDrag);
var offset = itemOffset - localPosition;
// Create a dragged item
_draggedItem = new InventoryDraggedItem(
_canvas,
this,
_itemToDrag.position,
_itemToDrag,
offset
);
// Remove the item from inventory
inventory.TryRemove(_itemToDrag);
onItemPickedUp?.Invoke(_itemToDrag);
}
/*
* Dragging is continuing (IDragHandler)
*/
public void OnDrag(PointerEventData eventData)
{
_currentEventData = eventData;
if (_draggedItem != null)
{
// Update the items position
//_draggedItem.Position = eventData.position;
}
}
/*
* Dragging stopped (IEndDragHandler)
*/
public void OnEndDrag(PointerEventData eventData)
{
if (_draggedItem == null) return;
var mode = _draggedItem.Drop(eventData.position);
Debug.Log(mode);
switch (mode)
{
case InventoryDraggedItem.DropMode.Added:
onItemAdded?.Invoke(_itemToDrag);
break;
case InventoryDraggedItem.DropMode.Swapped:
onItemSwapped?.Invoke(_itemToDrag);
break;
case InventoryDraggedItem.DropMode.Returned:
onItemReturned?.Invoke(_itemToDrag);
break;
case InventoryDraggedItem.DropMode.Dropped:
onItemDropped?.Invoke(_itemToDrag);
if(inventory.TryDrop(_itemToDrag))
ClearHoveredItem();
break;
}
inventory.onItemRemovedAndRearranged?.Invoke(_itemToDrag);
_draggedItem = null;
}
/*
* Pointer left the inventory (IPointerExitHandler)
*/
public void OnPointerExit(PointerEventData eventData)
{
if (_draggedItem != null)
{
// Clear the item as it leaves its current controller
_draggedItem.currentController = null;
inventoryRenderer.ClearSelection();
}
else { ClearHoveredItem(); }
_currentEventData = null;
}
/*
* Pointer entered the inventory (IPointerEnterHandler)
*/
public void OnPointerEnter(PointerEventData eventData)
{
if (_draggedItem != null)
{
// Change which controller is in control of the dragged item
_draggedItem.currentController = this;
}
_currentEventData = eventData;
}
/*
* Update loop
*/
void Update()
{
if (_currentEventData == null) return;
if (_draggedItem == null)
{
// Detect hover
var grid = ScreenToGrid(_currentEventData.position);
var item = inventory.GetAtPoint(grid);
if (item == _lastHoveredItem) return;
onItemHovered?.Invoke(item);
_lastHoveredItem = item;
}
else
{
// Update position while dragging
_draggedItem.position = _currentEventData.position;
}
}
/*
*
*/
private void ClearHoveredItem()
{
if (_lastHoveredItem != null)
{
onItemHovered?.Invoke(null);
}
_lastHoveredItem = null;
}
/*
* Get a point on the grid from a given screen point
*/
internal Vector2Int ScreenToGrid(Vector2 screenPoint)
{
var pos = ScreenToLocalPositionInRenderer(screenPoint);
var sizeDelta = inventoryRenderer.rectTransform.sizeDelta;
pos.x += sizeDelta.x / 2;
pos.y += sizeDelta.y / 2;
return new Vector2Int(Mathf.FloorToInt(pos.x / inventoryRenderer.cellSize.x), Mathf.FloorToInt(pos.y / inventoryRenderer.cellSize.y));
}
private Vector2 ScreenToLocalPositionInRenderer(Vector2 screenPosition)
{
RectTransformUtility.ScreenPointToLocalPointInRectangle(
inventoryRenderer.rectTransform,
screenPosition,
_canvas.renderMode == RenderMode.ScreenSpaceOverlay ? null : _canvas.worldCamera,
out var localPosition
);
return localPosition;
}
}
}
|
STACK_EDU
|
Notifications on publish reports to meeting organizers
Is your feature request related to a problem? Please describe.
As an admin/moderator, I want to be able to set up a notification to be sent to meeting organizers if their meeting has passed and they didn't publish a meeting report, so that the quality of the meetings on the platform is improved.
It is necessary to have the meeting reports filled in, therefore, email reminders will be sent to meeting organizers, as an initial notification and a reminder notification.
As a meeting organizer, I want to receive a notification if my meeting has passed and I didn't publish a report, so that I remember to close it and attach the meeting report. Also, I want to receive another reminder at a given time if I still haven't had the chance to close the passed meeting.
As a space/global admin user, I want to receive a notification if an Official Meeting created through the admin panel has passed and no one published a report, so that I remember to close the meeting and attach the meeting report. Also, I want to receive another reminder at a given time if no one has yet had the chance to close the past meeting.
Describe the solution you'd like
The setting for the notification should be configurable through the admin dashboard (at meeting's component level) and should include 2 options:
- initial notification - x days
- reminder notification - after another x days.
(see "Related images" section)
Based on those options from the admin dashboard, a cron job will schedule notifications for all the users/global and space admins who have overdue meetings without a report, as sketched below.
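The selection logic behind that cron job might look roughly like this (a sketch only; Decidim itself is Ruby, and the field names here are hypothetical):

from datetime import date


def overdue_meetings(meetings, initial_days, reminder_days, today=None):
    """Split overdue, report-less meetings into initial vs. reminder batches."""
    today = today or date.today()
    initial, reminder = [], []
    for m in meetings:
        if m["report_published"] or m["end_date"] >= today:
            continue
        days_overdue = (today - m["end_date"]).days
        if days_overdue == initial_days:
            initial.append(m)
        elif days_overdue == initial_days + reminder_days:
            reminder.append(m)
    return initial, reminder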
"You can now close your meeting with a report on the [ORGANIZATION NAME] platform"
"Dear [NAME OF USER],
Thank you for taking part in the [ORGANIZATION NAME] by organizing the meeting "[NAME OF THE MEETING]." We hope you had a successful meeting and a lively discussion. We invite you to add a meeting report by using the "Close Meeting" button on the meeting page.
Sharing a meeting report ensures that participants have access to the ideas that were exchanged at your meeting and that these ideas can be taken into account in the analysis, allowing proper feedback and follow-up.
In the report, it would be useful to include information on the type of the meeting (e.g. participatory workshop, open space, world café), the number of participants and their demographic background (e.g. age, gender), the main subjects discussed and ideas suggested, the arguments that led to them, as well as the general atmosphere and expected follow-up. Please indicate the number of participants in the dedicated field when completing your meeting report.
We also encourage you to link proposals that emerged in the meeting to the ideas which have already been published on the platform. This is important for the analysis.
Your participation is greatly appreciated, please do not hesitate to contact us should you have any questions.
Describe alternatives you've considered
Could this issue impact users' private data?
|
OPCFW_CODE
|
Amazon's Security Groups provide the basic firewall and networking security. You can of course supplement them with your own host-based firewall, but this provides the core.
By default, an instance has no connectivity, either internal or external. This is like a firewall that starts with a "DENY ALL". By adding security groups, you allow different incoming connections, either from your other instances or from the outside world.
You assign security groups before an instance starts; the ordering doesn't matter. Once the instance starts, you can't change the membership.
Out of the box, every EC2 instance comes with a "Default group". It says any TCP/UDP/ICMP request from any other machine in your Default group is allowed. This allows internal connectivity. This security group creates your "personal" VLAN; other instances that aren't yours cannot connect to your instances.
If you don't think this is hot, let me give an example of another cloud provider. In the Rackspace Cloud, your instance can connect to every machine they have in the cloud (unless, of course, the instance is running its own firewall). I'm sure Rackspace knows this is lame and is working on it. In the meantime, this is a problem you don't have with Amazon.
In the AWS Management Console there is another group, "SSH/HTTP" which allows connections on ports 22, 80, and 443 (ssh, http, and https). If you have a one machine site, this is probably all you need.
If for some reason you needed to expose your database to the outside world (say MySQL on port 5506), you could create another group for that and assign it to your instance.
For more complicated systems, allowing everyone to log into any machine is problematic for a few reasons. You have to manage updates to SSH (the binaries, keys) on every box, and it makes running intrusion detection systems more difficult.
To simplify security management, create a bastion host. This is the only machine that allows external SSH connectivity. From the bastion host you can ssh into your other instances.
In Amazon, this is a snap. Create a security group called "bastion" which allows TCP port 22. Create an instance with that group and the default. Create a plain "http" group that does not have SSH access. Then assign new machines to either the default group or the default group + the http group.
OK, here's the fun part. Because Amazon's firewall is based on group membership and not raw IP addresses, you can terminate your bastion host anytime. When you need to log in, fire up a new instance that has membership of default+bastion. Ta-da. This is actually better than a bastion host in a datacenter; if that one dies, you need to make a trip to the data center (or have a backup).
This also means external SSH access is completely OFF when the bastion host is off. That may be handy if you suspect you are under attack.
While you can't change group membership when an instance is running, you can change a group's rules. You can delete or add rules, and the change is immediate. This allows you to make quick changes, although I have yet to find a real need for this. If your bastion host is dead and you needed to quickly log in, you could add port 22 to the default group and log in.
This could go wrong, either through some horrible failure with Security Groups or, more likely, because someone was tinkering with the security groups and opened them up too far. As a backup, it's not a bad idea to run a host-based firewall as well.
In full form, you'd have a database of every machine in your network and be able to generate the iptables config and update every machine with it. This is tricky.
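As a toy sketch of that approach (the inventory and rules below are hypothetical; a real system would pull them from a database):

HOSTS = {"web1": "10.0.0.11", "db1": "10.0.0.21"}
RULES = [("db1", 3306, ["web1"])]  # (target host, port, allowed source hosts)


def render_iptables(rules, hosts):
    """Render a minimal default-deny ruleset from the inventory."""
    lines = ["-P INPUT DROP", "-A INPUT -p tcp --dport 22 -j ACCEPT"]
    for _target, port, sources in rules:
        for src in sources:
            lines.append(f"-A INPUT -p tcp -s {hosts[src]} --dport {port} -j ACCEPT")
    return "\n".join(lines)


print(render_iptables(RULES, HOSTS))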
A lamer version would be to have the firewall reject any port that you don't use, or allow connections only from the 10. network. For instance:
DENY ALL
ALLOW 22/TCP FROM ALL
ALLOW 5606/TCP FROM 10.*.*.*
In other words, always allow SSH, and allow connections to MySQL only from inside EC2. That's not great, but it will prevent people from probing your MySQL from the outside world.
Comment 2009-12-30 by None
Thanks for a very clear description of this topic! This helped me a lot.
Comment 2012-01-22 by None
Thanks for writing this!
|
OPCFW_CODE
|
How to speed up mobile application which consumes a lot of API data?
I built a social-type mobile Android application. Due to specific requirements, the application contacts a remote API on almost every screen. Consequently, I feel that the app is slow.
What is the best design for apps like this? Should I have a special service which handles all API requests and responses? If yes, users will not see "data loading" information, but they still have to see the most accurate data refreshed from the server side. How should I accomplish this?
Ideally, I would like to have the same UX as top apps, where users simply press buttons and do stuff on the screen, and data is synced with the server side without them knowing about it.
NOTE: I know how to do all this technically. I am asking about the best approach to this problem.
Are they hitting the same APIs for similar data on each screen? If so, caching would probably help greatly. So might pre-loading of data: requesting it ahead of time if it's likely they'll need it. Of course, preloading is something that should be a setting, due to data caps.
Although I feel that the question is ambiguous, I would like to put out my own thoughts about social platforms.
In apps like Facebook or Instagram, I can see when they load up information from the API. They do heavy caching on the posts, so that when you open the app, some posts have already popped up before the app actually retrieves new ones from the API, which happens a few seconds later.
Also, for actions like Like or Comment, they alter the UI instantly and then send out the request asynchronously, later updating the UI if any update is needed. That's why there is no loading.
@GabeSechan Yes, the same API. Can you clarify the "preloading" thing? I did not understand it. Not sure if it's due to my English or a lack of knowledge.
@MiroMarkarian Please suggest how I can make this question less ambiguous. What is not clear?
Preloading is requesting the information before you need it and storing it. Let's say you're writing a web browser. You may download the pages that links lead to before the user clicks them, so you already have them downloaded when the user clicks a link and just need to display them.
@sandalone I mean that it can depend on many factors: API response time, connection speed, application code, device power, and so on. All of them have to be fast to be able to build a fast system.
About the preloading, I agree with @Gabe. You can preload data that you think will be needed in the future, keep it in a cache, and use it when it's needed. However, as Gabe mentioned, there should be an option for turning it off, as some users may not want it because of limited data allowances. But personally, I don't think Facebook or Instagram do preloading.
@GabeSechan Where does preloading occur? Is a preloading service started on app boot?
@MiroMarkarian Are you saying that these apps don't do global preloading but rather screen-limited preloading, or that they don't preload things at all?
I think your comments lead me to the conclusion that I need to open another, preloading-related question, since that may be the key.
@sandalone As we don't have access to their source code, I can't tell for sure whether they do or not. However, if they did, it would be very data-intensive, and they probably would have implemented an option to override that behavior. Also, in Instagram for instance, I don't see any apparent signs of preloading; they just load what is necessary, as far as I can tell. However, it could be because of my inadequate knowledge of software engineering that I don't see the preloading...
@MiroMarkarian I also don't think they do preloading, but rather immediate changes with background updating services. I say this with a large reserve.
@sandalone If I were you, I would integrate profiler-like behavior into my code. For example, you can log just before sending the request, log just after the response is received, and finally log once more after the information is processed and shown to the user. That way you can measure how much time the request has spent in the various stages, see what has taken the most time, and focus on optimizing that part. It could be the server that is slowing down the application.
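The stage timing itself can be very simple; a sketch in Python for brevity (an Android app would do the same with System.nanoTime() and Log):

import logging
import time

logging.basicConfig(level=logging.INFO)


def log_stage(stage, t0):
    """Log the elapsed milliseconds since t0 for one stage of the pipeline."""
    logging.info("%s: %.1f ms", stage, (time.monotonic() - t0) * 1000)
    return time.monotonic()  # start time for the next stage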
|
STACK_EXCHANGE
|
This set of deliverables has 3 parts, and is due Wednesday 9/9 at the beginning of class:
- An optional fun reading about loops;
- A brief reading-response;
- A brief Looking Outwards
- A looping GIF using arcs.
0. Optional Fun Reading about Loops
Check out the article “On Repeat: How to Use Loops to Explain Anything” by Lena V. Groeger, a journalist/developer/information-designer at ProPublica. This article is purely for your edification/entertainment; there’s no deliverable. It should take about 10-15 minutes to view this elegant stack.
1. Looking Outwards #02: Generative Art Deep Dive
For each of the following 4 computational artists, spend a solid 5 minutes (each) looking at their generative projects:
- Helena Sarin, instagram • twitter
- Zachary Lieberman, instagram
- Manolo Gamboa Naon, behance
- Sofia Crespo, instagram
- Create a blog post titled nickname-LookingOutwards02
- Select one artwork by one of the artists above that you find appealing or intriguing.
- Grab an image of the artwork (use a screengrab if necessary); embed this image in the post.
- Write a sentence about why you selected this project.
- Include a link to the project’s URL.
- Categorize your blog post LookingOutwards-02
2. Reading #02: First Word Art, Last Word Art
Please read the one-page essay, “First Word Art / Last Word Art” by Michael Naimark. A backup PDF of this article can be found here. You are asked to write a brief (two-paragraph) response to this essay.
Please contemplate technical novelty in relation to the arts. The author of this reading, Michael Naimark, is a new-media artist who has been active in experimental cinema and virtual reality since the mid-1970s. The duality Naimark describes is one attempt to understand how culture accommodates new technologies, delineating a spectrum from the well-understood to the utterly novel.
Make a new blog post in this WordPress site. In your blog post, please write about 150-200 words reflecting on Naimark’s article. Some possible starting points for your reflection could include (but are not limited to):
- Where do you locate your interests along this spectrum?
- What are some ways in which new technologies shape culture?
- What are some ways in which culture shapes technological development?
- We might aspire to make stuff of lasting importance, but when our work is technologically novel, it doesn’t always age well. Discuss.
When you are done, please give your blog post the title: nickname-Reading02, and categorize your blog post with the WordPress category, 02-Reading.
3. Living Wallpaper: A Looping, Animated Environment for Zoom.
This is the “main” part of Deliverables #2, and will be the only part of this week’s assignment which is critiqued in class. In this project, you will create a looping, animated graphic, and produce this in two versions:
- An animated GIF, uploaded to this site
- A video loop which will be used as your background in Zoom during Wednesday’s class
Your GIF must be uploaded to this site before the beginning of class Wednesday, 9/9. So:
- SKETCH FIRST! Before doing anything, make some (real) sketches in your notebook. Try to develop a graphic concept.
- Your canvas must have 16:9 dimensions. It is recommended that your canvas be 1280×720. Be aware that Zoom has a minimum background size of 640×360.
- Write code which generates a (seamlessly) looping animation. You may use the code template below to get started.
- It is recommended that you restrict yourself to just a few, well-chosen colors. Remember, an animated GIF must represent all of its frames with a single palette of just 256 colors. You may find this resource helpful: tips for making animated GIFs.
- Limit the duration of your GIF to approximately 30-300 frames long (~1-10 seconds).
- Export your animation frames computationally (i.e., from code), and then use a tool (such as one of these) to assemble the animated GIF from the frames. It is strongly recommended that you NOT use a screencapture program to make your GIF! Also:
- be sure to design your GIF so that it loops infinitely, with a duration of forever. Make sure it loops without a “hiccup”. Please note: If your loop has a hiccup, you will get an “F”. Did you read that?
- be sure to create your GIF so that it plays back at least 20 frames per second (preferably 30 FPS). These are options you can set with proper GIF creation tools. You may need to specify the frame rate using milliseconds per frame (e.g. 30 FPS = 33 ms/f).
- be sure your GIF is under 10MB in filesize, preferably under 5MB. You can optimize (compress) your GIF with a tool like https://ezgif.com/.
- In a blog post, upload and embed your animated GIF. Important: Embed the GIF at its original resolution. Be sure not to embed any version that has been automatically resized by WordPress; it will not be animated! Also important: please upload your GIF directly to this WordPress site. GIFs embedded from Giphy or elsewhere in the cloud are not acceptable and will receive zero credit.
- Write a paragraph about the experience of creating the piece. Which easing function did you select, and why? Critique your work: where you feel you succeeded, where you feel you fell short of what you’d hoped to achieve.
- Make sure your project is uploaded to the online editor.p5js.org.
- Include a scan or photo of your pen-and-paper sketches.
- Label your blog post with the Category, 02-LivingWallpaper.
- Title your blog post, nickname-LivingWallpaper.
Below is a p5.js template for exporting frames to make an animated GIF loop. If you decide to use the template code, you’ll need to rewrite the renderMyDesign() method, which takes a percentage from 0..1 (NOTE: this is not a frame number!) as its argument.
The template below exports individual frames. You will need to be resourceful about finding a way to convert these frames into a GIF and into a video for Zoom. EZGif.com has tools to help, such as an image-batch-to-GIF maker, and a GIF-to-MP4 converter.
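If you want to script the assembly step instead, one possibility (among the tools mentioned above) is a few lines of Python with Pillow; the frames folder and frame rate here are assumptions:

from pathlib import Path

from PIL import Image

frames = [Image.open(p) for p in sorted(Path("frames").glob("*.png"))]
frames[0].save(
    "loop.gif",
    save_all=True,
    append_images=frames[1:],
    duration=33,  # ms per frame, roughly 30 FPS
    loop=0,       # 0 = loop forever, so the GIF never stops at the seam
)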
Here is an animated GIF, and the code template (in JS and Java) that produced it. Observe how the small pink square is moving nonlinearly, using one of the Pattern_Master functions. It has some character!
To get this working, you may find the following video helpful:
|
OPCFW_CODE
|
Essential Tools Every Python User Should Know
Ever found yourself lost in a maze of Python tools, wondering which ones are truly essential for your projects? Or perhaps you’ve faced the dilemma of moving from online coding platforms to setting up a robust local development environment, but the transition seemed a tad overwhelming?
Going from Python basics to being adept at navigating the ever-growing Python tooling landscape is a trek many find overwhelming. The command line, often thought of as a cryptic realm, is actually your launchpad into Python tooling. The ability to efficiently manage Python packages, set up isolated environments, and handle version control is the bedrock of proficient Python development.
At Dataquest, we've observed this transitional challenge and crafted a solution: the Tooling Essentials for Python Users course is tailored to equip you with essential tools, setting you on the path to becoming a well-rounded Python developer.
The Power of Python Tooling
Technology is reshaping traditional industries, so knowing Python and its suite of tools isn't just a skill, it's a necessity. From web developers building dynamic websites to data analysts uncovering insights from large datasets, the command line interface (CLI) is a common denominator, providing a powerful, flexible base for operations. Similarly, developers across the spectrum leverage tools like pip, Git, and virtual environments to navigate the complexity of project dependencies and version control. These tools are not just conveniences; they are the engines driving modern development workflows.
The Command Line: Your Development Launchpad
The command line is more than just a black box where text magically transforms into actions. It’s the gateway to efficient development, allowing you to interact with your computer in a powerful and streamlined manner. Whether it's file management, package installation, version control, or testing some Python code, being comfortable with the command line enables you to optimize your workflow.
Real-world example: Imagine a scenario where a team of developers is working on a critical project. Just hours before the deadline, they encounter a bug. Instead of combing through the entire codebase in a graphical interface, a developer uses the command line to quickly search for a specific error message, locating the problematic file almost instantly. This speed in diagnosing the issue allowed the team to fix the bug and deliver the project on time.
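For a concrete picture of that kind of search, a single command can scan an entire source tree for an error message (the message and directory here are made up for illustration):
# Recursively search the source tree, printing file names and line numbers
grep -rn "invalid configuration value" src/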
Tools Beyond the Basics: pip, Virtual Environments, and Git
Once you're comfortable with the command line, the world of Python tooling really opens up! Tools like pip, for managing packages, and Git, for version control, become your friends for building scalable and maintainable projects. Additionally, using virtual environments is an industry best practice for managing dependencies, ensuring that your projects remain conflict-free.
Real-world example: Consider a developer named Britta. She's working on two projects: one using an older version of a Python library and the other using its latest version. By leveraging virtual environments, Britta can seamlessly switch between the two projects without any version conflict issues, ensuring that each has the exact dependencies it requires.
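As a minimal sketch of that workflow (the package name and version pin are illustrative):
# Project A: create and activate an isolated environment, then pin the older release
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install "requests==2.25.1"
deactivate
# Project B gets its own environment with the latest release; no conflicts between the two
python -m venv .venv
source .venv/bin/activate
pip install requests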
IDEs: The Hub of Your Coding Universe
Integrated Development Environments (IDEs) like VS Code and PyCharm are not merely text editors, they are your coding copilots. They offer an array of features like syntax highlighting, code completion, and debugging tools, which are indispensable for efficient coding sessions.
Real-world example: Think about Abed, a developer working on a complex Python project. One day, while working on a critical function, he noticed something peculiar: the code didn't produce the expected output. Instead of sifting through hundreds of lines manually, Abed turned to his IDE, PyCharm. Using its built-in debugging tools, he set breakpoints and watched variables, narrowing down the issue in a fraction of the time it would've taken without the IDE. Moreover, the IDE's code completion feature often suggested relevant methods and functions, saving him from frequent documentation consultations. It's as if he had an assistant constantly guiding him, ensuring his code was efficient and error-free.
In the competitive tech industry, complementing coding knowledge with a grasp of essential tools can be a significant differentiator.
What You'll Learn in This Course
The Tooling Essentials for Python Users course is made up of carefully curated lessons aimed at providing an in-depth understanding of the tooling essentials in Python. Let's take a closer look at each one:
Lesson 1 - Command Line Basics: Navigating and Managing Files
In this lesson, you'll gain confidence using the command-line interface (CLI) by learning how to navigate, create, and manage files and directories. As you work your way through this lesson, you'll:
- Get comfortable with the command-line interface, learning how to navigate and manage files and directories like an expert.
- Gain hands-on experience by taking on the role of the curator at The Digital Museum of Computing History, managing digital assets.
Lesson 2 - Command Line Basics: Searching, Editing, and Permissions
This lesson goes deeper into the intricacies of the command-line interface, where you'll learn about controlling permissions, redirecting output, and searching for files and content. In this lesson, you'll:
- Handle real-world challenges faced by developers, including controlling permissions, redirecting output, and searching for files.
- Continue your role as the curator of The Digital Museum of Computing History, further cementing your command-line skills by performing advanced computing tasks as you manage the museum's digital exhibits.
Lesson 3 - Virtual Environments and Environment Variables in the Command Line
This lesson focuses on virtual environments and environment variables in Python development. During this lesson, you'll:
- Understand what virtual environments and environment variables are, why they're vital, and how to create and use them effectively in the command line.
- Play the role of a chef in a busy kitchen, where you safely mix ingredients and keep secret recipes secure, analogous to a developer keeping their projects separate and protecting sensitive data.
Lesson 4 - Git Basics in the Command Line
This lesson introduces you to the indispensable tools and concepts of the popular version control system, Git. In this lesson, you'll:
- Begin with an introduction to Git, transition seamlessly into hands-on tasks, including installation, repository management, and the core Git workflow (previewed in the sketch after this list).
- Practice staging and committing changes, branching techniques, and remote repository handling.
- Clear potential roadblocks, resolve merge conflicts, and learn best practices.
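As a preview, the core workflow those bullets describe boils down to a handful of commands (repository, file, and branch names are illustrative; git switch requires Git 2.23+):
git init my-project && cd my-project
echo "# My Project" > README.md
git add README.md                        # stage the change
git commit -m "Add README"               # commit it to the default branch
git switch -c feature/docs               # create and switch to a feature branch
git remote add origin <your-remote-url>  # connect a remote repository
git push -u origin feature/docs          # publish the branch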
Lesson 5 - Selecting and Installing an Integrated Development Environment (IDE)
This lesson explores IDEs, specifically popular ones like VS Code and PyCharm. In this lesson, you'll:
- Understand unique features of popular IDEs, and learn how to choose the appropriate IDE for your specific requirements.
- Gain practical experience installing and configuring your chosen IDE and Python on your local machine, setting the stage for productive coding sessions.
- Create and run a Python script using your chosen IDE.
Each lesson is designed to build on the previous one so that you have a deep understanding of Python's essential tools and how to use them effectively in your projects. By the end of the course, you'll have a strong foundation in these tools and the confidence to apply them seamlessly in various scenarios!
Whether you're collaborating on a team project, managing dependencies for a complex application, or simply setting up your local developer environment, you'll have the skills to do so efficiently and professionally. Your progression from a Python enthusiast to a proficient Python developer will be smoother and more rewarding.
Many industry experts argue that a well-rounded skill set, which includes expertise in critical development tools, can not only open doors to more advanced projects and roles but also potentially lead to higher salary opportunities.
Avoiding the Pitfalls: Why Python Tools Are Required Knowledge
Taking on a Python project without being equipped with the right tools can be like setting sail in stormy seas without the proper gear. Here's how a lack of proficiency in Python's essential tools can spell trouble:
Job Market Competitiveness:
- Employers treasure developers who not only have a solid grasp of Python but are also proficient with irreplaceable tools like the command line, virtual environments, and version control systems.
- A resume boasting of expertise in these areas isn’t just about embellishment, it's about showcasing readiness for collaborative and complex projects.
Smoothing Collaborative Efforts:
- Imagine being part of a large team where project dependencies are as tangled as a bowl of spaghetti. Without a solid understanding of virtual environments, you’re in for a tough time untangling the mess, risking project delays and bug infestations.
- Version control systems like Git are your safety net when collaborating. They track code changes and allow seamless integration of code, averting the chaos that comes with manual tracking.
Implementing Advanced Python Capabilities:
- Advanced Python packages and frameworks aren’t forgiving to the uninitiated. They assume a level of familiarity with essential development tools.
- Whether it’s data science, web development, or automation, knowing how to use these tools is a golden ticket to leveraging Python's full potential.
Career Advancement and Satisfaction:
- A well-rounded skill set is your gateway to advanced projects and roles. It’s not just about higher salary prospects but about carving a niche for yourself that can be incredibly satisfying.
- As tasks become less cumbersome and collaborative efforts run like a well-oiled machine, job satisfaction isn’t a mere phrase but a lived reality.
How We Equip You for the Challenge
Our course, Tooling Essentials for Python Users, doesn’t just teach you about these tools, but makes sure you understand their practical applications in real-world scenarios. By incorporating practical tasks, our course gives you the ability to leverage these tools to solve problems that developers face every day.
How This Course Will Set You Apart
The course stands out for several reasons:
- Practical Approach: Unlike many courses that focus on theory, ours is built around practical exercises and real-world applications that'll make sure you’re ready for professional challenges.
- Interactive Learning: Our platform is designed for active learning; our interactive coding challenges make sure you learn by doing. While we do give you some guidelines, feel free to mix things up and tackle those "What if I try this?" moments that pop up during exercises.
- Personalized Feedback: With Dataquest, you're not just another student. Our system provides tailored feedback on your coding exercises, helping you identify areas for improvement and making continuous learning rewarding.
Starting Your New Learning Path with Our Support
Starting on a new learning path often comes with questions and uncertainties — "How can I learn all these tools?" or "Will I be able to effectively apply them in my projects?" might be some of the concerns you have. We recognize these challenges, and our Tooling Essentials for Python Users course is meticulously tailored to guide you every step of the way.
What to Expect and How to Enroll:
Comprehensive Curriculum: Our course offers step-by-step guidance. We've deconstructed complex concepts into digestible segments, making for a seamless learning trajectory. By the end of the course, you'll not only grasp the tools but be adept at implementing them in real-world projects, optimizing your workflow.
Community Involvement: Beyond the course content, you'll be welcomed into our supportive community. It isn't just a forum either – it's a dynamic platform where you can collaborate on projects, network with like-minded developers, and get real-time solutions to your challenges.
Easy Enrollment Process:
- Visit the Course Page: To get more details and register, simply head over to our course page.
- Engage with the Community: Once registered, immerse yourself in our online community, a platform to connect, share, and learn.
- Jump into learning: Post-enrollment, you'll gain immediate access to all course materials. Happy coding!
So, what are you waiting for? Enroll now to level up your skill set, open new career avenues, and stand out with your new Python development skills!
|
OPCFW_CODE
|
I want to initialize the struct using this RegisterSymbol() function, but I get this error: expected expression before '{' token.
typedef void (*func_get)() ; // function pointer
void RegisterSymbol(Symbol *S1)
{
symbolDic[dicIndex].SymbolId = S1->SymbolId ;
symbolDic[dicIndex].get = S1->get ;
}
void SymbolInit() //array initilizer
{
RegisterSymbol(0x0A ,Get_temp());
RegisterSymbol({10, &getTempRawAdc()});
}
typedef struct //structure
{
uint32_t SymbolId;
func_get get;
} Symbol;
Please help me fix this error; I'm trying to assign values to the structure members but can't get it to compile.
Have you tried moving the typedef struct definition up above where its members are referenced, to just below the first typedef?
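A minimal sketch of that reordering, keeping only the declarations (stdint.h supplies uint32_t; the function body stays as in the question):
#include <stdint.h>
typedef void (*func_get)(void); /* function pointer type */
typedef struct /* define Symbol before anything that uses it */
{
    uint32_t SymbolId;
    func_get get;
} Symbol;
void RegisterSymbol(Symbol *S1); /* Symbol is now a complete, known type */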
First, format your code correctly; as posted, it cannot be read.
Also the first call to RegisterSymbol is passing 2 arguments where 1 is expected.
Note that dicIndex is not incremented between the two calls, so the second call would overwrite the array elements set by the first call. But it is unclear anyway why you define a struct and then later split the members across different arrays. The struct does not seem to be used for anything. Why not just pass two arguments to the function?
Given a function declared as yours is:
void RegisterSymbol(Symbol *S1)
The argument must be a pointer to a Symbol structure. There is a variety of ways to provide that. Here is a particularly simple alternative:
void f() {
// Declare s as a Symbol, with initializer:
Symbol s = { 0x0A, Get_temp }; // pointer to Get_temp, not the result of calling it
// Pass the address of s
RegisterSymbol(&s);
}
Or, provided that your compiler conforms to at least C99, among your other options would be to pass the address of a compound literal:
void f() {
// Pass the address of s
RegisterSymbol(&(Symbol){ 0x0A, Get_temp });
}
It is important at this point to observe that the lifetime of the object represented by that compound literal ends when execution of the innermost containing block (the body of f()) completes. That doesn't matter here, though, because your RegisterSymbol() function does not retain the passed pointer, and the member values it copies out are not tied to the literal's lifetime.
But since indeed the registration function doesn't retain the passed pointer, you could consider having it instead receive a structure by value:
void RegisterSymbol(Symbol S1) {
symbolDic[dicIndex].SymbolId = S1.SymbolId ;
symbolDic[dicIndex].get = S1.get ;
}
void f() {
// Pass a copy of a Symbol
RegisterSymbol((Symbol){10, Get_temp});
}
or receive the structure members directly:
void RegisterSymbol(uint32_t SymbolId, func_get get) {
symbolDic[dicIndex].SymbolId = SymbolId ;
symbolDic[dicIndex].get = get ;
}
void f() {
// Pass the components of a symbol
RegisterSymbol(10, Get_temp);
}
typedef void (*func_get)() ; // function pointer
void RegisterSymbol(Symbol *S1)
{
symbolDic[dicIndex].SymbolId = S1->SymbolId ;
symbolDic[dicIndex].get = S1->get ;
}
When the compiler sees Symbol* the type Symbol is not yet defined...
Put your type definitions at the top of the file or in a header file.
Compare with this code
typedef void (*func_get)(); // function pointer
void Get_temp();
void getTempRawAdc();
typedef struct //structure
{
uint32_t SymbolId;
func_get get;
} Symbol;
void RegisterSymbol(Symbol* S1, Symbol symbolDic[])
{
static unsigned int dicIndex = 0;
symbolDic[dicIndex].SymbolId = S1->SymbolId;
symbolDic[dicIndex].get = S1->get;
dicIndex += 1;
return;
}
void SymbolInit(Symbol dic[])
{
Symbol local;
local.SymbolId = 0x0A;
local.get = Get_temp;
RegisterSymbol(&local, dic);
local.SymbolId = 10; // second symbol, matching the question's second call
local.get = getTempRawAdc;
RegisterSymbol(&local, dic);
return;
}
void Get_temp() {}
void getTempRawAdc() {}
This is just something that compiles and builds an array of Symbols for you to compare and fit to your needs
|
STACK_EXCHANGE
|
Android 4.1 crash on zoom
Hello guys !
So I'm starting a new app based on Cordova, and wanted to use Leaflet for my maps, either via Mapbox or directly with OSM. I encountered an issue when testing the most basic example: on Android 4.1 only, the app crashes when zooming in. I looked for many hours all over the internet, and found solutions that either didn't work or that I sometimes didn't even understand (someone said << android23 = ua.search('android [23]') !== -1 || ua.search("android 4.0") !== -1 || ua.search("android 4.1") >> - where am I supposed to put that?!)
Note that the issue isn't even linked to the application itself, because when launching the stock Android browser, going to the Leaflet website or Mapbox website, and trying to zoom in on the available examples, the browser crashes! On Chrome for Android all is fine, though.
Sooo there it is, how to prevent that crash ?
Thanks !
@mourner is it possible to add this issue to https://github.com/Leaflet/Leaflet/milestones/1.0-beta1 ?
@ilvalle not sure if it's the same bug as #2693, can you verify any of the proposed fixes?
I changed Browser.js as
android23 = ua.search('android [23]') !== -1 || ua.search('android 4.1') !== -1
and loaded leaflet with
window.L_DISABLE_3D = true;
still crashing after a very few zoom.
With the following, it seems that the crash happens when you reach max-zoom
#map {
transform: none;
}
In general, the app crashes when it is loading tiles and you try to zoom at the same time.
Hope it helps
Please have a look at my comment [1]. I did a lot of debugging on this, and all of the freezes could be traced back to browsers crashing when css-transform was set to its previous value.
[1] https://github.com/Leaflet/Leaflet/issues/2693#issuecomment-46053713
I followed @fab1an's suggestion
I placed in https://github.com/Leaflet/Leaflet/blob/4c0f02879470cb4c1781db1739138d286040f0b4/src/dom/DomUtil.js#L141
pos.x = pos.x + (Math.random() / 10000);
it isn't so pretty, but it works. For me, the last thing is to apply this trick only when an android41 is detected.
It's not pretty at all, but I think it fixes the problem at its core. I don't think it's possible to reason about the code in a way that guarantees identical values will never be set. Another way would be to store the last value that has been set and compare against it?
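A rough sketch of that store-and-compare idea (the helper and its bookkeeping are hypothetical; L.Util.stamp and L.DomUtil.setPosition are the real Leaflet APIs involved):
var lastPositions = {};
function setPositionIfChanged(el, point) {
  var id = L.Util.stamp(el); // Leaflet's unique id per object
  var prev = lastPositions[id];
  if (prev && prev.x === point.x && prev.y === point.y) {
    return; // skip writing an identical CSS transform
  }
  lastPositions[id] = point;
  L.DomUtil.setPosition(el, point); // only touch the transform on real changes
}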
@fab1an yeah. I'm thinking of a prettier solution. Thanks again for finding out the root cause of the issue.
You're welcome, thanks for your work again ;).
Guys, could you please test if things got better after #3270?
|
GITHUB_ARCHIVE
|
Filing taxes as a minor from Upwork
So I am in a bit of a pickle. I am 17 years of age and have just started freelancing and am loving it! I currently use Upwork.com to find clients to do work with. However, I created an account for Upwork in 2015 when I was even younger, which means I must have (regrettably, I know guys...) put a fake birth date when I was 14 years old and a dumb freshman to get into the site. I recently have discovered that I must be 18 years old to use Upwork.com because contracts are only legally binding when you are 18 years old and above.
So, with all of this being said, I will have made around $400-$500 after my current job is done. Since this money is technically "ill-gotten" since I used Upwork as a minor, how do I file this money for taxes? Am I even able to file taxes since the money I earned was earned on a website where I violated TOS? The LAST thing I need is to have the IRS come back on me for tax evasion when I'm 30 years old.
I understand what I have done here is wrong, but it was all unintentional. I didn't create an account for Upwork with the intention of violating the TOS.
If you guys could help me out, it would be highly appreciated.
Just have your parents claim you as a dependent (like they're going to do) and have them include it in "other income" for you. You don't necessarily have to have a W-2 from an employer to file taxes. The IRS doesn't really care that much that you broke some site's TOS. They just care that you declare it if required.
How are you expecting anyone here to help? You violated the TOS.. you can't legally enter into a contract as a minor... you generally do not file taxes on illegal earnings. This is why there are illegal things such as money laundering - to hide earnings you shouldn't legally be making.
The only possible solace you may find is that there's a limit to the amount of income you must make before filing taxes is mandatory. It can vary, but generally it's $600US.
Now, if you mess up and your clients find you violated the TOS and are underage, they could sue you in civil court for any damages you cause, since you are committing fraud. I'm sure Upwork could take action if so inclined as well.
The smart and legal thing to do would be to stop work. Cancel any current ongoing projects. Don't collect any money. And wait until you are 18 before attempting to take on a new project.
I absolutely realize this is not the answer you want to hear. I imagine you are salivating over that possible $500. But the reality is.. there's no getting around your circumstances. You can either end it abruptly, showing some remorse, or continue with the unethical behavior. Continuing merely increases your odds of creating more problems.
They can't sue him; he was a minor at the time. They could try to make his parents financially responsible, though. At the same time, a court could just say the contract was invalid, therefore nothing can be done.
You can sue a minor.. you just may not be able to collect anything until they are an adult. Sometimes the parents will take care of it, but they aren't obligated to do so. And if fraud caused damage.... then that's damage, contract terms are irrelevant. True, he/she can't be bound by the contract, but negligence is another matter.
You're going to need to show me one single instance of a minor being sued where the parents weren't the ones held liable.
You might want to consider changing your username... http://boards.answers.findlaw.com/topic/228170-laws-on-suing-teens-under-the-age-of-18/ -- https://www.nolo.com/legal-encyclopedia/free-books/small-claims-book/chapter8-8.html
Guys... This was all completely unintentional. It's not like I am unreliable; I actually do great work for clients. I find it a little dumb for me to be penalized for wanting to make money in a different way from the rest of my peers. I understand why it's a problem though. I'm just a little upset that it was all an accident lol
I wasn't meaning to imply any malice on your part @MrStank, just the facts of your situation. I understand your motivations completely. Unfortunately, with or without malice, you are in the spot you are in.
Yeah. Thanks for helping out @Scott. I suppose I will end my current Upwork contract and wait till 18 to use Upwork again.
|
STACK_EXCHANGE
|
(The following is a cross-post from the Lean Designs blog)
It gives me great pleasure to announce that you can now create hybrid width layouts using Lean Designs, letting you create professional-looking websites in a fraction of the time compared to doing it by hand.
To understand the significance of this new feature and how to use it, we need to go over a bit of terminology:
Three types of layouts
In a fixed width layout, the site is given a fixed, constant width which is then displayed (usually centered) in the browser. With this type of layout, you cannot use any of the excess space in the browser except for specifying the page’s background style (white in the example below).
Until today, fixed width layouts were the only type of design you could create with Lean Designs.
100% Width / Fluid Width / Liquid Width
In a 100% width/fluid width/liquid width layout, the width of the site is based on the width of the browser. Because the positions of the elements are based on the width of the page, they'll appear differently to visitors depending on the width of their browser.
Fluid widths are not currently supported by Lean Designs.
Hybrid width layouts, as the name implies, take pieces of both the fixed width and the 100% width layouts. With this type of layout, the outer elements can have 100% widths, but their contents are held in a fixed width, centered container.
With today’s release, you can now create this type of design using Lean Designs. Note that the term hybrid width is not a standard web design term, but we needed a way to differentiate this type of layout from the fixed width and 100% width layouts.
Creating Hybrid Width Designs in Lean Designs
Creating hybrid width designs in Lean Designs is a piece of cake:
1. Add a DIV element to your designs
2. Position it against the left side of the canvas
3. Stretch it so that it spans the entire width of the canvas
When you export it, Lean Designs will recognize that you want this element to have a 100% width and generate the appropriate HTML/CSS.
For example, let’s add an H1 element, assign it an id of logo, assign the red element an id of header, and export it. This is what it will look like in the browser:
And here’s the HTML/CSS that Lean Designs generates:
Pretty cool huh?
Lest you think this is only for creating simple, ugly sites, check out the following example of OneHub’s homepage re-created using Lean Designs in about 10 minutes:
In the editor:
Exported to HTML/CSS: (click here to view site)
Hope you like it. If you don’t have an account, you can sign up for free and create your own design in minutes.
As always, drop us a note if you run into any bugs or have ideas for improvement.
matt / email@example.com
|
OPCFW_CODE
|
Error when trying to run flip commands
On latest master (1.2.1) I receive the following error when issuing a flip command from eg/keyboard.js.
Below is the output:
Configured for Rolling Spider! RS_R107537
ready for flight
takeoff
takeoff
takeoff
/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/characteristic.js:60
callback(null);
^
TypeError: object is not a function
at Characteristic. (/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/characteristic.js:60:7)
at Characteristic.g (events.js:180:16)
at Characteristic.emit (events.js:92:17)
at Noble.onWrite (/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/noble.js:279:20)
at emit (events.js:106:17)
at nobleBindings.write (/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/mac/yosemite.js:550:10)
at Noble.write (/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/noble.js:272:19)
at Characteristic.write (/Users/kold/github_clones/node-rolling-spider/node_modules/noble/lib/characteristic.js:64:15)
at Drone.writeTo (/Users/kold/github_clones/node-rolling-spider/lib/drone.js:342:49)
at frontFlip (/Users/kold/github_clones/node-rolling-spider/lib/drone.js:605:10)
This appears to be a noble issue more than rolling spider; it's calling the callback before checking whether the provided callback is valid.
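A sketch of the kind of guard that fix implies (the surrounding noble code is assumed here, not quoted from the library):
// Inside the write handler: only invoke the callback when one was actually supplied.
this.once('write', function () {
  if (typeof callback === 'function') {
    callback(null);
  }
});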
Indeed. I filed here for context and meant to circle back yesterday to clarify what you've mentioned and cc @sandeepmistry
Can you try with 1.3.0 I think I found the bug in my side... possibly or another bug that would have come out later ;)
Yeah that patch looks like it will work. I was headed in that direction yesterday but was managing a frustrated 4 year old upset because it didn't flip ;) Will give it a try this afternoon.
also if you have a logitech f310 hanging around check out eg/gamepad.js ;)
|
GITHUB_ARCHIVE
|
Restrict Crypto Spending
At this point, anyone can edit anyone else's wallet/balance to 0, and edit their own wallet/balance to infinity. This is clearly not a great situation to be in.
To deal with this, we'll add a smart function that ensures that if you are editing your own wallet/balance, you can subtract, and if you are editing someone else's wallet/balance, you can only add.
In previous lessons, we gave you the full transaction with the exception of the code of the smart function. In this lesson, you'll be writing the full transaction and the smart function. Don't worry, you're ready for it! In this transaction, you need to:
- Create a new function
- Add that function to the relevant predicate
Creating a New Function
The _fn object should have the following key-value pairs:
- _id - This should be the collection of the subject you are creating (_fn).
- name - We'll use "subtractOwnAddOthers?".
- code - See Writing the Smart Function below.
- doc - An explanation of this function in words.
Adding that Function to the Relevant Predicate
In order to do this, you need to include:
- _id - The value of your _id key should specify the predicate you are updating with a unique two-tuple (see the previous lesson for an example).
- spec - A reference to the function you just created; use a tempid (see the previous lesson for an example). This is a multi predicate, so your value should be wrapped in square brackets, [ and ].
- specDoc - The error message that you want to be thrown if a transaction violates this function. Note that if a predicate has multiple smart functions, the specDoc is shared among the smart functions.
Writing the Smart Function
To write it, you will need one (or several) of the Universal SmartFunctions.
You will also need these two context-specific functions:
- (?o) represents the object (value) for the predicate you are updating. (Technically, this is the proposed object.)
- (?pO) represents the previous object (value) for the predicate you are updating.
And finally, you will need the function ownWallet?. If you remember, we added ownWallet? to our ledger in Lesson 5. In subtractOwnAddOthers? we also need to figure out whether the wallet being updated belongs to the user doing the updating. However, instead of re-writing ownWallet?, we can just use it in our smart function. In other words, if you insert (ownWallet?) anywhere in your smart function, it will return true if the wallet is yours, and false if it is not.
Write and Add subtractOwnAddOthers?
Follow the instructions above to create a new smart function and add it to the relevant predicate.
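For orientation, one plausible shape for the function body, built only from the pieces above (a sketch, not the lesson's official solution):
; Own wallet: the balance may only stay the same or decrease.
; Someone else's wallet: the balance may only stay the same or increase.
(if-else (ownWallet?)
  (<= (?o) (?pO))
  (>= (?o) (?pO)))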
|
OPCFW_CODE
|
def array_city(x, p1, p2):
This function makes an x by x grid of 0s, 1s, and 2s using np.random.choice. The idea of this function is the foundation of the project and was not difficult to think of. Before learning about np.random.choice, I created a grid and iterated through each row and each column using two for loops, assigning each position a random integer between 0 and 2. The hard part was trying to control the percentages of zeros, ones, and twos. That version was lengthy and took almost the entire first week to create. np.random.choice allowed me to do it in one line.
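A sketch of what that one-liner might look like (parameter meanings are assumed: p1 and p2 as the probabilities of a 1 and a 2, with the remainder left empty):
import numpy as np
def array_city(x, p1, p2):
    # One independent draw per cell; the probabilities must sum to 1.
    return np.random.choice([0, 1, 2], size=(x, x), p=[1 - p1 - p2, p1, p2])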
After creating the controlled grid, I, along with the other Schelling model students, had no idea where to go from there. Professor Granger pointed us in the right direction by telling us to write get_neighbors, which takes a row and column in a specific grid and returns the coordinates of each neighbor. Initially, I wrote the function specifically for middle spaces, which returned all 8 surrounding coordinates. Upon testing, I realized that the spaces along the border are special edge cases that need to be treated differently, so I added additional if statements to fix that. My next function was going to return the value of each coordinate that get_neighbors returned, but I decided to combine the two and have get_neighbors return the values instead of the coordinates of each position. Testing this function was quite easy: all I did was make a controlled 3x3 grid and write assert tests for each space. I chose a 3x3 grid because it contains every special case that I wrote my get_neighbors function for.
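For reference, a compact version of that edge handling (a sketch; my actual function used explicit if statements per case):
def get_neighbors(grid, row, col):
    """Return the values of the up-to-8 neighbors of (row, col)."""
    n = len(grid)
    values = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            # Skip the cell itself and anything off the board.
            if (dr, dc) != (0, 0) and 0 <= r < n and 0 <= c < n:
                values.append(grid[r][c])
    return values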
Coming up with the idea for this function was fairly instinctive. I knew that if something was already satisfied, I did not need to move it; conversely, I needed to move everything that was not satisfied. Thus, I needed a function to compute the satisfaction of a specific space. Writing this function was easy as well. I initialized three variables (zeros, ones, twos), iterated through each value in get_neighbors, and added 1 to the corresponding variable. At the end, I divided each by the length of get_neighbors and returned the corresponding percent.
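In code, that computation might look like this (the function name is assumed):
def percent_same(grid, row, col):
    # Fraction of neighbors holding the same value as this cell.
    neighbors = get_neighbors(grid, row, col)
    counts = {0: 0, 1: 0, 2: 0}
    for value in neighbors:
        counts[value] += 1
    return counts[grid[row][col]] / len(neighbors)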
def sat(satisfaction,grid): def unsat(satisfaction,grid): def empty(grid):
This function was not difficult to come up with either. I knew I needed the coordinates of all the satisfied, unsatisfied, and empty spaces. I initialized 3 empty lists, ran through each row and column of a specific grid, and appended each coordinate to the corresponding list depending on its satisfaction. Initially I made just one function for all three lists, but I realized while writing my later functions that separating them would be easier. I even went as far as making a function for satisfied ones, another for satisfied twos, and two more for unsatisfied spaces. Then I realized that that would force me to work on 1s and 2s individually instead of at the same time. I went from one function, to three, to five, then back down to three.
I was stuck after completing the last function, so Professor Granger helped me with the idea of this one. This function simply passes a row and column in a specific grid, and when assigned a new value, will return True if satisfied and False if unsatisfied.
This function is simple. If should_move returns True, then implement the move. Figuring out the moving logic was without a doubt the hardest part of this project. The way I wrote it is that it takes a random coordinate in empty, and if should_move returns True, then the empty coordinate is assigned the new value and the old coordinate is replaced with a 0. I had to make sure that I maintained the original amount of reds and blues, rather than just reassigning new values. My empty function returned the coordinates of all empty spaces in an ordered sequence, so when I passed do_move on the list of empties, it did not work. By adding np.random.shuffle, do_move works on random empty spaces instead of in an ordered fashion.
This function simply runs do_move on each position in the grid.
This function simply runs move_all until all positions are satisfied.
This function takes a grid and runs through each position, assigning each value a specific color.
There are numerous controls that affect the grid and its satisfied output, such as size, percent of each population, and satisfaction value. The effect that some of these controls have on how difficult it is to satisfy the grid is straightforward. For example, satisfying a smaller grid should be easier than satisfying a larger one. Also, a higher satisfaction percent should be more difficult to achieve than a smaller one. Some factors are not so straightforward, and those are the questions that I would like to tackle.
First, I would like to study how the difference between blue and red percentages affects the grid. I could see this going either way, as one color might be easier to satisfy because of how many there are, while the other would be much more difficult. To do so, I created a 50x50 grid with a constant 10% empty, 40% blue, and 50% red. Then I ran sat_all and counted how many times it looped. Then I did the same thing, but increased the percentage of red by 10% and decreased the amount of blue by 10% each time. I found that as the difference increased, the number of loops increased as well. Thus, dominance of one population makes the grid more difficult to satisfy.
Second, I would like to study how grid size affects the ability to reach satisfaction. I'm inclined to predict that a bigger grid means more loops. However, a bigger grid also means more empty spaces that are open for moving into, which, from previous testing, makes the grid easier to satisfy. To test this, I started with a 10 by 10 grid, computed how many loops it ran, then increased the size by 10 and repeated. I found that as I increased the grid size with constant red, blue, and satisfaction percentages, satisfaction became exponentially more difficult to achieve.
Visualizations of these observations are in the "Schelling Graphs" notebook
I have two topics I still would like to investigate in the future. The first is changing the movement rules. Currently, I pick random empties and assign them values based on satisfaction. To make the model more realistic, I could set rules for movement. For example, only adjacent movement is allowed, blues and reds can swap, blocks can only move certain distances. The second issue came up when I ran my program with different satisfaction values. The difference between high and low satisfaction values is very apparent, and can be seen at the bottom of my “Schelling Graphs” notebook. Both gathered the reds and blues at the top of the grids. I think this is a result of how I iterated through my grids. Maybe if I change that, the colonies will collect at different locations of the grids.
The most important coding lesson that I learned from this project is to break my ideas up into small functions that do small jobs. Initially, I wrote much too large functions that became too complicated to test and debug. By breaking down these functions, I was able to delve deeper into the logic of my code and better understand step by step what I was trying to achieve.
|
OPCFW_CODE
|
At a recent TechMentor conference, I presented a session on Hyper-V. An attendee asked, "Since Hyper-V is still relatively new, what are the best practices for the hardware I should buy?" That question was thought-provoking; so far, the advice I've heard has always been, "Buy the most powerful servers you can afford."
Now that isn't the answer the attendee was looking for. So during the session, we discussed more specific answers. We discovered that while the "bigger is better" mantra makes sense for individual Hyper-V hosts that aren't clustered, the model changes dramatically when you add high availability and Windows failover clustering to the mix. This tip explores how to choose the right hardware for Hyper-V high-availability clusters while also minimizing wasted RAM.
Individual Hyper-V host sizing
First, some guidance for buying Hyper-V hosts: yes, buying the most powerful hardware you can afford helps ensure that you get the highest number of virtual machines (VMs) per host. But that isn't necessarily the best approach, because Hyper-V tends to be constrained by physical RAM more than by any other resource.
In contrast, VMware's vSphere today enjoys memory page table sharing and memory balloon driver features. The combination of these features allows more virtual RAM to be assigned and run than is actually available on a system. As a simple example, with these two features, 17 virtual machines of 1 GB each can run on a 16 GB server. Perhaps this isn't optimal for a high-performance production scenario, but it is absolutely helpful during a host failure.
Neither the first release of Hyper-V with Windows Server 2008 RTM (release to manufacturing) nor the second release with R2 supports this capability. So a Hyper-V host cannot oversubscribe the RAM assigned to VMs beyond the physical RAM installed in the box. As a result -- and again, this is a simplistic example -- if you have 16 GB of RAM, you'll never be able to power on a 17th 1 GB virtual machine. The management interface simply won't allow it.
High-availability Hyper-V hosts tend to be RAM-bound more than bound by any other resource. Servers with 16 GB of RAM will have enough processing power in their four- or eight-way processors to handle most well-managed VM workloads. Obviously, high-processor-use workloads such as big Exchange or SQL servers will yield a different result. But for virtual machines that are good virtualization candidates, RAM -- not processing power -- is the scarce commodity in Hyper-V.
Thus, for individual Hyper-V hosts, purchase server hardware with as much RAM as you can afford. When it comes to RAM these days, the inflection point between additional hardware and price hovers around the 32 GB mark. Above that level, the jump to even more RAM becomes cost-inefficient and is not a good buy. But get as much memory as possible, and you'll be satisfied with the results.
But when you join multiple Hyper-V hosts into a Windows failover cluster, your purchasing decisions become more complex. And again, the problem involves Hyper-V's RAM oversubscription limitation.
In short, clustered Hyper-V instances must be architected so that hosts in a cluster can support the loss of at least one node. Otherwise, virtual machines could go down completely when a host's motherboard dies. If a host in a cluster goes down, every virtual machine on that host must be migrated and rebooted onto one of the remaining hosts. Because of Hyper-V's RAM limitation, remaining hosts must have the right amount of residual and unused RAM so that the lost host's virtual machines can power on.
The best way to explain this is through the use of an example. Let's look at three potential clusters. In each example, each host is configured with four processors and 16 GB of RAM. Cluster one is made up of two hosts, cluster two has four hosts, and cluster three contains six hosts.
In this example, assume that you've planned for complete failover capability with Hyper-V. The remaining hosts on the cluster must be able to successfully power on virtual machines after a host is lost. As a disclaimer, in each of these examples, I recognize that some RAM needs to be reserved for host processing. I also recognize that VMs are often configured with more than 1 GB of RAM. But I'm using round numbers to make the math easy while illustrating my point.
In cluster one, only two hosts are available. This means that the maximum number of virtual machines that can be hosted by this cluster is 16, at 1 GB apiece. With two hosts that each support 16 GB of RAM, I have an effective waste percentage of 50%. Exactly half my cluster capacity must sit waiting for one of the cluster nodes to die and VMs to migrate over to a functioning host. This is true whether I host all 16 VMs on one host or evenly spread them -- eight and eight -- between the two nodes of the cluster. I must do so because I need sufficient capacity to support re-homing those virtual machines if a host is lost. That's a lot of waste.
In cluster two, four hosts are available. With four hosts, I have more locations where virtual machines could be re-homed in the case of a host loss. Specifically, I can support up to 48 VMs of 1 GB each. Sixteen GB of RAM must be reserved across the entire cluster to support the loss of a single host. Whether I home all 48 VMs on three servers, leaving one completely empty, or I balance them across the cluster, my percentage of waste is 25%. Our waste percentages are improving.
Cluster three increases the number of hosts to six, which further decreases the waste percentage. Across six hosts, I can now support 80 VMs of 1 GB each, again leaving 16 GB of RAM reserved for a loss. In this cluster, I have reduced my overall waste percentage to just 17%. Still not great, but minimal in comparison with the others.
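The pattern across these three clusters generalizes neatly. If a cluster has n identical hosts with R GB of RAM each, and one host's worth of RAM is held in reserve for failover, the wasted fraction is simply 1/n (a back-of-the-envelope check that assumes identical hosts and a full one-node reserve):
waste = R / (n x R) = 1 / n
n = 2 gives 1/2 = 50%; n = 4 gives 1/4 = 25%; n = 6 gives 1/6, roughly 17%.
This is why adding hosts, even smaller ones, reduces the reserved-RAM overhead faster than adding RAM to a fixed number of hosts.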
When it comes to Hyper-V clusters, count matters as much as size. Specifically, the number of hosts in a cluster is important to minimize the level of RAM waste. All of these calculations are necessary because Hyper-V does not yet support RAM oversubscription. Microsoft will not support this feature set in Windows Server 2008 R2, nor has the company predicted when this capability will arrive, although this independent author believes that we'll see such a capability very soon.
So my guidance regarding Hyper-V hosts in a cluster is not only to buy as much hardware as you can but also to buy as many hosts as you can. If you can trade a slightly beefier set of systems for a set that includes more systems, you'll waste less RAM.
About the author
Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books including Windows Server 2008: What's New/What's Changed, available from Sapien Press.
|
OPCFW_CODE
|
141 Blynk Alternatives and Similar Apps
Explore 141 Blynk (legacy) alternatives and similar apps in the list below. APKFab gathers the best apps like Blynk (legacy) that you can use on Android.
Virtuino 6.0.02 is an Android Tools app developed by Ilias Lamprou. Explore 107 alternatives to Virtuino. IoT HMI platform - MQTT, MODBUS, HTTP, Bluetooth, WIFI, Thingspeak.
Alarm Panel 1.1.1 Build 9 is an Android House & Home app developed by ThanksMister LLC. Explore 129 alternatives to Alarm Panel. An MQTT Alarm Control Panel for Home Assistant and home automation platforms.
Opsgenie 3.4.6 is an Android Tools app developed by Atlassian. Explore 216 alternatives to Opsgenie. Opsgenie notifies users via push notifications, email, SMS and voice calls.
Hue Essentials 1.22.2 is an Android Lifestyle app developed by Hue Essentials. Explore 175 alternatives to Hue Essentials. Discover ways to get the most out of your smart lighting.
Automate 1.29.3 is an Android Tools app developed by LlamaLab. Explore 210 alternatives to Automate. Create flowcharts that make your device perform tasks automatically.
Timer 555 Calculator 3.4.10 is an Android Tools app developed by Peter Ho. Explore 86 alternatives to Timer 555 Calculator. Calculation of astable and monostable circuits.
Blink 188.8.131.52 is an Android Tools app developed by Immedia Semiconductor. Explore 237 alternatives to Blink. Smart Home Security Monitoring from Blink and Amazon.
HomeHabit 10.0 is an Android House & Home app developed by Habit Automated LLC. Explore 178 alternatives to HomeHabit. Smart Home Dashboard.
WallPanel 0.9.5 Build 6 is an Android House & Home app developed by ThanksMister LLC. Explore 180 alternatives to WallPanel. An application for Web Based Dashboards and Home Automation Platforms.
Segway-Ninebot 5.2.11 is an Android Travel & Local app developed by Ninebot(Beijing) Tech Co., Ltd.. Explore 83 alternatives to Segway-Ninebot. Segway-Ninebot, Simply Moving!
Domoticz - Home Automation is an Android Productivity app developed by HNO Mobile Games. Explore 102 alternatives to Domoticz - Home Automation. Home Automation app to control your home devices and lead a smarter life.
Gosund 3.22.5 is an Android Tools app developed by Cuco Smart. Explore 136 alternatives to Gosund. Through the Gosund App, control your smart home easily and safely.
openHAB Beta 2.17.9-beta is an Android Lifestyle app developed by openHAB Foundation. Explore 122 alternatives to openHAB Beta. Vendor and technology agnostic open source home automation.
Virtuino MQTT 1.0.36 is an Android Tools app developed by Ilias Lamprou. Explore 85 alternatives to Virtuino MQTT. MQTT HMI platform - MQTT visual interface.
MQTT Dashboard 1.0.0 is an Android Tools app developed by Vetru. Explore 148 alternatives to MQTT Dashboard. Manage your devices, Node-RED or home automation system using the MQTT protocol.
openHAB 2.17.4 is an Android Lifestyle app developed by openHAB Foundation. Explore 176 alternatives to openHAB. Vendor and technology agnostic open source home automation.
HP QuickDrop 1.0.7245 is an Android Tools app developed by HP Inc. Explore 74 alternatives to HP QuickDrop. Share files between devices.
Virtuino Modbus 1.0.36 is an Android Tools app developed by Ilias Lamprou. Explore 50 alternatives to Virtuino Modbus. Modbus HMI platform - Modbus visual interface.
iRobot Coding 2.1.6 is an Android Education app developed by iRobot. Explore 0 alternatives to iRobot Coding. Explore code and unleash creativity with a learning journey powered by robots.
Smart Control 7.0.9 is an Android Lifestyle app developed by AwoX. Explore 142 alternatives to Smart Control. Simply the best way to enjoy your home lighting atmosphere.
Digi-Key 4.28.3 is an Android Business app developed by Digi-Key Electronics. Explore 118 alternatives to Digi-Key. Search and order electronic components from Digi-Key on your Android device.
RaspController 5.0.3 is an Android Tools app developed by Ettore Gallina. Explore 144 alternatives to RaspController. Manage your Raspberry Pi with your smartphone.
Tuya Smart 3.28.5 is an Android Lifestyle app developed by Tuya Inc.. Explore 287 alternatives to Tuya Smart. Smart Life. Smart Living.
|
OPCFW_CODE
|
sync: sync all payment updates
Rather than syncing payments based on the payment time, sync based on the created and updated indices in Core Lightning. This ensures that all updates are retrieved, even when a payment is updated after the fact. To do this, listsendpays is called rather than listpays, because listsendpays supports filtering by index. Starting from the last synced index, fetch all newly created payments and updates. It's possible that a given sync round doesn't fetch all payment parts associated with a payment. Therefore, the sendpays are cached in a local database. When a new sync round runs, all known existing parts are fetched from the local database as well, to ensure the resulting SDK payments are 'complete', i.e. not missing any parts. This should fix the issue where, for closed channels, payments would be marked pending forever.
Should fix https://github.com/breez/breez-sdk-greenlight/issues/1094
Inspiration from https://github.com/ElementsProject/lightning/blob/136244835215ae7fcb48ffcaf3c5bb80c3173348/plugins/pay.c#L415-L533
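For orientation, one round of the loop described above might be sketched like this (all names here are illustrative, not the actual SDK or Core Lightning API):
// Sketch of one incremental sync round (hypothetical types and methods).
async fn sync_payments(store: &Store, node: &Node) -> Result<(), Error> {
    // Indices persisted from the previous round.
    let last = store.last_sendpay_indices()?;
    // Fetch only what changed, by index rather than by payment time.
    let created = node.list_sendpays_created_after(last.created).await?;
    let updated = node.list_sendpays_updated_after(last.updated).await?;
    // Cache raw sendpays locally so later rounds can re-assemble all parts.
    store.upsert_sendpays(created.iter().chain(updated.iter()))?;
    // Re-group every known part per payment hash so each resulting payment is complete.
    for (hash, parts) in store.sendpays_grouped_by_hash()? {
        store.upsert_payment(assemble_payment(&hash, &parts)?)?;
    }
    store.set_last_sendpay_indices(max_indices(&created, &updated))?;
    Ok(())
}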
@JssDWt this looks very smart and the main benefit I see is that we can be truly incremental by fetching only the changed data.
One thing that is not clear to me is why the stuck pending payments happen with the current code.
After all, we do fetch all outbound payments, and as long as the payment is not completed we keep syncing it.
> One thing that is not clear to me is why the stuck pending payments happen with the current code.
> After all, we do fetch all outbound payments, and as long as the payment is not completed we keep syncing it.
Take the situation where a payment was pending on round 1 and failed on round 2. The current filter could:
1. filter out the round 2 payment based on the created_at timestamp, if another payment had already succeeded after the current one;
2. filter out the round 2 payment because it doesn't have a completed_at (this is only present for completed payments).
Note that this current PR also syncs all failed payments. Maybe it's worth adding a commit to delete failed payments from the payments database?
> One thing that is not clear to me is why the stuck pending payments happen with the current code.
> After all, we do fetch all outbound payments, and as long as the payment is not completed we keep syncing it.
> Take the situation where a payment was pending on round 1 and failed on round 2. The current filter could:
> 1. filter out the round 2 payment based on the created_at timestamp, if another payment had already succeeded after the current one;
> 2. filter out the round 2 payment because it doesn't have a completed_at (this is only present for completed payments).
> Note that this current PR also syncs all failed payments. Maybe it's worth adding a commit to delete failed payments from the payments database?
For the first case, isn't that what we want? If eventually the payment succeeded for that hash, why do we care about old attempts?
For the second case, as far as I can see in the code, we don't filter out these payments that don't have a completed_at value, so the payment should be included in the sync.
> For the first case, isn't that what we want? If eventually the payment succeeded for that hash, why do we care about old attempts?
I don't mean the same payment. If any other payment succeeds after the pending one is synced, then the since_timestamp will be after that of the pending payment, so it will be skipped.
> For the second case, as far as I can see in the code, we don't filter out these payments that don't have a completed_at value, so the payment should be included in the sync.
You're right. Only the since_timestamp matters there, which could again be after the completed_at value in case of a success.
> I don't mean the same payment. If any other payment succeeds after the pending one is synced, then the since_timestamp will be after that of the pending payment, so it will be skipped.
I see. Still, there is the question of how we skip that payment if we never filter out payments that were completed after the last_sync_time.
I mean, even if in the meantime a different payment succeeded and changed the last_sync_time, and only after that the "round 2" payment completed successfully, then we shouldn't have skipped it, because we don't skip payments whose completed_at is after the last_sync_time: https://github.com/breez/breez-sdk-greenlight/blob/main/libs/sdk-core/src/greenlight/node_api.rs#L1796
In short, perhaps I am missing something here, but I don't see how the since_timestamp can be after the completed_at unless the payment has already been synced as completed.
Sorry for being nitty on this; I think the incremental benefit is very valuable by itself, just curious whether this indeed solves the annoying pending issue.
> I mean, even if in the meantime a different payment succeeded and changed the last_sync_time, and only after that the "round 2" payment completed successfully, then we shouldn't have skipped it, because we don't skip payments whose completed_at is after the last_sync_time: https://github.com/breez/breez-sdk-greenlight/blob/main/libs/sdk-core/src/greenlight/node_api.rs#L1796
Right, so basically it should only be filtered out if the payment has completed successfully and has a completed_at time that is before the highest payment_time (which is either created_at or completed_at) of the latest sync. That seems unlikely indeed.
I agree it should work for the most part, as long as the timestamps are accurate.
There's another suspect: https://github.com/breez/breez-sdk/blob/f6c7c8729c51c01ab7a62787bc94c340e590f43a/libs/sdk-core/src/greenlight/node_api.rs#L1850
Used here: https://github.com/breez/breez-sdk/blob/3fb6672e7970ad4b3e1495b18305f655ddd920fe/libs/sdk-core/src/breez_services.rs#L1640
@JssDWt do you think it's worth having @andrei-21 check this PR against his issue to get feedback?
|
GITHUB_ARCHIVE
|
Facial recognition is increasingly being discussed, and its applications are being deployed across many diverse industries.
The pandemic has brought about a proliferation of contactless solutions aimed at preventing the spread of the virus, and biometrics are often at the top of the list.
Despite an increasing number of applications using biometrics and face recognition in particular, it is common to hear unpleasant stories of biased algorithms and ensuing discrimination, particularly when the technology is deployed by law enforcement.
While the impact of these events should not be diminished, it is important to consider that face recognition is merely a tool, and that any bias in it is inherently a result of how its algorithms are trained.
What is bias in face recognition?
The vast majority of face recognition systems today work by scanning face images, translating them into a numerical representation, and then comparing them to determine similarity.
To execute this process, however, the system first needs to use artificial neural networks to scan a substantial number of face images.
Through initially established rules, the algorithms then use deep learning to improve their predictions and get more accurate results.
The rules by which the algorithms start their learning process represent one of the two main issues related to bias in face recognition. From biased premises, after all, a biased result is always inferred.
The second issue relates to the sample selected to aid biometric algorithms in their learning process.
In other words, to simplify the process and comply with privacy regulations, many companies train their algorithms on standard datasets instead of creating their own, yet there is often no guarantee that the samples in those datasets are free from bias.
How can bias be mitigated?
Recently, our colleagues at Alice Biometrics, Daniel Pérez Cabo, Esteban Vázquez Fernández and Artur Costa-Pazo, accompanied by David Jiménez-Cabello and José Luis Alba Castro, analyzed this issue in a study focused on demographic biases.
The main purpose of the research is to enable accountability and fair comparison of several face Presentation Attack Detection (PAD) approaches.
Throughout the analysis, Costa‐Pazo and co-authors highlighted the flaws in most of the main image databases, then presented an attempt to evaluate ethnic bias using a self‐designed dataset and protocol.
“Fairness is a critical aspect for any deployable solution aiming at creating models that are agnostic to different social and demographic characteristics,” the paper reads.
To pursue this goal, the researchers took the GRAD-GPAD framework and added the categorisation of three new datasets, increasing the number of identities by more than 300% and the number of samples by more than 181%.
In addition, the team introduced new categorisation and labelling for sex, age and skin tone, as well as novel demographic protocols, visualisation tools and metrics to detect and measure the existence of biases.
The researchers dubbed the improved database GRAD-GPAD v2.
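The paper's own metrics are not reproduced here, but the general idea behind demographic bias metrics can be sketched as comparing error rates across groups; the group labels, data shape, and the notion of an acceptable gap are all assumptions made for illustration:

interface EvalSample {
  group: string;    // e.g. a skin-tone or age bucket
  genuine: boolean; // true = bona fide presentation, false = attack
  accepted: boolean;
}

// False-rejection rate per demographic group; a large spread between
// groups is one signal that the model is biased.
function perGroupFalseRejectionRate(samples: EvalSample[]): Map<string, number> {
  const total = new Map<string, number>();
  const rejected = new Map<string, number>();
  for (const s of samples) {
    if (!s.genuine) continue; // FRR is computed over genuine samples only
    total.set(s.group, (total.get(s.group) ?? 0) + 1);
    if (!s.accepted) rejected.set(s.group, (rejected.get(s.group) ?? 0) + 1);
  }
  const frr = new Map<string, number>();
  for (const [group, n] of total) frr.set(group, (rejected.get(group) ?? 0) / n);
  return frr;
}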
Different objectives, different scenarios
According to the paper, despite the best efforts of the community, the research results indicate that individual datasets have a strong built-in bias.
“The rationale behind this bias originates on the experimental setting as follows: each dataset has different objectives and evaluates the performance of the models in different scenarios.”
For instance, some of these systems may be built to be run only on mobile devices, either in outdoor or lab environments, with synthetic or natural illumination, using a simulated onboarding setting, or for deployment in other, specific scenarios.
“Even if we try to incorporate every possible scenario, we find that the biases are still present in some form,” the paper explains.
By incorporating additional datasets and a novel labelling approach, Costa‐Pazo’s team achieved some interesting results.
Beyond the lack of representativeness
Aggregating the datasets as it is done in GRAD‐GPAD v2 not only showed an improvement in mitigating the bias coming from the data but also helped the team understand the bias distribution within the training dataset.
“[This] allowed us to incorporate compensations in the learning process and to improve future dataset captures,” the researchers wrote.
While the study is only one example of how biases in face recognition can be mitigated, it is a relevant one.
In fact, GRAD‐GPAD v2 addresses the lack of representativeness in state‐of‐the‐art works and moves a step towards fair evaluations between methods.
“[It does so] not only from the perspectives of the different instruments used to perform the attacks but also considering realistic settings in production,” the paper concludes.
Of course, these findings are used in Alice Biometrics developments with the aim of providing secure and unbiased biometric data.
If you would like to learn more about our technology, contact us!
This publication has been financed by the Agencia Estatal de Investigación DIN2019-010735 / AEI / 10.13039/501100011033
|
OPCFW_CODE
|
get accessors and this typing
TypeScript Version:
nightly (1.9.0-dev.20160429)
Code
interface Foo {
readonly foo: string;
}
const foo = {
get foo(this: Foo): string { // <- A 'get' accessor cannot have parameters.
return 'foo';
}
};
Expected behavior:
Ability to type this without errors.
Actual behavior:
Errors given when trying to type this in a get accessor.
While this example is nonsensical, this typing on other get accessors would be a useful feature, and since this parameters are erasable, it isn't as though the accessor has an actual parameter.
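For context (an illustrative aside, not part of the original report): a this parameter exists only in the type system and is erased from the emitted JavaScript, so allowing one on an accessor would not change its runtime arity.

// TypeScript source:
function greet(this: Window, message: string): void {
  console.log(message);
}
// Emitted JavaScript; the `this` parameter is gone:
// function greet(message) { console.log(message); }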
Accessors are not checked for this references at use sites; only call expressions are. So the defined this will only be used inside the accessor body. Is this the intention?
//cc: @sandersn
So the defined this will only be used inside the accessor body. Is this the intention?
Correct.
I made an intentional decision to disallow this on accessors because I was not sure of the story and it didn't seem that useful. However, we can discuss this again at the next design meeting.
Some details that need to be decided:
Should this be checked on access? (As Mohamed points out, this would be a new code path.)
How is assignability checked? (I don't know the normal rules for getters, so I'll have to check.)
Does the this-type of a getter/setter pair have to match?
A potentially more "useful" use case:
interface Foo {
value: number;
readonly asString: string;
}
const foo = {
value: 0,
get asString(this: Foo) {
return String(this.value);
}
};
Essentially, because contextual this typing in object literals is removed (for now), there is no way to get IntelliSense in accessors.
I know it isn't a totally fair comparison, but it is allowed this way:
Object.defineProperty(foo, 'asString', {
enumerable: true,
get(this: Foo) {
return String(this.value);
}
});
As for point 3, my opinion is that it would be logical, since the value passed to the setter and the return type of the getter have to match.
Also note that if you have --noImplicitThis turned on, you are forbidden from using this in accessors inside object literals, because you have no way to annotate them:
// @noImplicitThis: true
interface Foo {
value: number;
readonly asString: string;
method(this: Foo, n: number): void;
}
const foo: Foo = {
value: 0,
get asString() { // sorry, contextual type does not include `this`
return String(this.value); // and you cannot (currently) annotate it yourself
},
method(n) { // ok, contextual type includes `this`
console.log(n + this.value);
}
};
Another note from reading the spec:
If one accessor has a type annotation and the other does not, then behave as if both have the type annotation.
I went ahead and tried it out. I'll put up a PR once I clean up the code a bit.
It turns out that, of my concerns, (1) and (2) are red herrings -- there's no way to tear off properties like there is with methods. And as @kitsonk points out, (3) is easily extended from the spec's requirements that parameter types are identical.
So I think it doesn't actually need much discussion. It's a pretty simple addition and it turns out people actually do want it. :)
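To illustrate the tear-off point above (a hedged sketch, not from the original thread): a method can be detached from its receiver and later called with the wrong this, but reading a property always goes through the receiver, so there is no escaping function value whose this could go unchecked.

const obj = {
  value: 1,
  method(this: { value: number }) { return this.value; },
  get prop(): number { return 42; },
};

const torn = obj.method; // the function value escapes its receiver...
// torn();               // ...so calling it here would mis-type `this` (an error under --noImplicitThis)
const v = obj.prop;      // only the resulting number escapes, never the getter itself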
I am not getting the read-only property at runtime. I am sending the PageContext object from the client, but the Node web server (Express.js) request does not receive the read-only properties.
Let me know how to resolve this issue.
export class PageContext {
public PageSize: number;
public PageNumber: number;
public get Limit(): number {
return this.PageSize === 0 ? AppConfig.DefaultPageSize : this.PageSize;
};
public get Offset(): number {
return (this.PageNumber - 1) * this.PageSize;
};
}
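A likely explanation, added here as an aside rather than quoted from the thread: get accessors are emitted on the class prototype, and JSON.stringify serializes only an object's own enumerable data properties, so Limit and Offset never reach the wire.

const ctx = new PageContext();
ctx.PageSize = 10;
ctx.PageNumber = 2;
JSON.stringify(ctx); // returns '{"PageSize":10,"PageNumber":2}'; Limit and Offset are dropped

One workaround is to serialize the derived values explicitly, e.g. JSON.stringify({ ...ctx, Limit: ctx.Limit, Offset: ctx.Offset }), or to recompute them on the server from PageSize and PageNumber.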
|
GITHUB_ARCHIVE
|
April 1, 2013 | Written by: LindaMay Patterson
IBM Redbooks recently published the IBM Redguide “IBM SmartCloud: Building a Cloud Enabled Data Center.” If you are getting started in cloud computing, this IBM Redguide is for you. The first step for many companies moving to a cloud computing environment is using the infrastructure as a service (IaaS) model. With IaaS, IT services (such as compute, storage and network services) are delivered by subscription through a self-service portal.
The cloud enabled data center adoption pattern discussed in this IBM Redguide is one of the adoption patterns defined in the IBM Cloud Computing Reference Architecture (CCRA).
The Cloud Computing Reference Architecture is a blueprint or guide for architecting flexible cloud computing implementations and is based on IBM’s years of experience in creating and implementing cloud computing solutions with its clients. So what is an adoption pattern? An adoption pattern embodies the architecture patterns that represent the fundamental ways organizations can implement, and are implementing, cloud computing solutions.
The cloud enabled data center adoption pattern contains prescriptive guidance on how to architect, design and implement an IaaS solution. The use cases, details and intricacies of the adoption pattern have been grouped first into micropatterns, and then abstracted into four macropatterns. A macropattern defines the architectural view of the components that implement it. Each macropattern typically stacks (in terms of capabilities and components) on top of the previous macropatterns and provides the base for building the next macropattern. The four macropatterns making up the cloud enabled data center adoption pattern are (starting from the simplest to the most robust):
- Simple IaaS
- Cloud management
- Advanced IaaS
- Information Technology Infrastructure Library (ITIL) managed IaaS
With the cloud enabled data center adoption pattern defined and thoroughly explained, the guide next discusses how a cloud enabled data center solution can be implemented and built to fit and grow with your business needs. The guide identifies the IBM products that provide capabilities essential to creating an IaaS solution. For example, if you are just getting started and want a simple (entry-level) IaaS solution, the products IBM SmartCloud Cost Management and IBM SmartCloud Provisioning are essential. At the other end of the spectrum, if you want a sophisticated managed IaaS solution, the IBM SmartCloud Control Desk is worth investigating.
The table of contents for this IBM Redguide is as follows:
- Executive overview
- Business value of a cloud enabled data center solution
- Capability maturity model for a cloud enabled data center
- IBM Cloud Computing Reference Architecture
- Designing a cloud enabled data center solution
- Implementing a cloud enabled data center solution
- Roadmap to developing the solution
- Example deployment scenarios
So, if you want to understand how to design and implement an IaaS solution to satisfy your needs now and for the future, this guide is a must read.
This Redguide was written by:
- Pietro Iannucci, a Senior Technical Staff Member in Italy specializing in cloud computing solutions.
- Manav Gupta, a Software Client Architect in Canada bringing thought leadership to clients in the telecommunications industry.
- With the assistance of LindaMay Patterson, an IBM Redbooks Technical Writer. She works on publications with IBM technology leaders and innovators.
|
OPCFW_CODE
|
Hello all - I am having problems getting one of my Windows XP boxes to comply with the Anti-Virus and HIPS policy in the Sophos console. Below is a list of what I've tried and been unsuccessful with. This computer is running Windows XP Pro SP3
- Did a reinstall from the console. It reinstalled but still produced the error
- Uninstalled Sophos RMS, AutoUpdate, and Antivirus and then reinstalled
- Uninstalled the application, removed registry entries, application data, and temp files and still no go.
- Moved the computer to an isolated test group with default policies. It showed full compliance, but when I moved it back, the result was that Anti-virus and HIPS differs from policy again.
All other computers in this group are complying with all policies, so there seems to be no group problem. Does anyone have any idea what's going on?
What is the exact error message you are getting?
Are there any other settings on this machine that differ from the rest?
I will assume that you are talking about the HIPS policy and not the complete compliance agent?
For the HIPS portion, I would suggest checking the client machine to ensure that the firewall is turned off, then checking the Sophos client to ensure that it is looking in the right spot for the server and that the update manager has the right credentials, etc. Finally, I would ensure that the server firewall is not blocking anything related to Sophos.
This should give you a good start as far as troubleshooting the problem.
In the console there is no error message other than "Differs from Policy". When I double-click on the machine, the Anti-Virus and HIPS row says "Differs from Policy".
I checked the machine's C:\Temp folder and saw nothing that suggested an error, although if there's a specific log file that I'm not familiar with, please let me know. I also checked alc.log.
I did add Sophos as an exception in Windows Firewall. When that didn't work, I turned off the firewall. The server isn't running a firewall program. I double-checked to make sure it was looking for updates in the right spot, and it was. The update manager has the right credentials, because the virus definitions are updating.
Do you have access to the computer? If so, you could try right-clicking on the shield and selecting "Update Now" to see if that does anything.
Hi, I had the same problem on a few PCs. I uninstalled all the Sophos software via Add/Remove Programs, ran this utility, and then removed the PC from the console. Then I searched for new PCs in the console so the PC would show up again, and tried to install again.
This procedure solved my problems.
@ Wayne - When I run the Sophos update from the machine, it goes through the update process with no error. It still differs from policy.
@ Manuel - I ran the bat file that you sent over and followed your instructions. The bat file runs, but I still get the same result: it differs from the Anti-Virus and HIPS policy.
I'm going to wipe the PC and start from scratch. Thanks everyone.
Did the rebuild fix it? I.e., was it machine specific, or was it an issue in Enterprise Console?
The rebuild fixed the issue. I have a feeling it was machine specific, because I could move it from one group to a test group and the computer would comply with the test group's policies.
Nov 20, 2010 at 4:18 UTC
I've had the same problem on a couple of workstations on the network I look after.
The issue, in our case, was a problem with the scheduled task Sophos creates when it is updated or installed. The following procedure should fix your problem. Carry these steps out on the affected computer:
1. Open Scheduled Tasks from the Control Panel.
2. Change to the Details view.
3. Make sure the Creator column is visible.
4. Copy tasks created by SYSTEM to a backup location.
5. In scheduled tasks delete all the tasks created by SYSTEM.
6. Open a cmd prompt and type 'net stop "Task Scheduler"' to stop the Scheduler service.
7. Navigate to C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\S-1-5-18.
8. Delete all files within that folder.
9. In a cmd box type 'net start "Task Scheduler"'
10. Restart the computer.
11. In the Enterprise Console update the computer. It should shortly be listed as compliant.
12. Delete the backups of the Sophos scheduled task from step 4
13. Copy the other backup tasks from step 4 back to scheduled tasks.
The Sophos task should be re-created in step 11. I think this happens because the keys deleted in step 8 become corrupt; Windows restores them automatically.
I hope this helps.
|
OPCFW_CODE
|
trying to make Main.hs work
Hi,
I have made test/Specs.hs work as expected. The concise expressions are really enjoyable and amazing.
Q1.
I was trying to make Main.hs work. I tried main = runReplica (room ['a', 'b'])
When I press Enter in inputOnEnter, fromJSON seems to work.
But when I press 1, there is an alert:
Internal server error, please reload the page: Expected object
CallStack (from HasCallStack):
error, called at src/Replica/Events.hs:19:31 in concur-control-<IP_ADDRESS>-Dd2
It seems fromJSON can't parse KeyboardEvent of "1".
However, I traced the getDomEvent value for both and ran fromJSON on them in GHCi. Both return Success (...).
I also tried Ctrl, Shift, Alt, Space, a, b, c...
It seems the program can successfully parse keys that don't produce a printable character but fails on keys that do.
Not sure what to do.
Q2.
From test/Spec.hs I learned that shared in Main.hs needs exhaust to execute. But I haven't figured out the relationships between Context, Application, Syn.run, push, exhaust, connectors, etc.
stack.yaml says it's seagreen/replica at commit b265e6edc3e9dadb4011628761cd2642c2d4b34c, but there is no such commit on GitHub.
git checkout 46d2fd3b2c236d573f1d6002eb58cc95b2ba03b9
HEAD is now at 46d2fd3 Initial JS FFI prototype
src/Replica/DOM.hs:40:102: error:
• Couldn't match expected type ‘Replica.Context’
with actual type ‘()’
• In the pattern: ()
In the second argument of ‘($)’, namely
‘\ ()
-> do takeMVar block
modifyMVar ctx $ \ ctx' -> ...’
In the second argument of ‘($)’, namely
‘Replica.app
(defaultIndex "Synchron" []) defaultConnectionOptions id ()
$ \ ()
-> do takeMVar block
modifyMVar ctx $ \ ctx' -> ...’
|
40 | Warp.run 3985 $ Replica.app (defaultIndex "Synchron" []) defaultConnectionOptions Prelude.id () $ \() -> do
| ^^
src/Replica/DOM.hs:41:5: error:
• Couldn't match expected type ‘()
-> IO (Maybe (R.HTML, (), Replica.Event -> Maybe (IO ())))’
with actual type ‘IO
(Maybe (R.HTML, (), Replica.Event -> Maybe (IO ())))’
• In a stmt of a 'do' block: takeMVar block
In the expression:
do takeMVar block
modifyMVar ctx
$ \ ctx'
-> case ctx' of
Just (eid, p, v) -> ...
Nothing -> ...
In the second argument of ‘($)’, namely
‘\ ()
-> do takeMVar block
modifyMVar ctx $ \ ctx' -> ...’
|
41 | takeMVar block
| ^^^^^^^^^^^^^^
git checkout 69249ec760b8b8272d302269b54ddc4531d508a9
Previous HEAD position was 46d2fd3 Initial JS FFI prototype
HEAD is now at 69249ec Skip events on non-suscribed nodes
src/Replica/DOM.hs:58:14: error:
Not in scope: type constructor or class ‘R.Namespace’
Neither ‘Replica.VDOM’ nor ‘Replica.VDOM.Types’ exports ‘Namespace’.
|
58 | el' :: Maybe R.Namespace -> T.Text -> [Props a] -> [Syn HTML a] -> Syn HTML a
I couldn't find the right commit.
|
GITHUB_ARCHIVE
|
Have you ever read (or watched) Dinotopia? It is a book (and TV) series about a land where dinosaurs never went extinct. Not only that, but they also managed to create a civilization where humans and “saurians” live together in relative harmony. What always fascinated me in Gurney‘s work was the idea of reptiles, in this case dinosaurs, manifesting social behaviour paralleling humans’. Unfortunately, reptiles have, in comparison to mammals and birds, been disregarded in vertebrate social behaviour research.
In their review, Doody, Burghardt and Dinets (2013) discuss the reasons behind this neglect. They describe how reptiles have traditionally been placed in the ‘non-social’ category of the ‘social–non-social’ dichotomy. According to them, this dichotomy is too simplistic and therefore deceptive as it fails to represent the variety of social systems in the animal kingdom. In fact, studying reptile social behaviour should help understand the mechanisms and evolution of complex social behaviour. The bias in research towards mammals and birds can be explained by the fact that it is easier to study “vertebrate groups whose communication systems are more salient to human sensory perception” (Doody et al. 2013, p. 96).
Besides, the inconspicuousness of reptiles and their nests creates an apparent absence of social behaviour in these animals, especially parental care. And let us not forget other human originated obstacles, such as the difficulty to get funding for such studies.
For some species, at least, social behaviour is observable as early as the egg stage. For example, pig-nosed turtle Carettochelys insculpta embryos hatch faster when they sense vibrations coming from their siblings. Embryos of Nile crocodiles Crocodylus niloticus can synchronize hatching and stimulate their mothers by vocalizing. Parental care is rather rare, but tuataras Sphenodon punctatus and iguanas stay with the eggs for several days. Hatchling iguanas, which lack parental care, protect themselves through group vigilance.
Crocodilian mothers stay for the whole incubation period and beyond! They excavate the nest and break eggs open, communicate vocally with their eggs and hatchlings, carry hatchlings to water, and feed and protect them. Biparental care, which is the norm in vertebrates like canids and cichlids, has actually been recently documented in crocodilians (Brueggen, 2010, and Whitaker, 2007, cited by Doody et al. 2013).
What about social behaviour beyond parental care? For one thing, snakes, lizards, turtles and crocodilians display conspicuous territoriality visible through the signals, postures and combats of males.
In addition, it is common for some lizards to form large and stable social groups. The ones formed by lizards of the genus Egernia show “kin recognition, inbreeding avoidance mechanisms, parental care, group antipredator behaviors and long-term social and genetic monogamy of up to 20 yr” (Doody et al. 2013, p. 98). Cooperative breeding occurs in broad-snouted caimans Caiman latirostris and other caimans and alligators as they form multi-parental crèches. In any case, much research is necessary to correctly estimate the proportion of reptile species that live in groups.
Cooperative hunting is another example of an advanced behaviour that has rarely been formally documented. As you can see in this BBC video, banded sea kraits Laticauda colubrina are sea snakes that compensate for their slowness by hunting communally.
Alligators Alligator mississippiensis have also been observed feeding cooperatively (Dinets, 2010). They can gather in small areas where water depth does not exceed 50 cm and spend up to 6 hours circling the area and catching fish.
I should mention as well that reptiles have complex mating systems, which include polygyny, polyandry, monogamy and parthenogenesis, accompanied by varied courtship behaviours. Social play, too, has been recorded in crocodilians, lizards and turtles.
Perhaps, in real life, reptiles do not exactly parallel human social behaviour, but they are definitely not ‘non-social’. There is a lot more to learn about them and I am excited for what new information future research will bring.
DISCLAIMER: I am not a professional herpetologist, so I might have made mistakes in identifying the animals presented in the photographs. If you have spotted an error, please feel free to correct me in the comment section.
Dinets, V. (2010). Nocturnal behaviour of the American alligator (Alligator mississippiensis) in the wild during the mating season. Herpetological Bulletin, 111, 4-11.
Doody, J. S., Burghardt, G., & Dinets, V. (2013). Breaking the social–non-social dichotomy: a role for reptiles in vertebrate social behavior research? Ethology, 119, 95-103. doi: 10.1111/eth.12047
Doody, J. S., Stewart, B., Camacho, C., & Christian, K. (2012). Good vibrations? Sibling embryos expedite hatching in a turtle. Animal Behaviour, 83(3), 645-651. doi: 10.1016/j.anbehav.2011.12.006
Symonds, D. (Producer) & Brambilla, M. (Director). (2002/II). Dinotopia [TV series]. Worldwide: Hallmark Entertainment Distribution LLC.
Vergne, A. L., & Mathevon, N. (2008). Crocodile egg sounds signal hatching time. Current Biology, 18(12), R513-4. doi: 10.1016/j.cub.2008.04.011
Vergne, A. L., Pritz, M.B., & Mathevon. N. (2009). Acoustic communication in crocodilians: from behaviour to brain. Biological Reviews, 84, 391-411. doi: 10.1111/j.1469-185X.2009.00079.x
|
OPCFW_CODE
|