
ABAP Development


In some MM (Materials Management) business cases, you need to make all the fields of the commitment item in SAP Change Purchase Requisition (ME52N) read-only based on a condition.

You can configure most fields of the SAP Purchase Requisition (ME52N) to be input-enabled or read-only through SPRO configuration, but SPRO has a limit: it cannot control all the fields on the Commitment Item tab.

For this case I will use an example: a user cannot change commitment item data in a Purchase Requisition (PReq) document after that particular PReq document has reached release status 5 (EBAN-BANPR).

Basically, the commitment item fields in SAP PReq (ME52N) are always input-enabled even when all other fields are read-only, but you can control them by creating an enhancement point.


You need to create an enhancement point located in function group KACB, include program LKACBF01, in the FELDMODIFIKATION_NORMAL subroutine.


As you can see in the picture above, I created an enhancement point and added this code:


DATA: ls_eban TYPE eban.

* Read the Purchase Requisition header for the current account assignment
SELECT SINGLE * FROM eban INTO ls_eban WHERE banfn = cobl-awkey.

* If the PReq has reached release status 5, set every field to read-only
IF ls_eban-banpr = '05'.
  LOOP AT gt_coblf INTO l_coblf.
    l_coblf-input = 0.
    MODIFY gt_coblf FROM l_coblf TRANSPORTING input.
  ENDLOOP.
ENDIF.



Note: you just need to set the INPUT field to 0 (L_COBLF-INPUT = 0) for each row of internal table GT_COBLF.

Then the result is:


Richard Harper

Be Nice Now.....

Posted by Richard Harper Jan 25, 2016

There are some legitimate cases where you may want to wait a short while in your program for something to happen.  One thing I use a wait on is when I have a program running and it encounters a lock.  This is ok when you are displaying a single record - you just tell the user that it's locked.   However,  if you are compiling an editable ALV grid for example you don't want to stop them editing some records because one is locked.


What I do is wait a second or so, and try again. If the record is still locked I wait a little longer, before, after a certain number of attempts, giving up and continuing on with the next record, flagging the failed record as locked.


But when you wait, are you polite, or do you hog the work process?


Instead of using the 'WAIT' statement, use the 'ENQUE_SLEEP' function module. The function module releases the work process for another task to use while you sleep; the WAIT statement does not.
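As a sketch, the retry pattern described above might look something like this. The lock function module name and the key field are invented for illustration (substitute your application's real lock object), and ENQUE_SLEEP's SECONDS parameter is assumed from its standard interface:

```abap
DATA: lv_key      TYPE char10,           " illustrative key value
      lv_got_lock TYPE abap_bool VALUE abap_false.

DO 5 TIMES.
  " Try to obtain the application lock
  " (ENQUEUE_EZ_MYLOCK is a placeholder for your real lock module)
  CALL FUNCTION 'ENQUEUE_EZ_MYLOCK'
    EXPORTING
      keyvalue     = lv_key
    EXCEPTIONS
      foreign_lock = 1
      OTHERS       = 2.

  IF sy-subrc = 0.
    lv_got_lock = abap_true.
    EXIT.
  ENDIF.

  " Record is locked by someone else: sleep politely, giving the
  " work process back to other tasks, then try again
  CALL FUNCTION 'ENQUE_SLEEP'
    EXPORTING
      seconds = 1.
ENDDO.

IF lv_got_lock = abap_false.
  " Still locked after all retries: flag this record as locked
  " in the ALV output and carry on with the next one
ENDIF.
```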



Paul Hardy


Posted by Paul Hardy Jan 25, 2016



The Frozen Hare and the Tortoise


One Sentence Summary


How companies can find their SAP systems frozen waist deep in the ice, unable to pursue the torrent of new business opportunities rushing by - though in fact that sort of change can make you stronger, not freeze you.



I Could Have Been Someone

I Could Have Been a Table of Contents


·         Three Ways to Keep up with the Joneses

·         Why companies are Scared of those 3 things

·         How to come out stronger from these Scary Situations





This Frozen Parrot is Dead


As has been pointed out time and again, the pace of change in society – both technological and otherwise – is getting faster and faster, and when people ask “when is this going to stop” the answer is “it isn’t”.


There are two sides to this coin and it’s a “heads I win, tails you lose” situation. With all this change come new opportunities as well as dangers and if your IT system is “agile” enough then you can bend with the wind and surge ahead of the pack. If you can’t adapt then – as I believe Charles Darwin said in “Origin of the Species” – you will be an ex-parrot, you will cease to be.


So can we in the SAP world use technology to rise to this challenge? In theory – yes. As was obvious at SAP TECHED this year, SAP are not getting left behind here; the pace of change in the ABAP language alone is breathtaking, and of course there is S/4 HANA, whatever in the world that might be.


What’s the Problem then?


However, to take advantage of new technology you often have to upgrade your SAP system to the latest version. And many companies are just too scared to do this. Or even to keep up to date with support packs.


If you can’t upgrade another tactic is to make use of existing technology in your current system that you have not explored yet – like BOPF or BRF+. But many companies are too scared to do that also.


All right then, at the very least you can update your custom programs – change or add extra functionality – to adapt to the changed situation. But many companies are too scared to do that either, on the grounds it generally stuffs up existing functionality.


The end result is they stay frozen waist deep in the ice, like Satan in Dante’s “Inferno” groaning in despair as they see their competitors rushing headlong into the future, leaving them behind.



Satan frozen waist deep in the Ice: Not Happy Jan


Surely you can’t be serious?


I am serious – and stop calling me Shirley. To make matters worse either people don’t acknowledge their own aversion to change, or more often are all too aware of the need for change but cite the perceived difficulty/complexity of making any sort of change to a running SAP system.


I have two quotes as examples. The first comes from the CIO of a large bank here in Australia. I cannot remember her exact words but she noted she had two different major ERP systems running her business – one was the “spaceman” system which could spin round on a sixpence at the drop of a hat and the live system could be adapted to any new thing that came along in the blink of an eye. The other was a “caveman” system where you could make minor changes once a quarter if you were really lucky.



Spaceman vs Caveman


As I am sure you have guessed, SAP ECC was the “caveman” system. I don’t think things are really as bad as that, but it does not help that all the Solution Manager presentations coming out of SAP last year kept going on about “with our latest advances you can now make TWO releases of your custom code a year!” as if that was the best thing since sliced bread. Surely you would want to make assorted changes to production once a week, and aim for as many major changes as you need, as and when you need them?


The second quote comes from an SAP “poster child” company, by which I mean a company which takes all the latest technology from SAP the week it comes out, and their management get wheeled out at SAPPHIRE keynotes to say how great SAP is.


Here is a quote from an ASUG online article about “Florida Crystal’s Ground-Breaking SAP S/4 HANA Story”.


“Florida Crystals Vice President and CIO Don Whittington, a former ASUG Board Member, talked about what he hoped to realise with the move (to S/4 HANA): “But what we’re looking forward to most (with HANA in-memory computing) are benefits we haven’t even dreamed of yet, similar to what we experienced with cloud technology … I imagine SAP solutions running as fast as Excel, and being as easy to use, and as easy to implement”.


I have read that quote many times


(I checked a month later and it had vanished from the online article I was looking at, though you can still see it here http://www.asugnews.com/article/being-bold-the-benefits-of-pushing-the-edges and here http://www.asugnews.com/article/asug-in-review-2015-s-4hana-reimagines-saps-business-suite )


and as far as I can see what he is saying is it would be lovely if SAP was as fast and easy to use as Excel and he imagines that might possibly happen now he is on S/4 HANA, so obviously the prior ECC system was nowhere near as good as Excel.


That is a bit of back-handed compliment to SAP, and a whopping great compliment to Microsoft. And this is supposed to be one of SAP’s biggest fans – what in the world are their enemies saying? It’s not difficult to spot which is the caveman product and which is the spaceman product here either.


I can imagine all the people who have been saying “get away from spreadsheet hell” all these years hearing the Florida Crystals quote and banging their heads on the ground.


Mr Benn to the Rescue


One person who can turn from a caveman to a spaceman in an instant is Mr Benn. He goes into the hat shop and – as if by magic – a shopkeeper appears, and gives him a hat to wear and suddenly Mr Benn becomes whatever he wants to be.



Mr. Benn can change with the wind


So how can we be like Mr. Benn? Let us recap the three areas I highlighted earlier that companies are unwilling to change for fear of breaking something – the whole system, use of new SAP technology, and custom programs.


Upgrades are Scary

The other day, whilst I was reading about S/4 HANA, my former colleague (from the late 90’s) Mark Chalfen wrote that SAP customers had been “spoilt” by how easy upgrades had been until now. I replied that they weren’t as easy as all that; sometimes the whole project takes a year from get-go to completion. He in turn replied – quite correctly – that this is usually down to the extreme complexity of the current system.


That is the exact situation I am talking about – a really complex existing solution, and even though the technical bit of an upgrade can now be done on the long weekend you want to know what to test to see if it has stopped working. That is why companies like Panaya and Intellicorp sell tools to analyse the system and guide you as to what to test – although the answer is all too often “everything!”


As might be imagined, “the business” don’t want you to stop developing for even one week, let alone a month, and suck people out of their day jobs to do regression testing, when a simple way to avoid both is not to do an upgrade in the first place.


In the past often the only way you could get the upgrade project over the line – and this is a true story I have seen first-hand – was to tell the powers that be that the current version was going out of support and the maintenance fee would go up from 2% to 4% of whatever it is, a huge increase. Then I saw the light bulb go on over the manager’s head and he said “Oh I See! It’s a cost saving – that’s all right then!” After all, the traditional view of IT is that it is just a great big cost, a necessary evil, an albatross round the neck, ALBATROSS! ALBATROSS!, and if you can reduce that cost then great.


So companies would upgrade every five years or so because they had no other choice. That was the only way I personally would ever get access (at work) to the latest SAP ABAP tools, things that I had been reading about for years. In Thomas Jung’s book “Next Generation ABAP Development” the lead character Russell finds himself in just such a situation, being on the receiving end of a long delayed upgrade, and jumps for joy.


Then of course SAP had to go and pull the rug from under my (and Russell’s) feet, and announce that ECC 6.0 was supported until 2025. That’s still almost ten years away – ten years until I can use ABAP in Eclipse in my day to day work. It’s heart-breaking, especially as I spend so much time playing on the latest version of ABAP in my spare time.



2025 – We will all be living on Mars and I can upgrade!


Now you might say – “hang on just a minute you most Foolish of Fools, oh Clown Prince of Foolish Fools, how Foolish thou art, what about the enhancement packs that come out every year or so. You can just pop them into your system with no disruption at all”.


Now some have said that installing an Enhancement Pack is 95% of the effort of an upgrade, and though SAP swears blind if you put one in all the new stuff is dormant so it will have no effect at all, can you really believe this? I personally think it might just be true, but a lot of companies will not take this on blind faith, and so it is “test everything” time again just like with an upgrade. I stress again it is not the technical bit that takes the time, it is the regression testing.


So in some places there is no chance of an enhancement pack ever going into ECC 6.0, as when the next deadline falls it will be time to either leave SAP (not very likely) or move to S/4 HANA.


Even worse, the exact same argument can be used against support stacks – when presented with the testing effort the question comes back: “do you have a specific problem or problem(s) the support stack will solve?” The answer is usually that you don’t, because if you did, you would have installed one or more OSS notes to solve the specific problem(s).


“Nonsense, nonsense!” I hear you cry “Check your facts, oh Stupid One, companies are upgrading all the time!”


I am sure this is true to an extent, but I did notice at TECHED in Las Vegas last year there was a great big crowd to hear Karl Kessler talk about the latest ABAP features in the 7.5 release. He asked for a show of hands to see who in the audience was currently on 7.4, and he seemed quite surprised when only about 5% of the audience put their hands up. Internally at SAP they must have been on 7.4 for what seemed like forever, so it must be easy to forget that the customers are not there yet. I think that explains the debacle which was the release of the 7.40 GUI. Apparently on a 7.4 system the new GUI worked just fine, and on all lower releases it had so many bugs it was Hell on Earth. I was one of the idiots who installed it on the first day.


To summarise: a lot of organisations will not upgrade until there is no other choice, as they are pretty sure some things are going to break, they do not know what, and they would have to expend a vast amount of time and resources to find out what had in fact broken. That all too real possibility is scary.


New SAP Technology is Scary



Come on Dad – it’s obvious how to use this!


I keep coming back to this like a broken record but Graham Robinson hit the nail on the head when he did his blog about how many programmers refuse to use new technology and like to stick to function modules and DYNPRO GUI programs and what have you. After all – it works!




Round about two years ago I was thinking I might want to use BRF+ to store the complicated rules in a major project we were doing. I had thought about it, but since most people did not know BRF+ from a bar of soap it would cause a maintainability problem, so I backed off in the end. I can’t help but wonder what the end result would have been had I gone ahead, as we ended up building a very similar custom framework ourselves.


In the same way I spent the whole of last Tuesday morning making sure all my ABAP unit tests were up to date in a gigantic application I had spent literally two years building. I can tell you now I do not have even 10% of the unit tests written I really wanted, I have been building them up when I had the odd spare second over the two years, and now I have at least the “happy path” fully covered and will know instantly if any future changes break the fundamentals.


Now I would have liked to spend far more time on this but I am all too aware that in many companies a lot of people – especially management – would consider every instant I spent on such activities a waste of time. Luckily not my CIO (because he’s read my book!) but many people would not have such understanding bosses.


The “sad” thing is that doing test driven development with ABAP unit does double the initial development time, and even if that results in 90%+ reduction over the life cycle of the application (because you can change things with impunity, instantly knowing if anything is broken) the first bit is the only thing people consider i.e. the project will not come in on time, the project manager will not get the bonus, and stuff the fact the end result is an unstable application that will haunt the company till the end of time.


The funny thing is that in a recent (2014) survey (Oxford Economics: The 2020 Workforce) employees were asked what they were most worried about in their job. You would think in the unstable world we live in that getting made redundant would be top of the list. However “layoffs” came in at number 5 with 18% (you could vote for more than one).




Number one, with 40% was “becoming obsolete”. The logic presumably being if you are laid off then you can get another job, but if you are obsolete you are in danger of being obsolete forever. Probably most people in the survey were thinking of robots taking their jobs, but can you see the relevance to the SAP world? Where you get laid off and in the next job interview the interviewer starts asking you about bizarre things like CDS Views and SADL and JavaScript? And when you say “I’ve got WRITE statement skills, I’ve got People Skills Damn you!” they just look at you blankly.



Join the Club!


So you end up with the ever popular “rock and a hard place” situation. If you are in an environment where no-one else knows how to use a certain new technology (or even that such a thing exists) then even if it is appropriate for a new business challenge it might not be a good idea to use it as you have a maintainability problem, so you would be hurting the organisation as a whole.


Conversely, if you don’t use it, then – as really the best way to learn something new is by applying it to real life problems – you are hurting yourself, as the Kodak Bear found out.


You could argue the way out of this impasse is to train the whole team in the new whatever-it-is, but then you run into “that costs money/time and there is no need, as everything can be done the way we have always done it” – which is of course true – and not everyone has the time or inclination to learn about new technology in their spare time as a hobby. And if you don’t know what something is all about, then it is scary and its use might break something.


Changing Custom Programs is Scary


“How Now Brown Cow, oh Foolish One, that is just nonsense” I hear you cry, “your Foolishness knows no bounds, everyone changes custom programs all the time!” How true that is, and the danger here is the all too common scenario where an application gets bigger and bigger and more and more complicated over time, until it gets to a stage where it is so complicated that no-one really understands any of it, and a one line change in any part of it will always have ripple effects throughout the whole application, breaking something else – seemingly totally unrelated – without fail. That’s scary.




Now here is a little story and you have to guess if I am making it up or not. Once upon a time there lived an application that was vitally important to the business. One routine inside it performed a fairly simple function.


Over the years assorted new business scenarios came along each requiring a little tweak to the logic. Since the routine was so small it was so easy just to add a new branch to an IF statement construct and just cut and paste the five lines of code into each branch, followed by some code to cater for the particular scenario.


Then some more requirements came along, this time applying to every scenario. So the new code was added to every branch of the IF statement, again just a few lines, what does it matter?


Over the years this pattern repeated, a new IF branch here, a new chunk of code to be applied to every branch there. After ten years there were multiple branches all with a very large chunk of code in each branch, with some minor differences between the branches.


A developer, let’s call him Mr. Banana, was doing a peer review and noticed the same change – for a new business requirement – getting added in assorted different places. He expressed his concern to the programmer who had done the change – let’s call him Mr. Pineapple – that this was not only a pain for Mr. Pineapple, as he was doing six times the work he really needed to do, even if it was all “Control C / Control V” stuff; more importantly, what if one day one of the many places the change was needed was forgotten, or even worse, what if the change ended up getting made in a different way in a different place?


The answer was simple – from a technical point of view – abstract the identical code into its own FORM routine or (dare I say it) method. Mr. Pineapple agreed – he saw the problem, and said that the routine had grown over time and each time he went to change it he considered doing such refactoring, but it was always “rush, rush, rush”. Moreover the program worked perfectly, and the new functionality was needed tomorrow morning: a whole bunch of end users were flying in to do some acceptance testing.
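In code terms, the refactoring Mr. Banana had in mind looks roughly like this (all names here are invented for illustration): the lines duplicated in every branch move into a single routine, and only the genuinely scenario-specific code stays behind.

```abap
METHOD process_document.
  " Formerly copy-pasted into every IF branch; now in exactly one place
  apply_common_rules( ).

  " Only the scenario-specific logic remains per branch
  CASE mv_scenario.
    WHEN 'A'.
      handle_scenario_a( ).
    WHEN 'B'.
      handle_scenario_b( ).
    WHEN OTHERS.
      handle_default( ).
  ENDCASE.
ENDMETHOD.
```

A future change to the common rules now happens once, instead of once per branch, and cannot be forgotten in one of them.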


The business analyst Mr. Grapefruit was called in and said “only make the change if the business benefit outweighs the risk”. The problem of course is that the business benefit of such refactoring is, on the surface, non-existent (zero), and the risk of making any change to a program you know works is greater than zero (it’s so easy to break something), so the non-zero chance of breaking the program is obviously bigger than the supposedly zero benefit of making the code “better”.


Mr. Banana could see why it was probably not a good idea to jeopardise the testing scheduled for tomorrow, but as a general principle he quoted from the 1999 book “The Pragmatic Programmer”:


You might want to explain this principle to the boss by using a medical analogy: think of the code that needs refactoring as a "growth." Removing it requires invasive surgery. You can go in now, and take it out while it is still small. Or, you could wait while it grows and spreads – but removing it then will be both more expensive and more dangerous. Wait even longer, and you may lose the patient entirely. – The Pragmatic Programmer: From Journeyman to Master / Andrew Hunt & David Thomas







The analyst Mr. Grapefruit said that he could see the concept, but in this case would rather take the risk of dying from a tumour at an unspecified point in the future than face the certainty of being shot by his boss the very next day if the code that worked today broke tomorrow.



It’s the same deal with the extended syntax check and the code inspector. The argument goes that there is no time to deal with that sort of nonsense, and anyway the code works, so why bother?



This is a point I have mentioned before, again and again like a broken record, but I just can’t stop myself fixing something sub-optimal (an untyped parameter, a variable with a meaningless or misleading name, duplicate code or some such) inside a routine I am working on, following the good old “boy scout rule” as penned by “Uncle Bob”.






That’s not a very common attitude as I understand it – as the slightest change can break something; the usual procedure is to only make a fix after a problem has actually occurred.



As mentioned above that is analogous to never applying Support Stacks to the system, just applying a single OSS note after you have encountered an actual problem in production.



Put yet another way – “cure is better than prevention”. You don’t often hear people say that in any other area of life; it’s always the other way around – so why should it be backwards for computer software?




Too Busy to fix Sub-Optimal Code



The obvious solution to this is to start using ABAP Unit – then you would know the instant your refactoring changes broke anything, but people are too busy for that as well.
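A minimal ABAP Unit sketch of what that could look like – the class under test (ZCL_MY_APPLICATION) and its CALCULATE method are invented for illustration:

```abap
CLASS ltc_happy_path DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS doubling_works FOR TESTING.
ENDCLASS.

CLASS ltc_happy_path IMPLEMENTATION.
  METHOD doubling_works.
    DATA: lo_cut    TYPE REF TO zcl_my_application,  " hypothetical class under test
          lv_result TYPE i.

    CREATE OBJECT lo_cut.
    lv_result = lo_cut->calculate( 2 ).

    " If a refactoring breaks the happy path, this fails immediately
    cl_abap_unit_assert=>assert_equals(
      act = lv_result
      exp = 4
      msg = 'CALCULATE no longer doubles its input' ).
  ENDMETHOD.
ENDCLASS.
```

With even a handful of such tests over the “happy path”, a refactoring that breaks the fundamentals announces itself the moment you run the test class.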



So the duplicate code and unused variables and all their mates will most likely sit there for time immemorial, laughing and singing, due to the fear that you will break something by removing them. Therefore, changing such custom code is scary.



One line Summary of why all this is Scary


We don’t want to change anything in case something breaks. Plus we have always done things the current way, using existing technology; and it works fine.



We’ve always done it this way!


Breaking things is sometimes a GOOD Thing


A few years ago on the SCN I came across the following blog which I found very interesting indeed:-




This was all about a book by Nassim Nicholas Taleb called “Antifragile: Things That Gain from Disorder”. The book was not about software, but the writer of the blog (Vikas Singh) considered that software programming could benefit from the general concepts in the book, and I think he is bang on.


I finally got round to starting reading the book itself the other day. The author of the book has a word for change which is “volatility” and this means the sort of change that breaks things as a matter of course. Interestingly one such cause of dangerous change is noted as time, which of course can knock down mountains eventually. I think the second law of Thermodynamics says “entropy increases” which means that as time goes on things fall apart faster and faster.


In IT world the term for violent change is “disruptive technology” and here “disrupt” also means “break” as in breaking the mould, or breaking the dominance of the technology. So you could say that digital cameras disrupted Kodak’s film technology, no matter how many Muppets they wheeled out to keep promoting it.


When confronted with new technology that threatened to break everything about his business Kodak should have said “Who loves ya Baby?” which I understand was his catchphrase. He could have then gone on to play the villain in a Bond movie.


In SAP world the “volatility” is the stream of constant change requests to change or add functionality, which is supposed to be addressed by the constant change in the technology available to solve these problems.


In the Anti-Fragile book there are three categories of how things respond to change:-


·         They are fragile, and they break, like a toaster being hit with a sledgehammer.

·         They are robust and stay unchanged in the face of stress, like those Tardigrade creatures that end up unchanged by the most violent types of change e.g. death. http://www.bbc.com/earth/story/20150313-the-toughest-animals-on-earth

·         They are anti-fragile, and the more they get harmed by change, the stronger they bounce back, like a Hydra growing seven heads when you cut one off whilst doing your seven labours of Hercules, a common requirement in modern day internships, or maybe getting a horrible disease like Scarlet Fever which causes you a lot of pain when you are a child, and then being immune to the disease thereafter.


It is common in fiction for the story to start with something really bad happening to the key character, and they are forced to do something in response, and at the end of the story end up in a far better position than if the bad thing had never happened in the first place.


You may have seen the film “A Shock to the System” where Michael Caine faces the scary situation of being passed over for a promotion, and so starts murdering everybody in his way à la Richard III until he ends up running the company; and likewise “Wolf”, where Jack Nicholson faces the scary situation of being demoted, and so turns into a werewolf, and once he starts killing and eating people and generally acting like a savage animal his superiors realise that he is ideal for management, and thus he ends up running the company.


Charming as these stories are, I don’t think we need to go quite as far in order to turn our three shocking situations – upgrades, new technology and changing custom programs – into events that work for us and not against us i.e. making our computer system “anti-fragile” to such situations.


Anti-Fragile on an Upgrade Level


On a personal level I love it when the SAP system where I work is upgraded. As I stay up to date via the SCN about all the lovely new goodies in each release, when my system is upgraded I am like a bull in a china shop trying out all the new toys. These days even support stacks add new functionality e.g. the BOPF, so even if there is no upgrade I like it when support stacks go in.


However a common perception is that installing a support stack is like hitting your PC with a sledgehammer and hoping it still works afterwards, and doing an upgrade is like hitting your computer with a sledgehammer, then bathing it in a vat of sulphuric acid, then fishing it out and coating it with plastic explosive and throwing it in a live volcano, then throwing a hydrogen bomb down the volcano after it, and hoping the PC still works afterwards.


I dropped my mobile phone about a foot the other day, and it is not in a good way now, so I can see how fragile some things are. However I personally believe an SAP upgrade is not quite as dangerous as that, having gone through a fair few.


I talked about the negatives earlier, they are so obvious they do not need dwelling on, but I would like to counter that with a few positives.


Some would say that you get loads of new functionality, not only tools for us developer types, but for the primary consumers, which is the actual business, if you can remember them. Others might add that you are paying a trillion billion dollars a year on maintenance fees and what you are supposed to be getting out of that is not only the pretend support, but also the “free” upgrades with all the new functions.


It’s like paying for some cheese up front, and then never eating the cheese. I use cheese as an example because of that book “Who moved my cheese?” where one of the characters says “I’m not sure I’d want to eat new cheese!” i.e. the cheese he had before was the only sort of cheese he would ever want, even when it was there no longer (out of cheese support).


Amazing as it may seem those arguments just don’t seem to work. Here are some more I can think of, which most likely will not work either.


Firstly the more often you do something, the better you get at it. That is why when you are travelling to Mars or something for 18 months your muscles start to waste away in the zero gravity as they have no work to do. If you only do an upgrade every seven years you have more or less forgotten what it is like, but when one department I knew did the “annual BW upgrade” (the technology was changing that fast) it was “ho hum, business as usual”.


Also, if you leave it ten to twenty support stacks before you install them, the delta change will be a lot greater than if you installed them twice (or even once) a year. You’d still have to do just as much testing each time, but fewer things would break, because fewer things would have changed.


The same principle applies with upgrades – I knew one petroleum company who were hanging on for dear life to SAP Version 3.0, and the longer they waited, the more new versions came out, and the more difficult it was going to be when the inevitable happened.




Anti-Fragile on a Personal Level


Once upon a time there was someone who liked to be told exactly what to do, and was very comfortable with what they knew, which consisted of what they had been taught when they were at a large consulting company.


Then, when they were an independent consultant, because all the big consulting companies had gone belly up and made all their SAP consultants redundant, they had to work for companies which wanted SAP experts but would not spend one red cent on training.


Mr. Banana told him that the next task was to develop some custom code inside SAP which would talk to some sort of SAP Enterprise Portal application being created by a web developer.


“I don’t know anything about the Enterprise Portal” he cried, dismayed, “I have never done anything like that before.”


“Oh, you will just have to work it out yourself” said Mr. Banana “I will help – I already managed  to figure a lot of it out by trial and error, and once you get there in the end you will have another thing to add to your CV!”


“I don’t want it on my CV” said the consultant “I never want to use that new scary technology ever again!”


He’s not in IT now. Last I heard he was a newsreader, most likely a lot happier in life, radical new ways of reading the news don’t come along all that often. Reading the weather is different; there are always new ways of doing that, like bouncing on a trampoline or jumping from island to island on a scale model of the UK.


Anyway, the point is, if he had soldiered on with that Enterprise Portal business he would indeed have come out of it with an improved CV. The horrible stressful change of having to learn a brand new technology might have been really painful but he would have come out of it more employable.



Even older people can learn new technology


Anti-Fragile on a Technical Level


Traditionally when we have to go and change a program due to change, it usually falls into the “fragile” category and breaks. If it manages to survive the change without breaking anything then we deem the program “robust” and break out the champagne and balloons, throw a wild party and put “Conga-Longa-Max” by Max Bygraves on the office record player, and dance around the room with joy.



You just can’t go Wronga - Singalongaconga!


We have managed to make a change without breaking anything! Surely life does not get any better than that? This is the maximum we can expect from the software lifecycle process; this is self-evident –isn’t it?


As mentioned above good old “Uncle Bob” thinks not. His position would be that although you have gotten away with it this time (because there are no pesky kids around) all you have usually managed to do is make the program more fragile with extra conditional logic branches and global variables and the like; as Mr. Scott would say “Captain! It’s all held together with bogies and string – the engines, they canna take it no more!”


Instead in the “Boy Scout” quote I alluded to above he (Uncle Bob) suggests it is possible that every change can actually make the program less fragile, and it does not take a ten ton mega-genius with bronchial pneumonia to figure out how. You do not even have to wear a woggle.


As we have seen when confronted with a new disease, provided the end result is not actually death, once the immune system has fought the new disease off the immune system thinks to itself “I will be ready for that next time” and if the same disease comes knocking the next year it gets killed before it reaches Bombay i.e. it does not cause any harm at all.


“Adaptive (or acquired) immunity creates immunological memory after an initial response to a specific pathogen, leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination.” – Wikipedia


Hence the phrase popularised by The Joker “what does not kill me makes me stranger”.


So, pretend for a moment you are the immune system, and the change request you have just successfully completed was the disease. This is not that difficult an analogy as, after all many of the change requests you get from the users / business analysts probably make you physically sick.


There are several questions you have to ask your little lymphocyte self about what just happened:-

  • Can it happen again? (Answer: yes.)

  • What was difficult about how I worked out how to fix the problem?

  • Can I strengthen the defences in some way to be ready for the next time this happens?


For the first question the answer is obvious – if it happened once it can happen again. You also need to ask yourself can something similar happen? Let us say your program presumed a one to one relationship between a sales organisation (VKORG) and a company code (BUKRS) and then the organisational structure changed and that assumption went out the window, and your program fell in a heap.


You can use table TVKO to get the company code for a sales organisation, rather than using the two elements interchangeably as you may have done before. With that problem out of the way you should then look for similar assumptions e.g. an assumption that there is a one to one relationship between a profit centre and a cost centre, or any other silliness which may work at the moment but could break at any time if the configuration or master data changes.
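A minimal sketch of that lookup might read as follows (TVKO really does map a sales organisation to its company code; the message class and starting values are made up for illustration):

```abap
* Derive the company code from the sales organisation via
* customising table TVKO instead of assuming a one-to-one
* relationship between VKORG and BUKRS.
DATA: lv_vkorg TYPE vkorg VALUE '1000',   " example value
      lv_bukrs TYPE bukrs.

SELECT SINGLE bukrs FROM tvko
  INTO lv_bukrs
  WHERE vkorg = lv_vkorg.

IF sy-subrc <> 0.
  " No company code assigned to this sales organisation -
  " complain loudly rather than carrying on with a guess.
  MESSAGE e001(zsd) WITH lv_vkorg.   " hypothetical message
ENDIF.
```

The point is that the relationship lives in customising, so when the organisational structure changes the program keeps working without a code change.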


Another example - I have even seen a program which assumed a purchase order number was nine characters long and started with “45”. Most purchase orders in most companies do start with “45” but most are ten characters long and anyway, field EKKO-BSTYP is a better evaluation mechanism to see if something is a purchase order as opposed to a schedule agreement or something.


The next point is – if the problem happened in the program at hand, and is down to some sort of faulty assumption or whatever, is that same assumption or problem also in other existing programs but no-one has encountered it yet? Perhaps making a pre-emptive strike here is the go, rather than waiting for the users to find the problem?


Even if there is no actual problem but you have found an assumption which can break in the future e.g. one cost centre = one profit centre, would it not be a good idea to prevent that potential problem in every program it could occur? That could, admittedly, be a bit of a hard sell. Immune system cells are lucky – if they find a problem in one part of the body they can travel to every other part of the body ready to guard against that problem, without having to fill in a business case evaluation. This is partly because a single celled organism often cannot pick up a pen to sign the document.


Amazingly some people might say that is a silly analogy – they would spit in my face and say a company doing such pre-emptive work in advance of a problem actually occurring just sucks up time and money that could be spent elsewhere. Well, where is the immune system getting its energy from? The Moon? The non-conscious part of the human organism lets the immune system have all the resources it wants as it is wholly focussed on survival, unlike most companies, where politics is the primary driver and the manager who stops the immune system from functioning can jump out of the patient before it dies, and jump into another host, and get more money and prestige as a result (and in the UK a Knighthood).


How difficult was the fix?


As someone somewhere once said “Can we fix it? Yes we can! Can we fix it? Yes we can!” Generally it is not the end of the world to fix whatever the problem is. However I have found that a fairly large amount of the work involves replicating the problem, and the huge bulk involves locating exactly where things are going wrong, and once you have got to that point fixing matters up is a walk in the park.


As can be imagined if the problem is in the area of Scooby Doo it would be good if the routine / method was called “Scooby Doo” but it often seems that when developers name routines / methods they have a temporary attack of mild insanity and call the routine “DOG_DAY_IS_OVER” making the developer trying to fix the problem ask “Scooby Doo - Where are You?”.


SAP have a horrible habit of putting routines in includes with names like R54RZSD instead of “release credit block” and a lot of programmers have decided that is “best practice” and followed suit.


When I started learning German it made debugging standard SAP programs a lot easier but sad to say a lot of routine names make no sense in either German or English.


The point I am trying to make here is that if it takes you half a day to even find the code where the problem is occurring then next year when a similar problem occurs will it still take you half a day to find it again? Human nature is to forget how your own code works, let alone someone else’s.


The answer is so obvious I was not even going to say it, but just in case – “give routines/methods/variables a meaningful name”.


Stupid and/or joke names are even worse. A new programmer won’t be in on the joke and will have no clue what the silly name represents. This is why a lot of ABAP programmers are often scared of web development – they see a big list of tools you need, all with silly names like GIT, GRUNT, FOSSIL, JENKINS, ANT AND DEC, PINKY PONK, NINKY NONK, IGGLE PIGGLE and UPSY DAISY and cannot guess what the tool does from the name. Not that you could guess what ABAP means, or any of the ever changing forest of SAP acronyms. At least “Service Adaption Definition Language” sounds more grown up than “Ninky Nonk” even if you don’t know what either of them means.


In any event, moving to the opposite extreme, the latest SAP programming model – CDS views – is predicated upon being “as close to conceptual thinking as possible”. This is not a new idea, many academics (for example, Spiderman) have created whole new programming languages, to try and have the program read like plain English.


Furthermore when you have your code reading like English – again as mentioned in all those programming textbooks, such as “The Code Cleaner” and “Dougal and the Blue Cat” – it will be obvious when it is wrong.


IF x > y


IF number_of_hotel_floors > number_of_stars_in_the_sky ….


The syntax check will never work out that this is an impossible situation, and whilst the variable names are meaningless neither will someone reading the code, but as soon as the variable names mean something any human can see what is going on.




The above blog was not about ABAP but the idea is the same – if you name everything in human terms it becomes obvious when the code is wrong in the same way a wrongly structured written or spoken sentence is clearly wrong.


Another quote from Uncle Bob is “we are authors”. Your code is going to get read by real people, and they view things very differently from a machine. Machines are quite happy with every variable being one letter long.


As a – very obscure – analogy, if a man goes into a clothes shop to buy a suit and the assistant says to him “X?” that would most likely puzzle the vast majority of customers.


Instead, in the real world, if the clothes shop happened to be located in the UK, the shopkeeper would say something like “I’m Free!” or “Oh! Oh! Suits You Sir! Suits You! Oh! Oh!” Some would say that’s not much better, but that’s what actually happens.


This might also explain why most men’s clothes in the UK are bought by women on the man’s behalf but anyway if you can stop messing about for just one second and get back to talking about programming the point is that if it has taken me half an hour to track down the routine in a custom program that changes everything and I could not find it because it was called ZCD4356 then after fixing the problem I am also going to change the name of the routine to “CHANGES_EVERYTHING” so next time I go looking for it I can find it.


This principle was invented by top programming duo Climie/Fisher (hence the name) and I was only seventeen when I started programming that way. It seems like yesterday.


How to strengthen the defences?


We just talked – at inordinate length – about how naming things sensibly can make life easier the next time we come back to the program i.e. because we did this renaming the very fact there was a problem with the program in the first place made the end result easier to read, understand and thus maintain.


Next we have the situation Mr. Banana and Mr. Grapefruit were arguing about earlier – if you had to make the same change in multiple chunks of identical code then clearly abstracting that code will speed up similar changes in the future, make sure there is no disparity between the way various sections of the program do the same thing, and make the code easier to read to boot.


Another fun thing about duplicate code is how it usually arrives – via cut and paste. You copy a section of code dealing with goats, and then change all the variables to do with goats so they now deal with sheep, but miss one, so the program ends up doing something crazy like reading the goat table and putting the value in the sheep variable. Having the duplicate code in its own method where the things that change are passed in and out as parameters goes a long way towards solving that problem.
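As a sketch of the idea (all names hypothetical, declaration and implementation shown together for brevity), the pasted block becomes one method and the animal-specific part travels in as a parameter, so there is no second copy in which to miss a rename:

```abap
* Hypothetical sketch: the goat and sheep versions differed
* only in which database table they read, so the table name
* becomes a parameter of a single shared method.
METHODS count_animals
  IMPORTING iv_animal_table TYPE tabname
  RETURNING VALUE(rv_count) TYPE i.

METHOD count_animals.
  SELECT COUNT(*)
    FROM (iv_animal_table)   " dynamic table name
    INTO rv_count.
ENDMETHOD.
```

A fix to the shared logic now lands in one place, and the compiler checks the parameter interface for you instead of you eyeballing a copy for missed renames.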


Moving along, the warnings in the extended program check and code inspector are not just there for the fun of it – as just one example, removing unused variables is good as it frees up a tiny bit of memory, but more importantly makes the program easier to read. In ABAP 7.40 and above those unused variables should be a lot rarer, as you often declare variables inline at the point of first use.


Moreover ABAP in Eclipse has automated tools to help you with those two examples – abstracting out code and removing unused variables.


Another one is an assumption that there will only ever be one material type for widgets and suddenly there are two, so you replace a hard coded value with a range that is read from a customising table.
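As a sketch (the customising table ZWIDGET_TYPES and the material type values are made up), the hard-coded check becomes a range filled at runtime:

```abap
* Replace a hard-coded IF lv_material_type = 'ZWID' with a
* range read from a customising table, so a second widget
* material type is just a new table entry, not a code change.
DATA: lt_widget_types  TYPE RANGE OF mtart,
      ls_range         LIKE LINE OF lt_widget_types,
      lt_mtart         TYPE TABLE OF mtart,
      lv_mtart         TYPE mtart,
      lv_material_type TYPE mtart VALUE 'ZWID'.   " example value

SELECT mtart FROM zwidget_types INTO TABLE lt_mtart.

LOOP AT lt_mtart INTO lv_mtart.
  ls_range-sign   = 'I'.
  ls_range-option = 'EQ'.
  ls_range-low    = lv_mtart.
  APPEND ls_range TO lt_widget_types.
ENDLOOP.

IF lv_material_type IN lt_widget_types.
  " it is a widget
ENDIF.
```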


I could go on all day – the point is that every time you fix a hard coded problem, instead of making it worse – longer routines, more conditional logic – you can make it better.



No Risk = No Progress?




SAP sometimes gets the reputation of a “caveman” system, not able to respond to change in an “agile” manner. Where this occurs it cannot be laid at the door of the company SAP but rather the fear that some companies (people) have of encountering pain when something breaks as a result of changing anything other than the bare minimum in the system.


This takes (at least) three forms, and in each case you can make the pain work for you, thus making your system more “anti-fragile” i.e. something that benefits from the grief caused by the changes needed to keep up with a world that spins faster and faster each year.


One – changes don’t get more painful than an upgrade or even a support stack. However doing such an exercise makes you come out stronger as the more often you do them (a) the better at them you will become and (b) the smaller the delta change will be so the less will break.


Two – leaving your comfort zone and learning a new technology seems so painful it can make developers want to run and hide behind the sofa, but you come out of it stronger as you are more employable.


Three – constantly cleaning up your custom programs as you are forced to change them due to business requirements is a risk as if such a non-mandatory change breaks something your boss will shoot you with a machine gun and that is painful. However using this technique (cleaning your code, not shooting people with a machine gun) makes your custom programs stronger as they are easier to understand and thus maintain the next time a problem comes along, and moreover they are less likely to break and have that problem in the first place.


That’s All Folks


Most of the points in this blog I have covered before at some stage, though not combined in this fashion.


In my personal situation I am allowed to clean up the custom code to make it more robust as time goes by, but the prospect of an upgrade is many years away, and support stacks are not very likely on the application side.


I would be interested to know – have you managed to persuade the higher-ups that an SAP upgrade is a good thing, and as a result it actually happened? If so, what arguments did you find worked?


Cheersy Cheers





Update: I have a second blog about this subject: A second way to get the new ABAP editor in LSMW. That second blog describes a more complex but more flexible solution with even fewer limitations. The method in the current blog is still valid as it is easier to implement.

SAP LSM Workbench (LSMW) has for years been a much-used tool for data migration into SAP. It is both loved and hated :-) Although LSMW had its last update in 2004, it remains widely used in this age of more modern toolsets such as SLT, SAP DS and the like.


For many frequent and hard-core users of LSMW, a big nuisance is the old-style ABAP editor. This old editor takes up a lot of development time, especially in those ABAP-rich LSMW projects.


One night, bored and out of beer, I managed to develop a relatively simple enhancement that enables the new ABAP editor for LSMW.

(Mangled code completion context list is thanks to Windows 10 & a 3K screen)

Compare that with what you have been working with for the last decades:




Features:

  • New ABAP editor for all ABAP coding within LSMW (field mappings, events, user defined routines and form routines)
  • Code completion
  • Use of the Pretty printer
  • Use of the ABAP Syntax checker
  • Use of ABAP patterns
  • No Modification required, just a single implicit enhancement spot
  • Fix of a small LSMW bug where the wrong line is highlighted when doing a syntax check in the __GLOBAL DATA__




Limitations:

  • Code completion is not aware of globally defined variables
  • A few, more exotic, editor menu commands are not working and will return 'Function not implemented'
  • The use of Edit-->Find/Replace issues a warning and will eventually cause a short dump (but who needs this function eh?)


The enhancement

The implementation of the new ABAP editor takes just one single Implicit enhancement spot. No modification or any other unwanted hacking! It has been tested on an ECC 606 system with LSMW version 4.0.0 (2004) and SAP Basis 731/02.
Update: Also tested on a brand new ERP 740 SP12 on a 742 kernel with HANA DB underneath.


  1. Create an Implicit enhancement spot (how-to) at the start of Subroutine EDITOR_START of Function group /SAPDMC/LSMW_AUX_080

  2. Paste in the code attached to this post & activate.

  3. Create a user parameter ZLSMW_NEWEDITOR in SE80 (how-to: scroll all the way down). Assign the parameter with value 'X' to each user that wants to use the new editor. All other users will not be affected.

  4. Temporary solution to prevent lines > 72 char getting chopped off (as reported by Cyrus below):
    - Open SE80 and go to Utilities --> settings
    - Check the flag 'Downwards-Comp. Line Lngth(72)' and save.

    This will draw a thin red line at position 72 in the editor and will auto-fit lines when they go over 72 chars. This will be a global setting so also your SE38 will now look like this. See my second blog A second way to get the new ABAP editor in LSMW for a permanent solution for this shortcoming.

  5. Start LSMW!
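Inside the enhancement spot, the per-user switch from step 3 can be evaluated roughly like this (a sketch only; the actual enhancement code is the one attached to the post):

```abap
* Sketch: leave the enhancement immediately unless the current
* user has opted in via user parameter ZLSMW_NEWEDITOR.
DATA: lv_neweditor TYPE xfeld.

GET PARAMETER ID 'ZLSMW_NEWEDITOR' FIELD lv_neweditor.
IF lv_neweditor <> 'X'.
  RETURN.   " user keeps the old editor
ENDIF.
```

Because the check is against a user parameter, rolling the new editor out (or back) is a matter of user maintenance rather than a transport.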



Give it a try and inform me of any bugs. As stated above, not all user commands work. All the important ones do, and most of the others I have managed to catch and issue a friendly 'not implemented' message for. Have a look at the second blog A second way to get the new ABAP editor in LSMW if you want to get rid of these limitations.

Łukasz Pęgiel


Posted by Łukasz Pęgiel Jan 20, 2016

I was lately trying to find an HTML WYSIWYG editor for ABAP, but I failed. I thought either this was not needed so far, or the solution had not been posted anywhere. So I tried several times and, thanks to NICEdit and this thread on SCN, I found a way to make an HTML WYSIWYG editor for ABAP.


My editor uses CL_GUI_HTML_VIEWER to display NICEdit in a container, and then, thanks to the POST method, puts the changes back into SAP. The ZCL_HTML_EDITOR class raises an event whenever someone clicks the save button in the editor, so you can easily handle it and use the new HTML for your purposes. The video below shows a demo of the usage.
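The display side of such an editor boils down to something like the following (a sketch, not the actual ZCL_HTML_EDITOR code; the screen is assumed to contain a custom control named 'HTML_AREA', and the real editor HTML comes from the file attached to the linked article):

```abap
* Sketch: load an HTML page into CL_GUI_HTML_VIEWER.
DATA: go_container TYPE REF TO cl_gui_custom_container,
      go_viewer    TYPE REF TO cl_gui_html_viewer,
      lt_html      TYPE TABLE OF char255,
      lv_url       TYPE char255.

CREATE OBJECT go_container
  EXPORTING
    container_name = 'HTML_AREA'.   " custom control on the screen

CREATE OBJECT go_viewer
  EXPORTING
    parent = go_container.

APPEND '<html><body><div contenteditable="true">edit me</div></body></html>' TO lt_html.

* Hand the HTML to the viewer and display it
go_viewer->load_data( IMPORTING assigned_url = lv_url
                      CHANGING  data_table   = lt_html ).
go_viewer->show_url( url = lv_url ).
```

Catching the POST back from the browser page is where the interesting part of the real class lives; see the demo program in the download for that.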




A nugget (NUGG) with the class and demo program, plus the HTML file with the editor, can be found here: http://abapblog.com/articles/tricks/103-wysiwyg-html-editor-in-abap




In the SAP sample ALV report BCALV_GRID_01, when the output is downloaded in spreadsheet XXL format, colors appear for the key rows and column headings in a default color.


In this blog we are going to see how to change those default colors to user-defined ones.


Go to transaction SE38, execute the report BCALV_GRID_01. Click on Export --> Spreadsheet --> XXL --> Table --> Microsoft excel.









The output below has default colors in the rows and columns as below.




These colors are defaulted (hard-coded) in the system, as shown in the report below.




The system looks for user-defined colors in table COLORSEXC; if there is no user default, it uses the default color codes 12 and 36.


Let's try maintaining the table COLORSEXC to use user-defined colors.


Refer to the link below for the color codes used in Excel.


Color Palette and the 56 Excel ColorIndex Colors


We will use color codes 32 and 34.



Now execute the report again and download it in XXL format as explained earlier; it is downloaded with the user-defined colors maintained in the table.



A few days back we found a serious problem regarding translation of the labels of all the forms we were working on. The pressure came from the functional team. As per their requirement, they wanted to update a table (a constant table) and have the changes reflected in the Adobe forms. Things then got a little challenging. One of my seniors created a method to fetch all the translation details for any of the forms. We passed the values through the context of the forms. She created a structure with 200 fields, so anybody can pass a maximum of 200 fields to the structure for translation. For our forms the number 200 was a little high. Anyway, after that we followed the steps below and successfully displayed the captions of the fields dynamically as per the system language.



this.resolveNode("this.caption.value.#text").value = data.Page1.Translations.FIELD_001.rawValue ;


FIELD_001 is the field which contains the translation, and Translations is the subform in which we wrapped the fields that contain translations.


Now the problem begins. Some of the field texts needed to look like the cases shown below.



Case A: For this case the caption needs to be right-aligned. The initial spaces may vary for different languages.

  • First calculate the total number of characters that can fit in field A. Say it is 50 characters.
  • Then create a global field named 'i' and set its value to 50 spaces: go to Edit -> Form Properties -> Variables, click '+' and enter 50 spaces as the value. The number may change for other fields.
  • Then write the following code.

var label = 0;
var i_val = 0;
var i_len = 0;
var label_len = 0;
var space_len = 0;
var space_app = 0;

label = data.Page1.Translations.FIELD_001.rawValue;

i_val = String(i.value);                        // the global field full of spaces
i_len = i_val.length;                           // length of the spaces
label_len = label.length;                       // length of the label
space_len = Number(i_len) - Number(label_len);  // number of spaces to prepend to the label
space_app = i_val.substring(0, space_len);      // the actual required spaces (substring(0, n) returns n characters)

this.resolveNode("this.caption.value.#text").value = space_app + label;  // concatenate and display the label


Thus we achieved our requirement. Now I thought how to fulfill the next requirement...


Case B: I created another global variable named 'NXT_LINE' and set its value to 'enter'. (That means for the value I pressed only the Enter key.)


Then I wrote the same code as before, just passing NXT_LINE instead of 'i'. In the output it came out like field B.


At last the forms were done. Then I started to write this document. Suddenly my senior came and stood behind me and watched what I was writing.

I realized somebody was behind me and heard a slightly stern voice: "Enough blogging, I have something for you regarding the MIGO tcode"...


If any improvement is needed in this technique then please don't forget to write. The main use of this technique is to display the caption with your required alignment.

Horst Keller

Three ABAP Games

Posted by Horst Keller Jan 18, 2016

Look what I've found.


Under the cloak of "Examples of Expression-Oriented Programming" recent versions of the Example Library of the ABAP Keyword Documentation contain three ABAP games.


Jaw Breaker



Not too demanding but colorful ...



Mine Sweeper



Time-honored and well known ...



2048 Game


A recent smart phone game and a real challenge ...



Wanna play?



(The examples from the 7.50 documentation can rely on 7.50 features. For 7.40, there are 7.40 examples.)


  • If you have a release older than 7.40, you can copy old fashioned 7.00 source code from the attached text file. Maybe you also have to replace the usage of CL_DEMO_INPUT with PARAMETERS then.


Only outside working hours, of course.


Wanna Contribute?


Maybe the patterns used in these games (usage of CL_ABAP_BROWSER etc.) will motivate you to create some other games?





Due to the discussion below (thanks Enno!) I adjusted the HTML output of




in Release 7.50, SP03. You find the new sources in the attached text file.


Hopefully, those run for all combinations of SAP GUI and IE. If not, next round ...

When you write a report you should as a matter of course validate the input that the user has provided to make sure that the request they have made is at least sane.


This validation generally takes place in the "AT SELECTION-SCREEN ON parameter" event where, if the value in the parameter is incorrect in some way and an error message is displayed, the relevant field is highlighted and made ready for input again.  The processing of the report is stopped at that point.


One of the things to check for is that a parameter is populated and this can be done automatically using the 'OBLIGATORY' clause of the ‘PARAMETERS’ statement,  however,  when the selection screen is interactive,  this in itself can cause problems.


An interactive selection screen could for example have a series of radio buttons that define different aspects of the report,  and selecting a specific radio button means that a specific parameter should be used.  Rather than keep all these other fields visible I tend to put a user-command function on the radio button and then in the AT SELECTION-SCREEN OUTPUT event hide those fields that are not relevant to the current radio button state.  With the ‘OBLIGATORY’ parameter set on other fields this process is interrupted when a mandatory field is not populated, leading to incorrect information or fields being displayed.


Also,  I am of the opinion that checks should not happen until the user has made all their entries and has clicked the ‘Online’ button.

Something like this:


At Selection-Screen On p_Matnr.
   Data: l_Count Type i.
   If sy-UComm = c_Ok_Online.
      If p_Matnr Is Initial.
         Message E018.
      EndIf.
      Select Count(*)
        Into l_Count
        From Mara
       Where Matnr = p_Matnr And
             LvOrm = Space.
      If sy-Subrc <> 0.
         Message E007 With p_Matnr.
      EndIf.
   EndIf.


This checks to see if the material number is populated and that it is valid.  If not an error message is issued.


This type of code though has a little ‘Gotcha’.  I don’t know about you,  but If I have to correct an erroneous field,  I make a new entry and then rather than clicking the ‘Online’ button again I hit the ‘Enter’ key.  This then re-validates the fields and processes the report….but hitting the ‘Enter’ key sets sy-ucomm to initial and the validation code is not run.  This means that if I entered an invalid material number again it would not be picked up – as would any other field with an invalid entry.


Now you say “But users would always click the online button” in which case the code would always work,  but I would say “Well – there might be others like myself who do otherwise”.


The fix is simple – keep a copy of the previous function code and if the current code is initial use that instead:


At Selection-Screen On p_Matnr.
   Data: l_Count Type i,
         l_UComm Type UiFunc.
   If sy-UComm Is Not Initial.
      l_UComm = sy-UComm.
   Else.
      l_UComm = g_UComm.
   EndIf.
   If l_UComm = c_Ok_Online.
      If p_Matnr Is Initial.
         g_UComm = l_UComm.
         Message E018.
      EndIf.
      Select Count(*)
        Into l_Count
        From Mara
       Where Matnr = p_Matnr And
             LvOrm = Space.
      If sy-Subrc <> 0.
         g_UComm = l_UComm.
         Message E007 With p_Matnr.
      EndIf.
   EndIf.


Set the value of g_UComm prior to issuing an error message.

This is the second part of my answer to Christian Drumm's question What System Landscapes Setup to use for Add On Development? - and probably more of an answer to the original question than the first part. At the moment, many authors choose to place obscure references to Trifluoroacetic acid (or TFA for short) in their blogs, but since this post will be about rather fundamental aspects, I'd like to choose a different setting.




When thinking about software delivery and usage, it is a good idea to start with a rather simple model of provider and consumer. Separating these roles clearly makes it much easier to describe the requirements and expectations of the parties involved. I'm well aware that in the real world, the distinction between provider and consumer isn't necessarily that sharp. However, any intermediate role definition will likely contain aspects of a pure provider as well as a pure consumer - and, as you'll shortly see, complexity has a way of increasing, even with some simplifying assumptions in place. Also note that I'm leaving SAP out of the picture - one might describe SAP as "infrastructure provider", and that role in itself opens up a whole new universe of complexity.


So, to answer the question: let's assume we are on the provider side for now - the consumer will have to wait for yet another blog.



Make money, what else? To do so, we need to deliver our solution to the consumer, and that solution contains some ABAP-based software - otherwise, following this would be quite pointless. We're not using the software ourselves, we're just providing it to more or less anonymous customers. (This is actually important: it's a good idea to assume you know nothing about your customer's implementation details, since that will save you from dangerous assumptions with unpleasant consequences.)


Many details of the actual system landscape will depend on a number of decisions that are specific to the solution we want to provide. So, before defining the development and delivery process, we need to be able to answer a few questions:

  • What are the dependencies of our solution - thinking in software components? SAP_BASIS certainly, but what else do we need? ERP? HR? CRM? What are the requirements of these dependencies? Which versions of the dependencies will we have to support?
  • Which technologies are we going to use? Will we need to take special provisions for any of these?
  • What kind of release and patch cycle are we planning for? How many concurrent major versions will we have to maintain? (Hint: Choose an odd number below 3 if possible.)
  • What non-technical external factors exist that will have an impact on our release schedule? Will we have to follow legal changes and/or deliver on fixed dates to allow our consumers to meet regulatory deadlines? Will our customers have to update regularly anyway because of legal requirements - and if not, can we convince them to do so anyway or will we have to support ancient patch levels of our software?


Obviously, we can't answer these questions in detail for the hypothetical provider we're considering right now, but the implications of each decision should become clear during the course of this discussion.


We won't be discussing general good practices of software development like unit testing, integration tests, documentation and the like here. These aspects are of course very relevant to producing a stable, maintainable product, but we'll assume our developers know about that and won't need to be reminded time and time again. However, we need to remember that we need a working environment for many of these tasks, and that includes both the correct versions of our dependencies as well as a usable configuration. How will we need to configure the dependencies before we can start building our own software, and what configurations will we need to test the various ways our customers might use the software? Again, we can't answer that for a hypothetical solution, but you get the idea - there are some rather time-consuming activities lurking in the dark of an unanswered question.


For the following discussion, I'll assume that we'll use the Add-On Assembly Kit (AAK) to deliver the ABAP software components. If you're not entirely sure why, you can find my personal view on this topic here. It certainly doesn't hurt to know in detail how the AAK works, but we don't have time for that, so here are the basic ideas you need to understand the following discussion.

  • With the AAK, you deliver software components (think table CVERS, SAP_BASIS). Physically, you'll ship a SAR archive that contains a PAT file which in turn contains the actual package files (starting with SAPK-). If you think of these packages as transports with additional functions on top, that'll do for now.
  • You're free to use transports within the system landscape. At some point, you gather all the stuff you want to deliver in a central location (system) and perform your unit and integration tests (not covered here). Then, you decide what exactly is to be shipped, perform some additional checks and create the package files. This happens using the so-called Software Delivery Composer (SDC) and results in the SAPK files.
  • The package contents are then turned into deliverable PAT files using the Software Delivery Assembler (SDA). In this step, dependencies and other attributes are added to the package files to produce PAT files. The PAT file names consist of the assembling system ID, installation number and a sequential number.
  • The SAR files have to be packaged manually (or using some custom program). These are just archives that can be uploaded and installed more conveniently by the customer.
  • The AAK is able to produce different kinds of packages for different scenarios that our customers might encounter. The most important ones are packages for initial installation (contains the current versions of everything), upgrade (only contains the changes), release upgrades (similar, but including release-specific changes) and patches (bug fixes, no new objects).


But enough of the asides - the question still is not answered: What do you want?


  • If you're a developer, you'll want to spend as much time as possible working on cool new stuff without that pesky delivery infrastructure getting in the way.
  • If you're responsible for testing and support, you'll want a dedicated system for every supported combination of versions and patches (and probably even separate systems for Unicode and NUC as well) for instant testing capability and full coverage of all possible installation scenarios. You'll want to find the ugly bugs before the software is delivered, and if systems need to be broken at all - well, better not use the customer's systems.
  • If you're the CEO, you want as few systems as possible. These things don't come for nothing, you know?
  • If you're a responsible CEO, you'll want whatever is required to deliver the quality and performance the customers request - but nothing more than that.


And if you're the delivery manager, sitting right in the middle...?




Nononono, not so fast. You're responsible for getting our software to the consumer in one piece and without damaging our (bad) or their (VERY bad!) system, so you'd better get your understanding up to scratch first.


AAK-based delivery requires at least one delivery system for each SAP_BASIS release supported. We won't exactly need a separate delivery system for each patch level, but if significant changes happen (think 7.40 SP 05), different delivery systems might be required. That is, unless we can either postpone the installation of that patch in our delivery systems until all customers have that patch level installed (e. g. because it's part of some SP stack that contains legal adjustments) or simply raise the system requirements we impose on our customers. Similarly, if different releases or sufficiently differing patch levels of our dependencies (like ERP or CRM) are required, different test and delivery systems might be a good idea - but more about that later. (Remember the initial questions? As you see, it starts to get fuzzy, and we haven't sketched a single system landscape yet.)


Now what's a delivery system in this context? It's the source system of the technical installation packages - this is where the SDC stuff happens, and the SAPK package files get exported from here. It's usually a bad idea to deliver from the development system - or rather, to actively develop within the delivery system. If we allowed that, we'd probably have to lock out all the developers during a delivery process - or risk someone carelessly breaking the assembly process. Also, a development system usually contains local test objects, internal tools and more often than not some rather imaginative customizing introduced by the developers to reproduce some strange condition or try out a new feature. We don't want that ending up in our installation package, and we don't want anything in our installation package to accidentally depend on any of that stuff (think of a data element /FOO/BAR that is to be delivered, but still depends on a domain in $TMP).


The usual and indeed most simple answer to that is to separate at least the development from the delivery and have different systems for that. A reasonable approach would be to combine the test and delivery environment in one system (but use separate clients for testing - that also allows for different testing environments). Developers develop and release transports as usual, and these transports are imported into the delivery system. Whenever a delivery is required, you'd impose a transport / import freeze on the delivery system, perform the tests and compose the packages, while the developers can keep on coding in their separate development system. This also has the advantage that transports can be used to collect and group the objects that have changed and need to be delivered.


When assembling the PAT files during the delivery process, you will have to enter some additional conditions and prerequisites that will be contained in the delivered software and checked during the installation. It goes without saying that anyone can - and given enough time, will - make mistakes, so it's very sensible to double-check these import conditions by performing a test installation. (It's also mandatory for the certification which is in turn mandatory to get the AAK.) We will also want to check various different upgrade paths (will these fancy data migration tools actually work?) and probably provide an estimate on the total installation time. Now obviously, we can't use either the development system or the delivery system for that, since our software component already exists there and we don't want to break anything. No, we need a "vanilla" system that we can use to perform a test import. This had better be a virtualized system so that we can take a snapshot, perform the test installation and then revert to the saved state. Frequently, we will want to perform at least two test installations (fresh installation and upgrade from the latest version), and in some cases, it might be good to have systems with various releases and patch levels available for testing. These systems are usually not customized at all, and no functional testing takes place there - that should have happened before the delivery to keep the number of delivery iterations small. Also, with the number of installation test systems rising, it becomes a pain in the a...natomy to keep them all updated and customized correctly. These systems are simply targets to verify that we've produced an installable package.


So now that we have a basic understanding of the systems we need, let's see what that landscape will look like:




Now this looks nice and compact, but don't hold your breath - we're not quite finished yet. First off, there's a slightly less-than-obvious problem regarding the maintenance of our solution. Once the delivery of, say, version 3 has been completed and customers are busy installing the new software, the developers in turn will start to work on new features in preparation of version 4. Thus, once transports from D01 to A01 are enabled again, the delivery system won't stay in the state that was used to assemble the delivery for very long. Since the delivery system no longer resembles a hypothetical customer system (and it might even be occasionally broken while development is still ongoing), it is no longer possible to reproduce issues or test fixes with the exact same software that the customer has installed. That's bad, especially if some particularly nasty bug escaped our collective attention during testing. How can we prevent this?


The obvious solution would be to not import any transports until right before the delivery date. However, that's generally considered a bad idea, since lots of issues are usually identified by transporting and importing (e. g. missing dependencies, accidentally $TMP'ed objects and other basic errors), so delaying this is not desirable. Also, sometimes one needs to release transports for various reasons, so a queue of transports might pile up. Having to fix broken transports weeks or months after they have been released might throw a spanner right in our delivery schedule. So what's the alternative if we need the capability to continuously support older versions of our product? Yup, another system - or another line of systems. These maintenance systems will usually be copies of the delivery system that are already fully customized for tests. Every time a new version is released, a copy of the delivery system is used to create a new maintenance system, and once a legacy version is no longer supported, the maintenance system is shut down.


These maintenance systems also have another use. As the saying goes, after the release is before the release - the development continues, new features are invented and transported, virtual trenches are dug and stuff is prepared to be reworked, refactoring is happening all over the place. The development team is happily hacking away, until - a bug. It needs to be fixed - and fast. However, the objects affected by the bug are already in a different state than the one that was delivered, and they are no longer compatible with the old version. And even if that particular part of the software was not affected, the entire product might not be in a usable and testable state right now. Obviously, we can't use our development system to produce the fix. Again, the maintenance system saves the day: The developers can implement a fix in the maintenance system and test it there, and we can deliver a patch from that system. Note that to implement the fix, the customer will either have to import the latest patch, or implement the correction manually - there's no note/correction support for 3rd party products that I'm aware of.




It should be noted that, under certain circumstances, it is possible to work without maintenance systems. Whether we can use this approach basically depends on the stability of our software (or rather the lack thereof) and the time-criticality of the fixes we might need to deliver: If our software stays relatively stable most of the time and/or consumers can tolerate longer wait times, we can trade in development flexibility for system landscape simplicity:




As you can see, during active development and testing, no patches can be released, and customers always need to upgrade the entire software to get the latest patches as well as functional upgrades that they might not be interested in at all. While this might be a perfectly reasonable approach for some situations, it might prove utterly useless for others.


So far, we have only considered one active development stream with occasional fixes to the previous version. Theoretically, it would be possible to release version 1.0 and start maintaining it, then start work on improving (not just fixing!) the software, producing a line of 1.x upgrades, while at the same time starting work on a completely new track, producing version 2.0. This might sound like a good idea, but in practice, it rarely is: The ways to screw up increase exponentially, and it increases development complexity and testing effort a lot - not just for our own development and testing personnel, but also for our customers. This approach rarely provides any business advantage for either the provider (that will be us) or the consumer (that will be the ones that will - or won't - be paying us). A true ?REDO FROM START is rarely needed, and from my experience, we'd be best off treating this situation as "we're starting an entirely new product under a different name that has a different code base and does not depend on the old one". So, no boom.




Pessimistic though this approach may be, it does have its merits. Because - well, we're still not there yet. So far, we have only considered the versioning and maintenance of our own solution. As we've already stated in the introduction, we'll usually need to provide software for more than one target release. To simplify the situation, let's focus on the SAP_BASIS release, since that's what the AAK requires anyway, although the same scheme might also apply for other dependencies. Again, resolving this situation requires the use of multiple delivery systems and installation test systems, but with a different pitch: We're now trying to produce the very same software (including patches) for different platform versions. Simply multiplying the development systems as well will get the developers to launch pointy objects in your general direction, so that's not an option. So - can't we just centralize the development system? We can, but that leads to another question: What's the baseline?


At this point, we need to take a short detour and think about transporting up and down the stream - that is, transporting development objects to a higher or lower target release. Transporting "up" is usually possible and not very problematic - stuff developed on lower releases will usually just work (at least technically) in higher releases. So, a SAP_BASIS 7.02 development system supplying objects to a 7.40 delivery system is usually not a problem. We might therefore choose the lowest target release we need to supply our product for as the baseline. However, this limits developers to relatively old version of the development environment (e. g. it might restrict them from using ADT). While this certainly is not favorable, from my experience, many developers tend to tolerate that if they understand the perils of the other option: Keep the development system on the highest release supported and transport objects downstream. This is not supported officially and liable to break - both technically during the transport process as well as within the coding itself. From a technical perspective, newer releases usually contain more options, larger database tables, more sophisticated tools - and transporting objects made with these new tools that already have some of the new options set and additional fields filled to a lower release that doesn't have these options and fields might have any of a number of undesirable side-effects. As for the coding - imagine a developer using the fancy new operators introduced with 7.40 SP 05 on the development system, only to find out that these won't work on the lower releases. Or imagine someone using a cool new class to create UUIDs, random numbers or some other API that doesn't exist on lower releases. This is not fun at all. SAP actually works that way, developing on the new release and then down-porting functions if necessary, but SAP does have other tools and more sophisticated system landscapes (and a lot more people!) at hand. 
Smaller development shops don't have that option and are usually better off with up-porting, even if that restricts the toolkit available to developers - so we'll go with that option.
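To make the down-porting risk concrete, here is the kind of code a developer on a 7.40 system might write without a second thought. This is a hedged illustration of my own, not from the original text - none of it will even activate on a 7.02 target system:

```abap
" Inline declarations, constructor expressions and COND - all 7.40-only syntax.
" A 7.02 system rejects every one of these lines at activation time.
DATA(lv_count) = lines( VALUE string_table( ( `one` ) ( `two` ) ) ).
DATA(lv_text)  = COND string( WHEN lv_count > 1 THEN `many` ELSE `few` ).

" The same goes for newer classes and APIs that simply
" don't exist on older releases.
```

Develop this on the high release, transport it down, and the import fails - which is exactly why up-porting from the lowest supported baseline is the safer default.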


Now let's consider the software itself. No product is an island - there are always connections with the platform and surrounding components. The question is - does any of these connected components change across releases in a way that forces us to write our software differently on different releases? If that is not the case, if our software stays the same on all target releases, that's good: we only need one development system, supplying transports to multiple delivery systems. At greatest need, small patches required for individual target releases might even be implemented in the delivery system.




However, if this is not possible because the software needs to be adapted substantially depending on the target release, it's a good idea to organize our software to separate release-independent parts from release-dependent ones. In this case, it also pays off to add release-dependent adaptation systems between the central development system and the delivery systems.




Additionally, in order to allow the developers at least access to some of the newer technologies, it might be an option to place a "cut off" point at some release: On any release below that, consumers get maintenance fixes only, but no new features or other active development happens on the legacy releases. This will allow us to raise the baseline release, at least a little.




Finally - we're almost there, hold on - there are two more systems. One that's probably required: As far as I know, you need a Solution Manager installation in order to install and maintain the system landscape. Unfortunately, I know next to nothing about the Solution Manager. What I do know is that it might be beneficial to add yet another system to the landscape that is used to centrally assemble the delivery packages (the SDA step that turns SAPK files into PAT files). There are a number of reasons why that might be a good idea:

  • It's very handy to have a single system that knows about all of the packages, e. g. to populate download portals or other delivery software.
  • If delivery systems get added and removed periodically, package detail information might get lost. If that information is kept in a central location, there's no danger of package data becoming unavailable.
  • All PAT file names contain the SID of the system that created the file. For the customer, it's more consistent if only one system name appears in the delivered files.
  • Package assembly and attribution is somewhat tricky - you wouldn't want the average developer to mess around with the package attributes.

Also, that system might be used as the TMS domain controller, to provide a central user administration (CUA) landscape or to produce the roles that are then distributed throughout the system landscape.


Okay - now that we've collected all of the puzzle pieces, let's put together a system landscape for a single-stream product development. Let's assume that we'll need to apply non-trivial adaptations to the target releases we provide software for, and that we need to maintain and produce patches for the last two versions we released. We'll need to actively support SAP_BASIS releases 7.31 and 7.40 with 7.31 as our baseline, while maintaining the infrastructure to provide patches for our legacy 7.02-based versions. While we're at it, we might already prepare the plans for adding the new 7.50 release to the landscape.




Now isn't this nice? 21 systems and counting, not including the SolMan...


My key points:

  • You need a clear understanding of your customer's expectations and technical limitations to decide on the best possible strategy and system landscape layout.
  • You also need a competent basis administrator (who is NOT billing by the hour!) and/or someone with combined administration and development skills in-house to maintain and optimize the systems.
  • Trivially, you need funding for the system landscape. As far as I know, SAP will license by development user, not by system, but you need the infrastructure anyway.

As a reward, you get the ability to deliver and maintain ABAP products with high quality standards and reliable, verified delivery processes while keeping customer risk and effort relatively low.


As always, careful planning and a lot of thought beforehand are in order:



I've finally finished my work on FALV. You can find the FALV classes in the attachment and the description under the links. But first, let's go through a few points:


  1. Why did I create FALV although SAP provides the SALV classes?

    I know the SALV classes, although I haven't used them often. The main reason was that they don't provide an edit mode. So in the end I've always worked with the cl_gui_alv_grid class, so that whenever users decide that they need one of the fields to be editable, I can do it in a few seconds or minutes.

  2. But there is a way to make SALV editable!

    Yes, I know the solutions of Naimesh Patel (found here) and Paul Hardy (in his ABAP to the Future book) and some other folks for making SALV editable. But in my own opinion, especially when you're at least on 7.40 SP5, making SALV editable is not needed, as you can quickly create an ALV grid that does everything you want. To be clear, the big advantage of SALV is that you can call a grid output of a table in two or three lines, but when you go into the code you'll see it's nothing more than a call of REUSE_ALV_GRID_DISPLAY - a really old FM which in the end uses CL_GUI_ALV_GRID. And yes, I know about the new SALV class for HANA, but that is another story...

  3. Direct reasons

    As I used CL_GUI_ALV_GRID so often, I came up with the idea of writing a class that would make the creation faster, but I never had time to do it at work. You may know how it goes: because of the time pressure you choose to create the report/program/solution the way you've been doing it for years... and then comes another task... and another...

    So I've decided to do it at home... yeah, I'm crazy. But at least some of you can also use it.

  4. Advantages

    • Fast CL_GUI_ALV_GRID creation
    • Replacement of REUSE_ALV_GRID_DISPLAY and REUSE_ALV_GRID_DISPLAY_LVC for simple editable reports, to omit screen creation
    • All events are already handled, and by redefining a method I can use them faster
    • Faster setting of layout and field catalog attributes
    • Easy switching and copying between popup, full screen and container versions
    • Easy toolbar handling (in the grid, and in full screen/popup using a dynamic GUI status & title with ABAP code)
    • One place to handle user commands of the full screen/popup call -> event user_command

  5. Prerequisites

    I've developed this on 7.40 SP8, but it should work on SP5 as well. Sorry for the users below those versions, but I'm so used to the new syntax that I couldn't force myself to code the old way.

    UPDATE: Thanks to Santi Moreno we now have a 7.31 version; you can find it as an attachment to the blog, named ZFALV_V1.1.0.zip. A GitHub repository is also available at https://github.com/fidley/falv , so if you'd like to join us, you're more than welcome.

  6. Source code

      The always-updated source code in NUGG files and examples of usage will be available on my blog -> abapblog.com/falv . Below you can find some example videos of FALV usage.
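For readers who haven't worked with CL_GUI_ALV_GRID edit mode directly: the reason it makes editable fields a matter of minutes (as mentioned in point 1 above) is that the edit flag lives right in the LVC field catalog. Here is a minimal sketch - this is not FALV code, and the names go_grid and the SFLIGHT/PRICE example are just illustrative assumptions:

```abap
DATA: go_grid    TYPE REF TO cl_gui_alv_grid,
      gt_fcat    TYPE lvc_t_fcat,
      gt_sflight TYPE STANDARD TABLE OF sflight.

" Build the field catalog from the DDIC structure
CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
  EXPORTING
    i_structure_name = 'SFLIGHT'
  CHANGING
    ct_fieldcat      = gt_fcat.

" Flag a single column as editable - this is the whole trick
LOOP AT gt_fcat ASSIGNING FIELD-SYMBOL(<fcat>) WHERE fieldname = 'PRICE'.
  <fcat>-edit = abap_true.
ENDLOOP.

CREATE OBJECT go_grid
  EXPORTING
    i_parent = cl_gui_container=>default_screen.

go_grid->set_table_for_first_display(
  CHANGING
    it_outtab       = gt_sflight
    it_fieldcatalog = gt_fcat ).

" Make sure the grid accepts input
go_grid->set_ready_for_input( 1 ).
```

FALV wraps this kind of boilerplate so you don't have to repeat it in every report.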

I really encourage you to try it and give me your feedback on FALV.









Dear everyone,


As a functional consultant, I usually work with functional specs.

My task is to review functional specs.


Also, my favorite dictionary software is GoldenDict, a successor of StarDict - a very popular open-source dictionary program.


So I created two dictionaries for my needs, to validate all the content in documents.

The first dictionary contains table definitions, which give you the table name.


The second dictionary contains field definitions, which give you the field name and some field attributes.


When I press Ctrl+C twice, the definition of the highlighted table/field pops up.

This utility enables me to work on documents without opening SAP GUI to look up field/table definitions.

[Screenshot: field definition popup]

[Screenshot: table definition popup]

You can see the attached file.


If you are using the same dictionary software (StarDict, GoldenDict, or any reader that supports the StarDict format), I would like to share the dictionary files with you.

Follow the link below and download the files.

Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.

In Björn Goerke's keynote speech from Barcelona TechEd 2015, you may recall that he had crash-landed on Mars and was attempting to escape with the help of all of the latest SAP tech available. One of the technologies that we took away from TechEd, and that Björn used in his keynote, was the CDS view.


But what if Björn had crashed on Mars without a HANA DB? What if he hadn't yet upgraded to full-blown HANA, but wanted to prepare for such a future eventuality? That is similar to the situation I find myself in with my current employer. We have crash-landed too, but we are even further out than Mars... and we have no HANA DB. Can we prepare for HANA in advance? Perhaps by using CDS views on our current Oracle DB.


Well, it turns out that the answer is a qualified yes - you can run ABAP CDS views outside of HANA, provided that your ABAP system components are at a high enough level (we are at 7.40, past SP8) and that your underlying DB is too (ours is). From Horst Keller's blog (views come in 2 flavours) we learned that CDS views come in two different implementations:


1) HANA CDS views (you'll never guess, but these ones don't work so well on Oracle); they can only be edited in HANA Studio.

2) ABAP CDS views - these can be defined and edited in Eclipse and do work on Oracle and other non-SAP DBs too!


So then it's time to see if the ABAP CDS views can help me in my escape from my own predicament. Remember, we have crash-landed on Enceladus, which as you can see is a moon of Saturn. This is the view from my room on Enceladus of my broken buggy:



To make it back to Earth we must repair our rover, and for this we need to order a spacesuit from Earth, to be delivered by NASAFEDEX.


To place the order we must be able to construct a view over three tables in our ABAP system: MARA, MAKT and MARC. They contain the details of the spacesuit that we need in order to reach and repair our rover, which can then call our spaceship to escape the dreaded Enceladus!


So without further ado, let's jump straight into Eclipse and build the CDS views to get us the hell out of here!

To create an ABAP CDS view, I simply log in to Eclipse, choose File->New->Other and then choose DDL Source from the list.


The DDL source is a new syntax that lets you define your new Core Data Services view in detail. It is important that the data dictionary name you provide for the view (in the red ring below) is not the same as the ABAP CDS view name (in the green ring below).


Here we select our material, the material description, and also some sizing and extra descriptive data. We then specify in the where clause that we want all of the descriptions in English (as we lost our babel fish in the crash landing), that we are only interested in data from site 0114 (that's the closest site to the European space terminal) and that we want extra-large size 005 (as we ate too much Xmas pud on Enceladus!).

For the low-down on CDS syntax and features, please take a look at Chris's great blog here.



So really the view is defined as a select statement: the join conditions (on matnr above), the fields you want returned (matnr, maktx, ernam, mtart, matkl, size1 and werks) and the where clause.
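For illustration, the DDL source behind such a view might look roughly like this. This is a sketch, not the actual source from the screenshot: the SQL view name ZLOMARCDSV2 and the label are assumptions; only the entity name zlo_mar_cdsvw_2 (used in the ABAP select later in this post), the joined tables and the field list come from the text:

```
@AbapCatalog.sqlViewName: 'ZLOMARCDSV2'
@EndUserText.label: 'Spacesuit data from MARA/MAKT/MARC'
define view zlo_mar_cdsvw_2
  as select from mara
    inner join makt on makt.matnr = mara.matnr
    inner join marc on marc.matnr = mara.matnr
{
  mara.matnr,
  makt.maktx,
  mara.ernam,
  mara.mtart,
  mara.matkl,
  mara.size1,
  marc.werks
}
where makt.spras = 'E'
  and marc.werks = '0114'
  and mara.size1 = '005'
```

Note how the data dictionary name (in the annotation) differs from the entity name, as required.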


Now we need to get the data out of our CDS view and to do this we must write a few lines of ABAP:


REPORT zjsp_cdsview_basic.

* Our class for ordering our spacesuit
CLASS zcl_jsp_space_clothes DEFINITION FINAL.

  PUBLIC SECTION.

    TYPES:
      BEGIN OF tp_spacesuits,
        matnr TYPE mara-matnr,
        maktx TYPE makt-maktx,
        ernam TYPE mara-ernam,
        mtart TYPE mara-mtart,
        matkl TYPE mara-matkl,
        size1 TYPE mara-size1,
        werks TYPE marc-werks,
      END OF tp_spacesuits,
      tp_spacesuits_tty TYPE STANDARD TABLE OF tp_spacesuits WITH DEFAULT KEY.

* It's a factory class, for syntactical convenience (did you know you can create factory classes easily
* with the Ctrl+1 assist options in Eclipse?)
    CLASS-METHODS create
      RETURNING
        VALUE(r_result) TYPE REF TO zcl_jsp_space_clothes.

    METHODS main.

* This attribute, when filled, will contain the spacesuits that we can then order
    DATA:
      mt_spacesuits TYPE tp_spacesuits_tty.

  PRIVATE SECTION.

* Here we define the method that will read our spacesuits from the underlying tables
    METHODS cds_read_space_clothes_np
      EXPORTING
        et_spacesuits TYPE tp_spacesuits_tty.

    METHODS display.

ENDCLASS.


CLASS zcl_jsp_space_clothes IMPLEMENTATION.

  METHOD main.
* read the space suits
    cds_read_space_clothes_np( ).
* display the results
    display( ).
  ENDMETHOD.

  METHOD create.
    CREATE OBJECT r_result.
  ENDMETHOD.

* Here we are calling our ABAP CDS view. Note that the view select is not so different from a regular
* Open SQL select; one difference is that ABAP variables are passed with the @ symbol. There are other
* syntactical differences for more complex selects, but the syntax is very easy to pick up if you are
* familiar with Open SQL.
  METHOD cds_read_space_clothes_np.
    SELECT * FROM zlo_mar_cdsvw_2
             INTO TABLE @et_spacesuits.
    mt_spacesuits = et_spacesuits.
  ENDMETHOD.

* Here we can display our selected data, which will simultaneously be transmitted to NASAFEDEX,
* thanks to an additional ALV control that NASA have kindly added for us (not covered by this blog)...
  METHOD display.
    TRY.
        cl_salv_table=>factory(
          IMPORTING
            r_salv_table = DATA(lv_alvtable)
          CHANGING
            t_table      = mt_spacesuits ).
        lv_alvtable->display( ).
      CATCH cx_salv_msg.
* error handling omitted for brevity
    ENDTRY.
  ENDMETHOD.

ENDCLASS.


DATA gv_runmode TYPE char1.

START-OF-SELECTION.
* Instantiate our class and run our main method
  zcl_jsp_space_clothes=>create( )->main( ).

Now it's just a matter of executing our CDS view in ABAP on an Oracle DB:


So as you can see, it is possible to run CDS views from ABAP with an Oracle DB backend - and thus to escape Enceladus.

In fact I think I hear my spaceship landing to come and rescue me!


What we can take away from all of this is that CDS views can be used from a non HANA DB, just as long as your DB and ABAP version are at a high enough level.

But why would you do this unless you are marooned in space? Well, for one thing, you could create CDS views for heavy-duty select statements in preparation for a future migration to a HANA DB.

During Barcelona TechEd 2015 I was assured that CDS views should perform considerably faster than their equivalent SQL statements even on non-HANA DBs, but more about that in my next blog - A kill to an ABAP CDS View...

That's all for today, if you have any comments or views about any of the above please get in touch with me in the comments section below.

Note: The ASSIGN technique which I have used in this blog is NOT recommended by SAP. Please go through the comments section of ASSIGN – Life made easy before reading this.

Recently I was working on a custom VOFM routine implementation. My functional counterpart was sitting with me and we had configured the routine by entering the key.

Now it was my job to implement the code. I was able to get the values available in the structure KOMKBV2 inside the routine, but he needed the values from other structures (say LIKP) to implement the custom validation.


The functional consultant sitting next to me was very keen now. He wanted to know how to retrieve internal table / structure values which are not directly available from the user exit / BAdI / customer exit interface parameters.


I needed to explain it to him.


So I set a breakpoint in the VOFM routine. We executed the transaction and program stopped at the breakpoint in the routine which I had set earlier.


From the debugger, I started to step back through the call stack to the previous programs. I showed him how to check the values of the internal tables and structures of the previous program which had called the VOFM routine.


I told him that we have the option to use


ASSIGN ('(<program name>)<internal table>[]') TO <field symbol of that type>.


FIELD-SYMBOLS: <fs_likp> TYPE ANY TABLE.

CONSTANTS: lc_likp TYPE char17 VALUE '(SAPMV50A)xlikp[]'.

ASSIGN (lc_likp) TO <fs_likp>.


Phew!!! I was able to read the values he was interested in. But now he started to fire questions at me.


“How do I find all the internal tables which I can use this way? How do I find that???”


I managed to tell him that if a program is loaded and available in the current context, we can access its global variables by the above method. But it is not advisable to read data this way, as SAP has the right to change the logic of the program or the name of the internal table in a future release. But he was reluctant to accept that.


“Okay!! Then give me the list of all programs which are loaded and available in context now. I need to know”


Oh God!! Please save me from him. I honestly didn’t know how to get the details of all the programs that were loaded right then.


Idea..!!! I can activate a trace and execute the transaction, collect all the program names appearing in the trace log and give the list to him. That’s a good idea…!!! But… how do I get the list of internal table names and their values? I cannot get those directly from the trace.

Without much hope, I was going through the options available in the debugger. I noticed an icon named “Replace Tool”. I had hardly ever used it. I clicked on it and got a pop-up with many options grouped as a tree. I expanded the tree node “Special Tools” and found “Loaded Programs (Global Data)”.

[Screenshot: the Replace Tool dialog in the debugger]


I double-clicked it, and the tool gave me everything I wanted. “Thank God… You saved me ..!!!“


It has two tabs: Loaded Programs and Global Data. From the Global Data tab, I was able to get the global variables, internal tables and work area values. And the best thing was, I was able to search with Ctrl+F based on the variable names. What else do I need..!!!



After he left, I did a quick search on this feature on Google. I was amazed to find the below two URLs.





Everything was explained there in detail, and I had never read or bookmarked it. The first thing I did after that was to click on “Follow” and start following the updates on that space.


Thanks for your time to stop by and read this blog.




I came across a problem where the copyright pop-up message during logon appears in English even when the logon language is Chinese.


After investing a lot of time, I found the following way to translate those texts.




These entries are stored in the cluster table DOKTL. Based on the text contents, identify the documentation object in table DOKTL.




The documentation object in this case is "COPYRIGHT_ORACLE". Now, to translate these entries, go to transaction SE63.
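To double-check which text lines the object actually contains, you can read DOKTL directly. This is only a sketch; the ID 'TX' corresponds to the General Texts type used in the SE63 path below, and the original-language key 'E' is an assumption.

```abap
DATA: lt_doktl TYPE STANDARD TABLE OF doktl.

" Read the English text lines of the copyright documentation object
SELECT * FROM doktl
  INTO TABLE lt_doktl
  WHERE id     = 'TX'                 " TX = general texts
    AND object = 'COPYRIGHT_ORACLE'
    AND langu  = 'E'.
```

Running the same SELECT with langu = '1' (ZH) before and after the translation is a quick way to confirm the Chinese lines were actually saved.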

Go to ABAP Objects --> Long Texts (Documentation) --> C6 F1 Help --> TX General Texts



Enter the object name as shown below.


Translate the text as shown below --> Save --> Activate.



Now log on in language ZH and you will find the pop-up message translated as above.


