
Thread: Communication Efficiency

  1. #1
    Join Date
    Sep 2003
    Posts
    2,405
    If the U.S. tax code were converted to ASCII, and that bit pattern were treated as a numeral in a very large number base (each digit position ranging from 0 to (2^n) - 1, where n is allowed to be sufficiently large to contain the bytes needed to present the message), could this tax code, with appropriate a priori agreements about the number base, be transmitted from here to Alpha Centauri (or vice versa) in the form:

    (ID1)X1X2X3...(ID2)Y1Y2Y3... where the IDs identify subtrahends (hexadecimal E) and subtractors (hexadecimal F),

    and the Xs and Ys are limited in range to the hexadecimal values 0 through D.

    If so, huge amounts of data could be transmitted via polarization modulation across space using very few characters, sending many repetitions of the same surrogate message. Eight (or a few more) characters repeated 100 times increases the likelihood of accurate communication. Are others already doing this?

    This looks too good to be true, so I have probably overlooked a major flaw. Has someone thought of an even more efficient method? I am anxious to be enlightened.

  2. #2
    Join Date
    Sep 2003
    Posts
    2,405
    Quite a few new members have joined since I posted this question. Can anyone point out the flaws, if any? Would a duo-hexadecimal (32) base work better for constructing the pseudo-message?

  3. #3
    Join Date
    Apr 2005
    Posts
    1,785
    I don't quite get what you are proposing.
    Please, could you elaborate?

  4. #4
    Join Date
    Mar 2004
    Posts
    17,325
    Or perhaps simplify it?

    I'm guessing you are trying to get something for nothing - without simply removing redundancies, you are trying to do a type of lossless data compression. But, if you have (for example) 256 possible patterns, you need to send enough data to unambiguously represent those 256 patterns. You may be able to do some sophisticated signal modulation, but ultimately, bits are bits. The number base is irrelevant.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  5. #5
    Join Date
    Sep 2003
    Posts
    2,405
    I don't quite get what you are proposing. Please, could you elaborate?
    I'll try. The bits it would take to express all the information in the Encyclopedia Britannica in ASCII code can also be used to express a large number, N, in binary code. N can also be expressed as the difference between (or sum of) an infinite set of pairs of numbers using an infinite set of number bases. If we have a colony in the Alpha Centauri system with which we wish to communicate, we will want to avoid message corruption from noise as well as transmit it quickly. We will further wish to enhance the fidelity of the information transmittal by repeating it several times, using polarization modulation to represent the information content.

    It would take a lot of digits to express N in binary (ASCII), decimal, or hexadecimal bases, and we wish to avoid sending all those bits required to represent such numbers. So, by a priori agreement, teams on each end of the transmission/reception select pairs of numbers, expressed in a common base, from the infinite set available to us, such that either their sum or difference is equal to N. The critical constraint on the selection of the number base is to use only the low-order digits to express each member of the pair. If the base is 2^99 (actual power of 2 to be determined by communication theory experts), then n1*(2^99)^0 becomes the rightmost digit, n2*(2^99)^1 becomes the second digit from the right, etc. The n's are restricted to values 1 through 15 and as few digit positions as can provide the required flexibility. A transmission of the bit pattern that equals 5432, 3344 would be interpreted by the receiving station to add those numbers expressed in the number base, express the answer in binary, and read the ASCII code constituting the data.
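    One way to sanity-check the digit constraint described above (a sketch; the sample message is hypothetical): writing an arbitrary ASCII-derived N in base 2^99 almost never yields digits in a small range like 1 through 15.

```python
# Express N in base 2^99 and look at the resulting digits
BASE = 2 ** 99
n = int.from_bytes(b"an arbitrary ASCII message", byteorder="big")

digits = []
while n:
    n, d = divmod(n, BASE)
    digits.append(d)   # least significant digit first

# The low-order digit is essentially the low 99 bits of N, an
# astronomically large value, far outside the 1..15 coefficient range
print(max(digits) > 15)  # True
```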

  6. #6
    Join Date
    Mar 2004
    Posts
    17,325
    Quote Originally Posted by GOURDHEAD
    It would take a lot of digits to express N in binary (ASCII), decimal, or hexadecimal bases and we wish to avoid sending all those bits required to represent such numbers.
    As I thought, you are trying to get something for nothing. You can remove redundancy, you can use efficient signalling methods, but if you want to send something where there are 256 possible states, you have to send the equivalent of 8 bits. And so on for whatever number of bits you wish to send. No exceptions. If there were, you could have infinite data compression, and reduce the Library of Congress to one bit.
    Last edited by Van Rijn; 2005-Nov-09 at 10:11 AM. Reason: Corrected silly error ...


  7. #7
    Join Date
    Apr 2005
    Posts
    1,785

    Text Compression

    The classical way of compressing text is by finding strings of
    characters that repeat themselves, whether only a few characters
    or complete sentences, replacing these with a numerical code,
    and then adding a table of these codes.
    If you only use the standard letters, you can suffice with using
    a reduced ASCII set, which requires only 7 bits.
    Together with some other tricks, text can be reduced considerably,
    often to less than 20% of its original size.

    For accuracy you could add a control bit to every byte,
    or send the message more than once.

    In general though, text takes relatively little space.
    You could fit thousands of volumes of plain text on a single CD-ROM.
    It is audio-visual material that takes a lot of space.
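    The repeated-strings method described above is what dictionary coders do; a minimal sketch with Python's zlib (a stand-in for the classical scheme, not the poster's exact method) shows the "less than 20%" figure is realistic for repetitive text.

```python
import zlib

# Deliberately repetitive plain text -- the easy case for dictionary coding
text = ("The quick brown fox jumps over the lazy dog. " * 200).encode("ascii")

compressed = zlib.compress(text, level=9)

# Repeated strings are replaced by short back-references, so the
# output is a small fraction of the input
print(f"{len(text)} -> {len(compressed)} bytes")
```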

  8. #8
    Join Date
    Sep 2003
    Posts
    2,405
    As I thought, you are trying to get something for nothing.
    I prefer to think of it as getting a lot out of relatively few bits, but, being the slow learner that I am, I haven't seen where the flaw in my scheme is. Surely someone in the business has thought of this and knows why it won't work.

    Assuming it can be made to work for text, with appropriate modification it will also work for image transmission, where the greatest gain in efficiency will be realized. Also, fiddling with this and similar schemes may alert us to how to search for information from the Advanced Ones, allowing us to convert what appears to be an unusual burst of noise into an intelligent data transmission while invoking polarization demodulation.

    Be kind, at least condescendingly tutorial, show me the flaw.

  9. #9
    Join Date
    Mar 2004
    Posts
    14,547
    I believe your scheme is working at cross purposes with itself. You have some sort of compression scheme for efficiency, and then repeat the compressed message some unspecified number of times for reliability, thereby blowing away any advantage gained in compression. Those two problems should be solved separately, and then integrated to send one entire reliable message in the bandwidth available.

    The base message should be compressed. Compressed, it should have high entropy, and appear to have the characteristics of random noise for high efficiency. I strongly suspect a state-of-the-art compression scheme like LZW will defeat yours -- though I admit I haven't had the interest to analyze yours. If you can show yours is better for typical data, expect wealth to come your way.

    Once you have a compressed message, with natural redundancy squeezed out of it, then for transmission you should add redundancy back with an error-detection-and-correction code, so the message can be reconstructed should some of its bits be damaged in transmission. Analyzing the types of damage you expect, the type and distribution of errors, and deciding what, if any, error rate is acceptable (detected and undetected errors) will help you choose a much better method than just sending the message multiple times. I suppose your solution will be some kind of Reed-Muller code, like NASA uses for deep-space links.
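    The add-redundancy-for-correction idea can be sketched with a toy Hamming(7,4) code (chosen for brevity; real deep-space links use stronger codes like the Reed-Muller family mentioned above): any single flipped bit in a 7-bit block is located and repaired.

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit codeword [p1 p2 d1 p3 d2 d3 d4]."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 = clean
    if error_pos:
        c[error_pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# One flipped bit per block is repaired, with far less overhead
# than sending the whole message again
word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[3] ^= 1                           # simulate channel noise
print(hamming74_decode(sent) == word)  # True
```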
    0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 ...

  10. #10
    Join Date
    Jul 2005
    Posts
    1,005
    Quote Originally Posted by Van Rijn
    If there were, you could have infinite data compression, and reduce the Library of Congress to one bit.
    Well, you can do that, as long as you know in advance that the Library of Congress is one of the two messages that will be sent.

    I'd better duck now...

  11. #11
    Join Date
    Mar 2004
    Posts
    17,325
    Quote Originally Posted by montebianco
    Well, you can do that, as long as you know in advance that the Library of Congress is one of the two messages that will be sent

    I'd better duck now...
    I know you were joking but: Sure, you can activate a message, so you can send a complex plan (by whatever means) and send a one-bit go/no-go signal. But the best you can ever do when transmitting or storing data is to remove all redundancy. If I'm understanding his intent, I think GOURDHEAD was attempting to find an encoding method (not compression method) that could somehow consistently fit a large amount of data into a smaller amount with no loss. And that just isn't possible.


  12. #12
    Join Date
    Jul 2005
    Posts
    1,005
    Quote Originally Posted by Van Rijn
    I know you were joking but: Sure, you can activate a message, so you can send a complex plan (by whatever means) and send a one-bit go/no-go signal. But the best you can ever do when transmitting or storing data is to remove all redundancy. If I'm understanding his intent, I think GOURDHEAD was attempting to find an encoding method (not compression method) that could somehow consistently fit a large amount of data into a smaller amount with no loss. And that just isn't possible.
    That's for sure. If there are 2^X possible messages that can be sent, then X bits are needed. If some messages are more likely than others, those could be encoded with fewer than X bits, and the others encoded with X+1 bits. But no way to get them all below X bits...
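    The counting behind that last point is easy to verify (a trivial sketch; X = 8 chosen arbitrarily):

```python
# There are 2^X distinct X-bit messages, but only 2^X - 1 binary strings
# of length strictly less than X (lengths 0 through X-1), so no lossless
# scheme can give every message a shorter code
X = 8
messages = 2 ** X
shorter_codes = sum(2 ** length for length in range(X))
print(messages, shorter_codes)  # 256 255
```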

  13. #13
    Join Date
    Sep 2003
    Posts
    2,405
    You critics may have a worthwhile point, but remember you're dealing with a slow learner; consequently you are shirking your duty to show me why it doesn't work. I have laid out a simple procedure for accomplishing the sending of a message the size of the content of the Encyclopedia Britannica using only 100 or 200 bits, maybe 1,000 on the outside, where the content is never known a priori. These huge messages will be sent once a week or so, always with content not known a priori by the colonists at AC-A, and they will be receiving similar messages from Earth to advise them of the latest news, cures for diseases, answers to previously asked questions, technology development, etc. If these messages were to be sent in their uncoded form, hundreds of millions, maybe billions, of bits would have to be sent, and, even if we use polarization modulation, the noisy universe, working against such an enormous target over 4.3 years, will make sure it is not received intact.

    My coding system will allow you to send 100 one-thousand-bit identical messages (a much smaller target), and although none may come through intact, I hope comparison of the content of the entire set will allow a correct version to be retrieved. Bit compression and error detection and correction codes should be applied, to the extent they are helpful, to the 100 thousand-bit messages. Standard data pages (either text or pictures) could be inserted every 50 or 100 pages, as indicated by test results and experience, to enhance message veracity (ensuring confidence that it has been received correctly). The format of the message should include a portion that identifies the power of 2 defining the number base, so maximum flexibility can be applied to both code and decode the messages.

    1. Any number can be expressed as a numeral which is the sum or difference of two or more other numbers in a common number base.

    2. Any numeral so defined can be expressed in binary code.

    3. Binary code, using ASCII protocol, can be converted into a string of letters and punctuation symbols.

    4. Letters and punctuation symbols comprise messages.

    What have I not seen? Is there a Nobel prize for communication theory...well, at least trickery?
    Last edited by GOURDHEAD; 2005-Nov-15 at 08:04 PM.

  14. #14
    Join Date
    May 2005
    Location
    N.E.Ohio
    Posts
    20,024
    Quote Originally Posted by GOURDHEAD
    You critics may have a worthwhile point but remember you're dealing with a slow learner and consequently you are shirking your duty to show me why it doesn't work.
    You haven't shown us why you think it does work. You gave us some methods and definitions, but not how it applies to the end product.
    How about a short example, like a short sentence?
    Show us the following: the original sentence, the information being sent, and how it is decoded.
    1. Any number can be expressed as a numeral which is the sum or difference of two or more other numbers in a common number base.
    How does that make it shorter? It takes 8 bits to describe 100, and it takes at least 8 bits to describe any pair of numbers whose difference is 100: the difference involves one number greater than 100 and one less than it, and the greater one needs at least as many bits.
    2. Any numeral so defined can be expressed in binary code.
    So? Computers represent just about anything in binary.
    3. Binary code, using ASCII protocol, can be converted into a string of letters and punctuation symbols.
    Why is ASCII so important? I can invent my own protocol that would be slightly more efficient than ASCII, since it would cover only the needed characters and not some of the rarely used ones. For instance, how about a 6-bit code that allows 64 characters (26 letters, 10 digits, and some assorted symbols)?
    4. Letters and punctuation symbols comprise messages.
    And spaces, and numbers, and pictures, etc.
    What have I not seen? Is there a Nobel prize for communication theory...well, at least trickery?
    I'm sure it would be covered in some other Nobel prize (most likely math).

  15. #15
    Join Date
    Jul 2004
    Posts
    107
    I've finally taken the time to read through Gourdhead's explanation, and I don't quite understand how this method is supposed to work. I am also suspicious (as are others) that it can achieve the claimed data compression. Perhaps we could work through a practical example so as to help with both issues.


    Say we want to transmit this text: "Message"

    That would be 7 characters, each represented by 8 bits. Specifically, the text would be represented as: 01001101 01100101 01110011 01110011 01100001 01100111 01100101

    So then all these bits are appended to each other to create the number N, which in decimal form would be 21,785,119,738,128,229.
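    That appending step is just base-256 place value; a quick sketch (Python here, though any language would do) reproduces the figure:

```python
# Concatenating the seven 8-bit ASCII codes is the same as reading
# the bytes of "Message" as one big-endian 56-bit integer
n = int.from_bytes(b"Message", byteorder="big")
print(n)  # 21785119738128229
```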

    And now this is supposed to be represented somehow as the sum of a pair or pairs of numbers... I think. Here is where I don't understand how the method is supposed to work.

    Gourdhead, could you work through the rest to get the resulting data which would actually be transmitted? Then we could also see how many bits would be required compared to the original 56 bits making up the message.

  16. #16
    Join Date
    Mar 2004
    Posts
    17,325
    Quote Originally Posted by SirBlack
    Gourdhead, could you work through the rest to get the resulting data which would actually be transmitted? Then we could also see how many bits would be required compared to the original 56 bits making up the message.
    This is the crux of the issue.

    Quote Originally Posted by GOURDHEAD
    1. Any number can be expressed as a numeral which is the sum or difference of two or more other numbers in a common number base.

    2. Any numeral so defined can be expressed in binary code.
    How many bits does it require to exactly represent that number? I think you will find there is no savings on average if starting with an arbitrarily chosen number.

    "Infinite compression" (lossless compression beyond the removal of all redundancy) is essentially the information equivalent of the perpetual motion machine. Take a look at the link 01101001 supplied earlier:

    http://en.wikipedia.org/wiki/Information_entropy

    It is physically impossible, but also should be clear if you think about small values. How would you consistently represent 256 values in 7 bits?
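    The 256-values-in-7-bits question has a direct pigeonhole answer (the toy encoder below is hypothetical, purely for illustration):

```python
# Any scheme with 7-bit outputs has at most 2^7 = 128 distinct codewords;
# masking to 7 bits stands in for an arbitrary such encoder
def seven_bit_encode(value):
    return value & 0x7F

codes = [seven_bit_encode(v) for v in range(256)]
# 256 inputs but at most 128 distinct outputs: some pair must collide,
# so the receiver cannot recover every value
print(len(set(codes)))  # 128
```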

    But, if you do find a way to do it, you can make a lot of money, so if you think you have a shot, go for it. There actually was a company in the early '90s that said they could do this, and they got a lot of money from hopefuls, despite many negative comments. They kept delaying their product, and when they did demonstrate something using arbitrary input data ... it produced garbage for output.

    Ah, well.


  17. #17
    Join Date
    Sep 2003
    Posts
    2,405
    So then all these bits are appended to each other to create the number N, which in decimal form would be 21,785,119,738,128,229.
    Very little efficiency can be gained when sending short messages, and I need software (e.g., BASIC) capable of computing a specific answer from a selected pair of numbers. As someone has suggested, a modified ASCII code may increase the efficiency.

    The following, although not providing an exact answer, will, I hope, demonstrate the procedure:

    The decimal number we wish to send is 21,785,119,738,128,229 ("message" in ASCII) requiring 56 bits if not coded as I suggest.

    We could send X59-XY where X is the coefficient of (2^59)^0, 59 designates the power of 2 forming the number base (576,460,752,303,423,488), and Y is the power of 2 designating the number base of the subtrahend. 59 was somewhat arbitrarily chosen to make sure I had a value greater than 21,785,119,738,128,229, and I need computational software to get more specific.

    We could send xxxx0101100100101101xxxxyyyyyyyy using a total of 48 bits when coded as I suggest. Here the x's refer to the coefficient of the power of the number base, which power increases, starting at zero in the rightmost position, from right to left as we normally express decimal numbers. The y's indicate the number base of the subtrahend. Note that 8 bits are used to indicate subtraction (or addition or multiplication), which we could eliminate by a priori agreement and have a savings based on 32 versus 56 bits in this case (a savings of 43%). The very large messages will have savings of 99.8% or more.
    Last edited by GOURDHEAD; 2005-Nov-15 at 08:17 PM.

  18. #18
    Join Date
    Jul 2005
    Posts
    1,005
    Quote Originally Posted by GOURDHEAD
    Very little efficiency can be gained when sending short messages and I need software (e.g., BASIC) capable of computing a specific answer from a selected pair of numbers. As someone has suggested, a modified ASCII code may increase the efficiency.

    The following, although not providing an exact answer, will, I hope, demonstrate the procedure:

    The decimal number we wish to send is 21,785,119,738,128,229 requiring 56 bits if not coded as I suggest.

    We could send X59-XY where X is the coefficient of (2^59)^0, 59 designates the power of 2 forming the number base (576,460,752,303,423,488), and Y is the power of 2 designating the number base of the subtrahend. 59 was somewhat arbitrarily chosen to make sure I had a value greater than 21,785,119,738,128,229, and I need computational software to get more specific.

    We could send xxxx0101100100101101xxxxyyyyyyyy using a total of 48 bits when coded as I suggest. Here the x's refer to the coefficient of the power of the number base, which power increases, starting at zero in the rightmost position, from right to left as we normally express decimal numbers. The y's indicate the number base of the subtrahend. Note that 8 bits are used to indicate subtraction (or addition or multiplication), which we could eliminate by a priori agreement and have a savings based on 32 versus 56 bits in this case.
    I think the trick here is that you can certainly encode some 56-bit numbers with fewer than 56 bits; you just can't encode all of them that way with the same encoding scheme, if each 56-bit number is to be represented uniquely. If all 56-bit numbers are possible, it must be the case that, if some of them are encoded with fewer than 56 bits, others must be encoded with more than 56 bits. If we choose an arbitrary 56-bit number, and your scheme has a distinct representation for every number, then there is just no way to do it with fewer than 56 bits. You have translated a 56-bit number to 48 bits; if we choose an arbitrary 56-bit number, can you always do this? If so, you must be assigning the same encoding to different numbers.

    Now, if some numbers are possible messages that could be transmitted while others are not, it may well be possible to design an encoding scheme which uses fewer bits than the straightforward scheme, and if all numbers are possible but some are more likely than others, then it is possible to design an encoding scheme which uses fewer bits than the straightforward scheme on average. What is the nature of the information to be transmitted? If there is structure/redundancy, it can be done. If any 56 bit number is a possible message, then there must be at least 2^56 distinct encodings, and there must be at least 56 bits in the encoding scheme.

  19. #19
    Join Date
    Sep 2003
    Posts
    2,405
    If so, you must be assigning the same encoding to different numbers.
    Yes, the same encoding process, but with sufficient flexibility to distinguish between "message", "massage", "masters", "misters", "process", etc. Note that the decimal number representing the contents of the Encyclopedia Britannica expressed in ASCII would be very large, 2 to the 200th or 300th power perhaps (wildly guessing), and maybe requiring multiplication of numbers in the transmitted message while not exceeding several hundred bits to transmit. When put into practice, we will no doubt develop/discover shortcuts, primarily through protocols which precisely define a priori agreements. Note that we are not transmitting messages per se; we are transmitting instructions on how to reconstruct messages constructed by a sender a few light years away.
    Last edited by GOURDHEAD; 2005-Nov-15 at 07:07 PM.

  20. #20
    Join Date
    Jul 2005
    Posts
    1,005
    I'm not really sure I'm understanding the last post. The distinction between sending messages and sending instructions on how to construct messages seems artificial to me; if there are 256 messages that could be constructed at the end of the process, then 256 distinct sets of instructions must be sent.

    Now, you also mention the Encyclopedia Britannica; if the intention is that the messages encoded are text, then there is a lot of structure and redundancy to text; some messages (in fact, if we were to do some calculations, almost all) simply make no sense interpreted as text. So with this assumption about the nature of the information being transmitted (the 2^200 or 2^300 numbers are severe overestimates - these are enormous numbers), it would certainly be possible to encode things down to many fewer bits than straight character-by-character encoding. Let us suppose the EB takes 100 million bits in a straightforward character-by-character encoding (I don't know if that is accurate). Do you wish to be able to send any 100-million-bit message at all? Or only those which make sense when interpreted as text? If the latter, probably large gains are possible. If the former, there's just not much that can be done...

  21. #21
    Join Date
    Sep 2003
    Posts
    2,405
    You have translated a 56 bit number to 48 bits; if we choose an arbitrary 56 bit number, can you always do this? If so, you must be assigning the same encoding to different numbers.....Do you wish to be able to send any 100 million bit message at all? Or only those which make sense when interpreted as text? If the latter, probably large gains are possible. If the former, there's just not much that can be done...
    We merely specify which two numbers, expressed as small multiples of the specified power of 2, will, when subjected to the specified operation, yield the 56-bit ASCII message.

    Trying to remember that I am speaking out of ignorance and very much need guidance from an expert communications theorist, I believe we can send any message at all (actually, instructions to reconstruct it), including graphics, provided we design the protocols cleverly and optimize the a priori agreements.

    Another approach, which is more restrictive, is to always use the same very large number base for each operand and the same operator (add, subtract, multiply, or divide), thus reducing the variables to the coefficients of the exponents of the number bases. A standard, easily recognized filler would be appended at the end of the message as part of its construction to get the process to produce the number which, interpreted in ASCII, renders the message accurately. The filler would consist of something the size of I Corinthians 13, repeated until the answer is correct (suffixed with bits as required) when the operands are combined by the specified operators.
    Last edited by GOURDHEAD; 2005-Nov-15 at 07:59 PM.

  22. #22
    Join Date
    Mar 2004
    Posts
    14,547
    Quote Originally Posted by GOURDHEAD
    We merely specify which two numbers, expressed as small multiples of the specified power of 2, will, when subjected to the specified operation, yield the 56-bit ASCII message.
    You assume two such small numbers exist. Why?

    For instance the sum or difference of any two multiples of some specified (natural number) power of 2 must be even. What if the number you are trying to represent is odd?

  23. #23
    Join Date
    Jul 2004
    Posts
    107
    Maybe it's just me, but I still don't understand clearly how you're representing N using X, Y, and some power of 2.

    Quote Originally Posted by GOURDHEAD
    X is the coefficient of (2^59)^0, 59 designates the power of 2 forming the number base (576,460,752,303,423,488)
    First thing, (2^59)^0 is equal to 1, as is anything raised to the power of 0. This can't possibly be what you intended. So I'm taking a guess that the first term is simply X*(2^59)

    Quote Originally Posted by GOURDHEAD
    and Y is the power of 2 designating the number base of the subtrahend.
    Looks like you mean the second term is 2^Y

    So by my interpretation you mean N = X*(2^59) - 2^Y

    Quote Originally Posted by GOURDHEAD
    We could send X59-XY
    But then I look at this and it implies X is used in the second term somehow. So perhaps you meant something more like N = X*(2^59) - X*(2^Y)

    But further on you say

    Quote Originally Posted by GOURDHEAD
    We merely specify which two numbers, expressed as a small multiple of the specified power of 2, that, when subjected to the specified operation, will yeild the 56-bit ASCII message.
    Which seems to say that X and Y are each used as multiples of some power of 2. So maybe the equation should be N = X*(2^59) - Y*(2^59)



    Now everything I've read seems to be some variation on using two terms, each with some coefficient and some power of 2. So I'm going to look at an abstract version of this that covers all of the above equations.

    N = A*(2^B) - C*(2^D)

    There's a major problem with this: N cannot be odd. This problem is still present if you change the subtraction into an addition or multiplication. So that would leave division: N = (A*(2^B)) / (C*(2^D)). Though since B and D are arbitrary, this could be simplified to N = (A/C) * (2^E) where E = B-D. And though I can't state explicitly why, I have a feeling this equation won't be able to do what you want it to do.
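    The even-N objection can be spot-checked exhaustively over small ranges (a sketch; the coefficient and exponent ranges are assumptions in the spirit of the scheme, with B, D >= 1):

```python
# For B, D >= 1, both terms of N = A*(2^B) - C*(2^D) are even,
# so N is always even and an odd target can never be hit
results = [
    A * 2 ** B - C * 2 ** D
    for A in range(16)
    for C in range(16)
    for B in range(1, 8)
    for D in range(1, 8)
]
print(all(n % 2 == 0 for n in results))  # True
```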

    At this point, we need to make sure the math you're trying to use to represent N is valid before we even worry about how many bits would be in the final transmission.

  24. #24
    Join Date
    Mar 2004
    Posts
    14,547
    Quote Originally Posted by SirBlack
    Maybe it's just me, but I still don't understand clearly how you're representing N using X, Y, and some power of 2.
    It's not just you. I've made a couple of stabs at understanding Gourdhead's method but I can't understand what he's saying. His nomenclature and notation are very confusing.

    Gourdhead, any chance you could restate the method? An example would really really help. Maybe that "Message" challenge was too complicated, with numbers too large. Suppose the message you wanted to send was ASCII "i". In binary that is 01101001. (Hey!) For your convenience, 01101001 in hex is 69 and in decimal is 105. That shouldn't swamp you computationally.

    Please show exactly what bits you would send with your compression method, and how the fields of bits you send are to be interpreted by the receiver. (It's OK if your method produces a string longer than 8 bits. Most compression methods are anti-efficient with short data.)

    I think we'd all like to understand what you are proposing. Thanks.

  25. #25
    Join Date
    Sep 2003
    Posts
    2,405
    For instance the sum or difference of any two multiples of some specified (natural number) power of 2 must be even. What if the number you are trying to represent is odd?

    ......First thing, (2^59)^0 is equal to 1, as is anything raised to the power of 0. This can't possibly be what you intended. So I'm taking a guess that the first term is simply X*(2^59)
    I apologize for my lack of skill in presenting this idea clearly; the frightening thought is I may be overlooking a fault that is so obvious to you that you are reluctant to mention it to me.

    I did indeed mean (2^59)^0, which specifies the units position. I get the odd numbers from "the units position", similar to what is done in decimal notation. What you may have overlooked is that the coefficient of the units position [(2^59)^0], as is the coefficient for each numeral position, is in my scheme "overrestricted" by having its coefficients limited to the hexadecimal numbers 0 through D (decimal 13), where "E" and "F" may be needed as delimiters; if not, the position multipliers can be expanded to include "E" and "F". Note that the "units" position in hexadecimal includes decimal numerals 0 through 15. If practice so dictates, I could expand the units position restriction to 31 or 63, but, for now, I prefer the more severe constraint to minimize the number of bits used for transmittal. Remember there is an infinite set of operands which, when combined as directed by the four major arithmetic operators, will result in the desired answer, so the arbitrary restriction placed on the position multipliers will be tolerable. Also, the filler can assist in causing the number to be transmitted to be of the form and size (fine-tuned as) we wish. Obviously we wish to limit it to being used only once, and that for content accuracy verification.

    Maybe it's just me, but I still don't understand clearly how you're representing N using X, Y, and some power of 2.
    The x's and y's indicate the hexadecimal bit positions required to specify the position multiplier. That's why each was repeated to indicate four positions. It might have been clearer had I used x1x2x3x4 and x5x6x7x8. If it is required to expand the position multipliers to 31 or 63, the number of bit positions will increase accordingly.

    In summary, let's assume the colonists (or natives) at Alpha Centauri A wish to send a message to Earth consisting of 10^18 words. That is probably larger than the content of the Encyclopedia Britannica. Such a message coded in ASCII (for example) can be expressed as a numeral in an extremely large number base. This number base can be expressed as an even larger power (than 18) of 2 which, in turn, can be expressed as sums or differences of numbers expressed in the same or other powers-of-2 number bases. Since our position multipliers are overrestricted and we wish to use only the lowest 4 or 5 positions with multipliers limited to 0 through 13 (or 15, 31, or 63), we will probably elect to add several (more than 2) numbers to arrive at the inordinately large (in decimal notation) number that comprises the message. Assume the average word size is 7 characters, as was the case with "message", which expressed in decimal notation is a number greater than 21 million. This suggests using a number base on the order of 10^(18+9) or 10^27, which can be transmitted in less than 1000 bits using my coding system. Hey! It's just arithmetic; we can each add.
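    The first step of the scheme, turning ASCII text into one large integer, can be sketched directly with Python's arbitrary-precision integers. The word "message" and the big-endian byte order are just illustrative choices, not anything specified in the thread:

    ```python
    # Sketch of the first step in the scheme: an ASCII message becomes one
    # large integer.  "message" and big-endian order are arbitrary choices.
    msg = b"message"
    n = int.from_bytes(msg, "big")

    print(n, n.bit_length())

    # Round-tripping recovers the text, so this step loses no information:
    # the integer is simply another spelling of the same 56 bits.
    assert n.to_bytes(len(msg), "big") == msg
    ```

    Whatever decimal value "message" works out to, its information content is still 7 bytes; the conversion itself neither shrinks nor grows the message.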

  26. #26
    Join Date
    Nov 2004
    Posts
    1,124
    Quote Originally Posted by GOURDHEAD
    This suggests using a number base on the order of 10^(18+9) or 10^27 which can be transmitted in less than 1000 bits using my coding system. Hey! It's just arithmetic; we can each add.
    Sorry, you're not going to get 10^27 (of whatever units you're using) to fit in 1000 bits.

    I am totally clueless about what you're trying to describe here. It's been a while since I've played with data transmission or programming, but if I were to take a wild guess at what you're trying to do, it looks like you're trying to create a multidimensional array of some sort.

    Maybe an example, like 01101001 suggests?
    Quote Originally Posted by 01101001
    Gourdhead, any chance you could restate the method? An example would really really help. Maybe that "Message" challenge was too complicated, with numbers too large. Suppose the message you wanted to send was ASCII "i". In binary that is 01101001. (Hey!) For your convenience, 01101001 in hex is 69 and in decimal is 105. That shouldn't swamp you computationally.

    Please show exactly what bits you would send with your compression method, and how the fields of bits you send are to be interpreted by the receiver. (It's OK if your method produces a string longer than 8 bits. Most compression methods are anti-efficient with short data.)

    I think we'd all like to understand what you are proposing. Thanks.
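    The example values in that challenge are easy to verify in any language with a character-code function; a one-liner check in Python:

    ```python
    # Verifying 01101001's example values for ASCII "i":
    # the same number written in decimal, hex, and binary.
    assert ord("i") == 105          # decimal
    assert ord("i") == 0x69         # hexadecimal
    assert ord("i") == 0b01101001   # binary

    print(format(ord("i"), "08b"))  # prints 01101001
    ```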

  27. #27
    Join Date
    Mar 2004
    Posts
    17,325
    Right. Gourdhead, until you provide a real world example with the steps laid out, this is just so much word salad. It makes no sense to me.

    I think that you will find that, if you start with a message with all conventional data redundancy removed, whatever value you end up with at the end of the process - assuming it can be used to accurately reconstruct the original message - will require at least as many bits to represent correctly as the original message did.
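    The counting (pigeonhole) argument behind this point can be made concrete; the 16-bit message length below is an arbitrary example:

    ```python
    # Pigeonhole sketch: there are 2**n distinct n-bit messages, but fewer
    # than 2**n bit strings of any shorter length, so no lossless scheme
    # can shrink every n-bit message.  n = 16 is an arbitrary example.
    n = 16
    messages = 2 ** n
    shorter_strings = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

    print(messages, shorter_strings)  # prints 65536 65535

    assert shorter_strings == messages - 1  # always one message too many
    ```

    Since 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1, there is always at least one message with no shorter codeword available, regardless of how clever the encoding is.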

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  28. #28
    Join Date
    Jul 2004
    Posts
    107
    Ah ha, I finally think I understand, though it took a while even with that last post...

    It sorta comes from thinking about representing N as the addition or subtraction of numbers written in different bases. For instance 2033 (base 10) = 10 (base 2048) - F (base 2048). Of course there's a vast number of ways to do this if any number of digits in any base is allowed. But since the aim is to use a small number of bits in the final message, both the digits and the bases allowed are restricted: only digits 1 through F, and only bases that are a power of 2.

    But this seems rather cumbersome to talk about in terms of digits and bases, so here is where it changes to each term being a power of 2 multiplied by some coefficient. The previous example now becomes 2033 = 1*(2^11) - 15*(2^0).

    Now I'll try to put this into a general form: N = X1*(2^P1) ? X2*(2^P2) ? X3*(2^P3) ? ...
    where each X could be a value from 1 to 15, each P could be a value from 0 to 255, and each ? would be replaced by one of the fundamental arithmetic operations such as addition or subtraction (or maybe multiplication). The ranges for P or X could be enlarged as necessary to deal with very large values of N, but there still have to be some specific restrictions, especially on X. For the sake of discussion, we should probably just stick with the restrictions mentioned above.

    One thing I'm not sure about is what if any restrictions apply to the number of terms allowed in this equation. And I think this may become an important issue. Certainly any possible N can be represented by the equation if any number of terms is allowed. But the final goal is to represent N in a form that requires fewer bits to transmit than the straightforward binary form of N, which means we have a limit of just how many terms can be used.

    What needs to be examined first is just how many terms are necessary to represent an arbitrary N. For values of N near a power of 2 (or a multiple of a power of 2), two terms are sufficient. For instance 65521 = 1*(2^16) - 15*(2^0). However, an N further from a multiple of a power of 2 isn't so easy. I don't think there's a way to represent 65519 with just two terms (keeping in mind the restrictions on X). Finding a three-term representation wouldn't be hard; however, I expect I could find an N less than the previous numbers which would require more than three terms. And perhaps so on for more terms.

    As we deal with larger and larger values of N, particular ranges will require more and more terms to represent properly. I'd love to have some sort of algorithm that could give the minimum number of terms necessary for each particular N, but unfortunately I don't. I still suspect there may be values of N where the representation given by the equation requires more bits to send than the original N does. The question is whether this happens for relatively few values of N, or for enough of them that the method becomes impractical in too many cases.
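    One way to explore the term-count question is a greedy decomposition. This is only a sketch (nobody in the thread specified an algorithm): it peels off the top four bits of what remains at each step, so every coefficient satisfies the 1-to-15 restriction on X, using addition only:

    ```python
    def decompose(n):
        """Greedily write n as a sum of terms x * 2**p with 1 <= x <= 15.

        Each step takes the top four bits of what remains, so every
        coefficient meets the restriction on X.  This particular greedy
        scheme never needs subtraction; it produces one valid
        decomposition, not necessarily the shortest.
        """
        terms = []
        while n:
            p = max(n.bit_length() - 4, 0)
            x = n >> p
            terms.append((x, p))
            n -= x << p
        return terms

    n = 65519  # the awkward case discussed above
    terms = decompose(n)
    assert sum(x * 2 ** p for x, p in terms) == n

    # Cost accounting: 4 bits per X plus 8 bits per P (0..255) per term,
    # versus the 16 bits of sending n directly.
    print(terms, len(terms) * (4 + 8))
    # prints [(15, 12), (15, 8), (14, 4), (15, 0)] 48
    ```

    For 65519 this greedy scheme needs four terms costing 48 bits, three times the 16 bits of the number itself, which illustrates exactly the worry raised above about how quickly the term count eats the savings.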

  29. #29
    Join Date
    Sep 2003
    Posts
    2,405
    Ah ha, I finally think I understand, though it took a while even with that last post...
    Bravo!! A protocol or header, or a combination of both, can eliminate assigning bits to specify the arithmetic operators, and for transmission purposes a 5-bit code may suffice. The composers of the messages to be transmitted can convert the completed message to ASCII, for which that set of bits is the equivalent of a very, very large number which can be expressed as you describe. I am more and more convinced that an algorithm can be developed for a standard number base on the order of 2^300, and that standard filler data can be added to fine-tune the transmitted number so that it lends itself to 5 or fewer positions with multipliers from 0 through 15. Once the composers determine how many times the filler is to be used, it can be inserted every 50 pages or so, as opposed to all at the end, as a means of verifying message fidelity and of error correction in case the universe was particularly unfriendly during that 4.3-year transmission.

    No one has commented on the polarization modulation to which we could add pulse coded modulation. Has someone like SETI looked into how mutually familiar civilizations would communicate across such distances with high fidelity? Am I correct in assuming the universe is less noisy with respect to pulse coded polarization modulated messages? If there are several planets in the Alpha Centauri system, the explorers will have much data to communicate back to Earth and Earth will have to send lotsa data to keep the colonists current with both technology development and personal data from friends and kin.

  30. #30
    Join Date
    Mar 2004
    Posts
    14,547
    Quote Originally Posted by GOURDHEAD
    Bravo!! A protocol or header, or a combination of both, can eliminate assigning bits to specify the arithmetic operators, and for transmission purposes a 5-bit code may suffice. The composers of the messages to be transmitted can convert the completed message to ASCII, for which that set of bits is the equivalent of a very, very large number which can be expressed as you describe. I am more and more convinced that an algorithm can be developed for a standard number base on the order of 2^300, and that standard filler data can be added to fine-tune the transmitted number so that it lends itself to 5 or fewer positions with multipliers from 0 through 15. Once the composers determine how many times the filler is to be used, it can be inserted every 50 pages or so, as opposed to all at the end, as a means of verifying message fidelity and of error correction in case the universe was particularly unfriendly during that 4.3-year transmission.
    I still don't understand. I appreciate and understand SirBlack's notation. I don't understand how Gourdhead uses it, given the comments above.

    ? X1*(2^P1) ? X2*(2^P2) ? X3*(2^P3) ? ...

    (I added a leading ?, because the first term can be added or subtracted, too.)

    Where does the base, like 2^300 above fit in? Is it actually:

    ? X1*((2^300)^P1) ? X2*((2^300)^P2) ? ...

    with Xi a 4-bit quantity, a number in the range 0-15, and the '?' either addition or subtraction. (Of course if Xi is 0, you don't bother to send it or its associated Pi, thereby supposedly gaining efficiency, right?)

    If that is the case, then there are many, many numbers that cannot be generated this way. For instance (and not limited to), any number that when divided by 2^300 leaves a remainder greater than 15 and less than (2^300)-15 cannot be generated by this method. Tell me: how would you represent the number 16 with this method? (Second thought: do the Pi values have to be different? If not, that's a way out -- but it costs efficiency big time. If the Pi values can be identical, then imagine how many bits it would take to represent, for instance, 2^150 as a sum of small values.)
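    That remainder argument is easy to check with a toy version: B = 256 stands in for the unwieldy 2^300 below, and the assumption (as stated above) is at most one term per power:

    ```python
    # Toy check of the remainder argument, with B = 256 standing in for
    # 2**300.  In a sum of terms +/- x * B**p (0 <= x <= 15, distinct p),
    # only the p = 0 term affects the value mod B, so the reachable
    # residues are just {0..15} and {B-15..B-1}.
    B = 256
    residues = {x % B for x in range(16)} | {(-x) % B for x in range(16)}

    print(sorted(residues))
    print(16 in residues)  # prints False: 16 is unrepresentable
    ```

    The same reasoning scales up unchanged to B = 2^300: almost all residues, 16 among them, are simply out of reach.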

    Hey, if I'm not understanding the method, it's probably because you haven't supplied a clear definition of it. If I misstated it above, then please, Gourdhead, define it. I am growing tired of trying to analyze what I think you are claiming.
    Last edited by 01101001; 2005-Nov-17 at 07:51 AM.
    0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 ...
