help with philosophy...


  • #16
    Originally posted by dbjmofo View Post
    again, we program ourselves. we're fully capable of evolving on our own without having to do routine maintenance. computers, on the other hand, can only really do as much as you tell them to. you can't take a computer programmed to work in a car factory, move it to a wheat plant, and expect it to adapt. it's programmed to do what it was intended to do.

    the example the professor used in class was C-3PO and R2-D2 and how they were the ideal example of "strong AI". they were allowed to move around on their own and had a sense of "Free Will".


    I think you're mixing "knowledge" and "learning" just a bit.

    An infant kept completely alone in a sterile environment will never program itself; it will be totally nonfunctional for all intents and purposes.

    And you're limiting your argument to current technology. Yes, a specific robot won't adapt to a variety of work at this point, but in a few years a worker-class machine could be constructed that isn't task-specific. Your question was what could be, not what is.



  • #17
    Do some reading on Gödel machines (Jürgen Schmidhuber's work)....

    "The growing literature on consciousness does not provide a formal demonstration of the usefulness of consciousness. Here we point out that the recently formulated Gödel machines may provide just such a technical justification. They are the first mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers, conscious or self-aware in the sense that their entire behavior is open to introspection, and modifiable. A Gödel machine is a computer that rewrites any part of its own initial code as soon as it finds a proof that the rewrite is useful, where the problem-dependent utility function, the hardware, and the entire initial code are described by axioms encoded in an initial asymptotically optimal proof searcher which is also part of the initial code. This type of total self-reference is precisely the reason for the Gödel machine's optimality as a general problem solver: any self-rewrite is globally optimal (no local maxima!) since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites."

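    Very roughly, the control loop reads like the toy sketch below. To be clear, this is not Schmidhuber's actual construction: a real Gödel machine runs a proof searcher over axioms, while here the "proof" is just a brute-force check that a candidate self-rewrite measurably raises utility, and the utility function and all names are made up for illustration.

    ```python
    # Toy sketch of the Gödel-machine control loop. NOT the real thing:
    # the proof searcher is replaced by a direct check that a candidate
    # self-rewrite raises utility. Names and utility invented for illustration.

    def utility(code):
        """Problem-dependent utility: how close the 'program'
        (a list of ints) comes to summing to 10."""
        return -abs(10 - sum(code))

    def candidate_rewrites(code):
        """Enumerate small self-modifications of the current code."""
        for i in range(len(code)):
            for delta in (-1, 1):
                new = list(code)
                new[i] += delta
                yield new

    def toy_godel_machine(initial_code, max_steps=100):
        code = list(initial_code)
        for _ in range(max_steps):
            # Stand-in for the proof searcher: accept the first rewrite
            # shown to be useful (utility strictly improves).
            rewrite = next(
                (c for c in candidate_rewrites(code) if utility(c) > utility(code)),
                None,
            )
            if rewrite is None:
                break        # no provably useful rewrite exists; stop rewriting
            code = rewrite   # rewrite part of our own "code"
        return code

    print(toy_godel_machine([0, 0, 0]))  # -> [10, 0, 0], utility 0
    ```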


  • #18
    A computer will perform as programmed.



  • #19
    to explain how a computer works, our professor brought up the Chinese Room.

    in this thought experiment there is a man who speaks no Chinese and has never seen it, in a room with an "in" bin and an "out" bin, similar to the "input" and "output" of a computer. also in the room is a Chinese-to-Chinese handbook that tells him what to respond with when a certain sequence of characters comes in. the information coming in is all Chinese. the man looks up the incoming sequence in the handbook and then outputs the sequence associated with it. over time he would get quicker at responding, but only because he recognizes the characters, not because he understands them.

    this example shows how a computer only does simple symbol exchange, without understanding. tying this back to the original question posted in the first post, there is no way you could "teach" a computer to feel frustration, because it won't actually feel it the same way a human does.
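    incidentally, the whole room fits in a few lines of code, which is kind of the point. a minimal sketch, with the handbook entries and phrases invented for illustration:

    ```python
    # The Chinese Room as pure symbol lookup: the "handbook" maps incoming
    # character sequences to outgoing ones. Nothing here understands Chinese.
    # (Entries are invented for illustration.)
    handbook = {
        "你好吗": "我很好",            # "How are you?" -> "I'm fine"
        "你叫什么名字": "我没有名字",  # "What's your name?" -> "I have no name"
    }

    def room(incoming):
        # The man in the room: find the squiggles, copy out the paired reply.
        # Practice makes him faster at recognizing, never at understanding.
        return handbook.get(incoming, "")  # no matching rule -> no reply

    print(room("你好吗"))  # prints 我很好, with zero understanding involved
    ```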

    the argument that this particular philosopher (Searle) makes is that you can only make a strong AI if you make it out of the same stuff as humans, implying that only clones of humans will ever be true strong AI.



  • #20
    Would it work to argue AI versus a learning computer?



  • #21
    Originally posted by Shibby View Post
    Would it work to argue AI versus a learning computer?

    we used learning computers as an example of weak AI. but do share how you'd argue AI against a learning computer.



  • #22
    Just throwing stuff out here... Can intelligence be an "artificial" thing? If an organic material is used to create the learning capacity in something like a cyborg, is it really artificial?

    Can a learning computer ever be anything more if it can't go against the probability factors or the most popular response? Take I, Robot for example: the robot saved Will Smith's character instead of going for the child, and Sonny killed his "father"/creator because he was angry or tricked. Could a learning computer ever have a bond beyond rationality? Even Sonny wouldn't be able to understand risking your life to save a loved one; he would only know that it's done.
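    To make the "most popular response" point concrete, here's a toy sketch. Everything in it (the counts, the choices, the epsilon-greedy rule) is invented for illustration; epsilon-greedy is just one standard way to let a learner occasionally act against its own statistics.

    ```python
    # Toy contrast: a learner locked to its most popular response vs. one
    # that can occasionally defy the odds. Counts and choices are invented.
    import random
    from collections import Counter

    seen = Counter({"save the adult": 11, "save the child": 4})

    def most_popular():
        # Pure exploitation: always return the single most frequent response.
        return seen.most_common(1)[0][0]

    def sometimes_explore(epsilon=0.2):
        # Epsilon-greedy: with probability epsilon, act against the stats.
        if random.random() < epsilon:
            return random.choice(list(seen))
        return most_popular()

    print(most_popular())       # always 'save the adult'
    print(sometimes_explore())  # usually the same, but it can go the other way
    ```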



  • #23
    What's that old book about the robots that take over?



  • #24
    Originally posted by Shibby View Post
    Just throwing stuff out here... Can intelligence be an "artificial" thing? If an organic material is used to create the learning capacity in something like a cyborg, is it really artificial?

    Can a learning computer ever be anything more if it can't go against the probability factors or the most popular response? Take I, Robot for example: the robot saved Will Smith's character instead of going for the child, and Sonny killed his "father"/creator because he was angry or tricked. Could a learning computer ever have a bond beyond rationality? Even Sonny wouldn't be able to understand risking your life to save a loved one; he would only know that it's done.

    in the I, Robot situation you could say that Sonny processed the pros and cons at a deeper level: not just what would be good now, but what would be good in the future.



  • #25
    Ok, so write about you being a woman who can only figure out what she doesn't want to write about. That would qualify as AI :D



  • #26
    Originally posted by Shibby View Post
    Ok, so write about you being a woman who can only figure out what she doesn't want to write about. That would qualify as AI :D

    :rofl:

    priceless response.



  • #27
    what makes us different from robots/computers is that we're capable of reproduction and therefore evolution. Our learning and adaptation lead to further adaptations in later generations. The issue with computers is that their evolution is determined not by their environment and their own reproduction, but by our desires.

    Frustration is an abstract concept which manifests itself as an emotional response to failure. For computers to realize emotional responses, and complex abstract ones at that, would require a semiotic understanding of language that goes beyond the binary code on which they're predicated. Emotions and abstract thought demand that two opposing propositions can be understood as true concurrently. Until a computer can do this, it will not experience frustration.
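    One toy way to see the gap (everything here is invented for illustration; the third truth value is borrowed loosely from paraconsistent/multi-valued logics):

    ```python
    # In two-valued logic, "P and not P" never holds. A third truth value
    # is one way to even represent two opposing propositions holding at
    # once. (Toy encoding, loosely in the spirit of multi-valued logics.)
    from enum import Enum

    class Truth(Enum):
        TRUE = "true"
        FALSE = "false"
        BOTH = "both"   # the state plain binary logic cannot express

    def neg(p):
        # Negation leaves the contradictory state contradictory.
        return {Truth.TRUE: Truth.FALSE,
                Truth.FALSE: Truth.TRUE,
                Truth.BOTH: Truth.BOTH}[p]

    # Binary logic: P and not-P is false for every possible P.
    print(any(x and not x for x in (True, False)))   # False
    # With a third value, a proposition and its negation can both "hold".
    p = Truth.BOTH
    print(p, neg(p))  # Truth.BOTH Truth.BOTH
    ```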



  • #28
    Originally posted by Shibby View Post
    Ok, so write about you being a woman who can only figure out what she doesn't want to write about. That would qualify as AI :D

    oh shibbster.



  • #29
    finished my paper. i had my professor give it a good look over, and he added a lot of funny/good points. if anyone is truly interested (which i'm sure a majority of you aren't) i'll pm you a copy.



  • #30
    *crickets, crickets, crickets*
    Last edited by Shibby; 04-27-10, 04:01 AM.

